Japan Enhances AI Sovereignty With Advanced ABCI 3.0 Supercomputer

Enhancing Japan’s AI sovereignty and strengthening its research and development capabilities, Japan’s National Institute of Advanced Industrial Science and Technology (AIST) will integrate thousands of NVIDIA H200 Tensor Core GPUs into its AI Bridging Cloud Infrastructure 3.0 supercomputer (ABCI 3.0). The HPE Cray XD system will feature NVIDIA Quantum-2 InfiniBand networking for superior performance and scalability.

ABCI 3.0 is the latest iteration of Japan’s large-scale Open AI Computing Infrastructure designed to advance AI R&D. This collaboration underlines Japan’s commitment to advancing its AI capabilities and fortifying its technological independence.

“In August 2018, we launched ABCI, the world’s first large-scale open AI computing infrastructure,” said AIST Executive Officer Yoshio Tanaka. “Building on our experience over the past several years managing ABCI, we’re now upgrading to ABCI 3.0. In collaboration with NVIDIA we aim to develop ABCI 3.0 into a computing infrastructure that will advance further research and development capabilities for generative AI in Japan.”

“As generative AI prepares to catalyze global change, it’s crucial to rapidly cultivate research and development capabilities within Japan,” said AIST Solutions Co. Producer and Head of ABCI Operations Hirotaka Ogawa. “I’m confident that this major upgrade of ABCI in our collaboration with NVIDIA and HPE will enhance ABCI’s leadership in domestic industry and academia, propelling Japan towards global competitiveness in AI development and serving as the bedrock for future innovation.”

The ABCI 3.0 supercomputer will be housed in Kashiwa at a facility run by Japan’s National Institute of Advanced Industrial Science and Technology. Credit: Courtesy of National Institute of Advanced Industrial Science and Technology.

ABCI 3.0: A New Era for Japanese AI Research and Development

ABCI 3.0 is constructed and operated by AIST, its business subsidiary, AIST Solutions, and its system integrator, Hewlett Packard Enterprise (HPE).

The ABCI 3.0 project follows support from Japan’s Ministry of Economy, Trade and Industry (METI), which backed the strengthening of Japan’s computing resources through its Economic Security Fund. It is part of a broader $1 billion METI initiative that includes both the ABCI efforts and investments in cloud AI computing.

NVIDIA is closely collaborating with METI on research and education following a visit last year by company founder and CEO, Jensen Huang, who met with political and business leaders, including Japanese Prime Minister Fumio Kishida, to discuss the future of AI.

NVIDIA’s Commitment to Japan’s Future

Huang pledged to collaborate on research, particularly in generative AI, robotics and quantum computing, to invest in AI startups and provide product support, training and education on AI.

During his visit, Huang emphasized that “AI factories” — next-generation data centers designed to handle the most computationally intensive AI tasks — are crucial for turning vast amounts of data into intelligence.

“The AI factory will become the bedrock of modern economies across the world,” Huang said during a meeting with the Japanese press in December.

With its ultra-high-density data center and energy-efficient design, ABCI provides a robust infrastructure for developing AI and big data applications.

The system is expected to come online by the end of this year and offer state-of-the-art AI research and development resources. It will be housed in Kashiwa, near Tokyo.

Unmatched Computing Performance and Efficiency

The facility will offer:

  • 6 AI exaflops of computing capacity, a measure of AI-specific performance without sparsity
  • 410 double-precision petaflops, a measure of general computing capacity
  • 200GB/s of bisection bandwidth per node via the NVIDIA Quantum-2 InfiniBand platform
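As a back-of-the-envelope check of how headline figures like these relate to per-GPU throughput, the sketch below multiplies out hypothetical numbers. The node count and per-GPU rates are assumptions for illustration only; the article itself says just "hundreds of nodes."

```python
# Rough sanity check of aggregate performance figures.
# All inputs below are illustrative assumptions, not official specifications.
gpus_per_node = 8                 # stated in the article
nodes = 760                       # assumed; the article says "hundreds of nodes"
bf16_tflops_per_gpu = 989         # assumed dense (no-sparsity) Tensor Core rate
fp64_tflops_per_gpu = 67          # assumed FP64 Tensor Core rate

ai_exaflops = nodes * gpus_per_node * bf16_tflops_per_gpu / 1e6
fp64_petaflops = nodes * gpus_per_node * fp64_tflops_per_gpu / 1e3

print(f"{ai_exaflops:.1f} AI exaflops, {fp64_petaflops:.0f} FP64 petaflops")
```

With these assumed inputs, the totals land close to the quoted 6 AI exaflops and 410 double-precision petaflops, which is the point of the exercise: aggregate capacity is just nodes × GPUs × per-GPU rate.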

NVIDIA technology forms the backbone of this initiative, with hundreds of nodes, each equipped with 8 NVLink-connected H200 GPUs, providing unprecedented computational performance and efficiency.

NVIDIA H200 is the first GPU to offer over 140 gigabytes (GB) of HBM3e memory at 4.8 terabytes per second (TB/s). The H200’s larger and faster memory accelerates generative AI and LLMs, while advancing scientific computing for HPC workloads with better energy efficiency and lower total cost of ownership.

NVIDIA H200 GPUs are 15X more energy-efficient than ABCI’s previous-generation architecture for AI workloads such as LLM token generation.

The integration of advanced NVIDIA Quantum-2 InfiniBand with In-Network computing — where networking devices perform computations on data, offloading the work from the CPU — ensures efficient, high-speed, low-latency communication, crucial for handling intensive AI workloads and vast datasets.

ABCI boasts world-class computing and data processing power, serving as a platform to accelerate joint AI R&D with industries, academia and governments.

METI’s substantial investment is a testament to Japan’s strategic vision to enhance AI development capabilities and accelerate the use of generative AI.

By subsidizing AI supercomputer development, Japan aims to reduce the time and costs of developing next-generation AI technologies, positioning itself as a leader in the global AI landscape.

Read More

Paige Cofounder Thomas Fuchs’ Diagnosis on Improving Cancer Patient Outcomes With AI

Improved cancer diagnostics — and improved patient outcomes — could be among the changes generative AI will bring to the healthcare industry, thanks to Paige, the first company with an FDA-approved tool for cancer diagnosis. In this episode of NVIDIA’s AI Podcast, host Noah Kravitz speaks with Paige cofounder and Chief Scientific Officer Thomas Fuchs. He’s also dean of artificial intelligence and human health at the Icahn School of Medicine at Mount Sinai.

Tune in to hear Fuchs on machine learning and AI applications and how technology brings better precision and care to the medical industry.

Time Stamps

1:03: Background on Paige and computational pathology
7:28: How AI models use visual pattern recognition to accelerate cancer detection
11:27: Paige’s results using AI in cancer imaging and pathology
15:16: Challenges in cancer detection
17:38: Thomas Fuchs’ background in engineering at JPL and NASA
24:10: AI’s future in the medical industry

You Might Also Like:

Dotlumen CEO Cornel Amariei on Assistive Technology for the Visually Impaired – Ep. 217

NVIDIA Inception program member Dotlumen is building AI glasses to help people with visual impairments navigate the world. CEO and founder Cornel Amariei discusses the processes of developing assistive technology and its potential for enhancing accessibility.

Personalized Health: Viome’s Guru Banavar Discusses Startup’s AI-Driven Approach – Ep. 216

Viome CTO Guru Banavar discusses innovations in AI and genomics and how technology has advanced personalized health and wellness. Viome aims to tackle the root causes of chronic diseases by analyzing microbiomes and gene expression, transforming biological data into practical recommendations for a holistic approach to wellness.

Cardiac Clarity: Dr. Keith Channon Talks Revolutionizing Heart Health With AI – Ep. 212

Caristo Diagnostics has developed an AI-powered solution for detecting coronary inflammation in cardiac CT scans. Dr. Keith Channon, cofounder and chief medical officer, discusses how Caristo uses AI to improve treatment plans and risk predictions by providing patient-specific readouts.

Subscribe to the AI Podcast

Get the AI Podcast through iTunes, Amazon Music, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn.

Read More

Mission NIMpossible: Decoding the Microservices That Accelerate Generative AI

In the rapidly evolving world of artificial intelligence, generative AI is captivating imaginations and transforming industries. Behind the scenes, an unsung hero is making it all possible: microservices architecture.

The Building Blocks of Modern AI Applications

Microservices have emerged as a powerful architecture, fundamentally changing how people design, build and deploy software.

A microservices architecture breaks down an application into a collection of loosely coupled, independently deployable services. Each service is responsible for a specific capability and communicates with other services through well-defined application programming interfaces, or APIs. This modular approach stands in stark contrast to traditional all-in-one architectures, in which all functionality is bundled into a single, tightly integrated application.

By decoupling services, teams can work on different components simultaneously, accelerating development processes and allowing updates to be rolled out independently without affecting the entire application. Developers can focus on building and improving specific services, leading to better code quality and faster problem resolution. Such specialization allows developers to become experts in their particular domain.

Services can be scaled independently based on demand, optimizing resource utilization and improving overall system performance. In addition, different services can use different technologies, allowing developers to choose the best tools for each specific task.
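The pattern described above can be sketched with only the Python standard library: two tiny "services" (the names and transforms are invented for illustration), each independently deployable behind its own HTTP port and communicating through a small JSON API.

```python
# Minimal microservices sketch: loosely coupled services behind HTTP + JSON.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

def make_handler(transform):
    class Handler(BaseHTTPRequestHandler):
        def do_POST(self):
            body = self.rfile.read(int(self.headers["Content-Length"]))
            text = json.loads(body)["text"]
            out = json.dumps({"text": transform(text)}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(out)))
            self.end_headers()
            self.wfile.write(out)

        def log_message(self, *args):  # silence per-request logging
            pass
    return Handler

def serve(transform):
    # Port 0 lets the OS pick a free port, so services never collide.
    server = HTTPServer(("127.0.0.1", 0), make_handler(transform))
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

def call(port, text):
    req = urllib.request.Request(
        f"http://127.0.0.1:{port}/",
        data=json.dumps({"text": text}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["text"]

# Each service owns one capability; either can be redeployed or replaced
# without touching the other.
cleaner = serve(lambda t: t.strip().lower())
shouter = serve(lambda t: t.upper())

result = call(shouter.server_address[1],
              call(cleaner.server_address[1], "  Hello Microservices  "))
```

Because each service only knows the other's API, either one can be rewritten in a different language or scaled to more instances without the caller changing at all.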

A Perfect Match: Microservices and Generative AI

The microservices architecture is particularly well-suited for developing generative AI applications due to its scalability, enhanced modularity and flexibility.

AI models, especially large language models, require significant computational resources. Microservices allow for efficient scaling of these resource-intensive components without affecting the entire system.

Generative AI applications often involve multiple steps, such as data preprocessing, model inference and post-processing. Microservices enable each step to be developed, optimized and scaled independently. Plus, as AI models and techniques evolve rapidly, a microservices architecture allows for easier integration of new models as well as the replacement of existing ones without disrupting the entire application.
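The multi-step decoupling described above can be sketched as a pipeline of independently swappable stages. The stage names and stand-in "models" below are invented for illustration; the point is that replacing the inference stage touches nothing else.

```python
# Each stage is an independent component; the pipeline is just their ordering.
def preprocess(text):
    return text.strip().lower()

def model_v1(text):
    return f"v1:{text}"

def model_v2(text):   # a newer model, dropped in without changing other stages
    return f"v2:{text}"

def postprocess(text):
    return text + "!"

def run(pipeline, text):
    for stage in pipeline:
        text = stage(text)
    return text

out_old = run([preprocess, model_v1, postprocess], "  Hello  ")
out_new = run([preprocess, model_v2, postprocess], "  Hello  ")
```

In a real deployment each stage would be its own service (and scale independently), but the swap works the same way: only the pipeline wiring changes.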

NVIDIA NIM: Simplifying Generative AI Deployment

As the demand for AI-powered applications grows, developers face challenges in efficiently deploying and managing AI models.

NVIDIA NIM inference microservices provide models as optimized containers to deploy in the cloud, data centers, workstations, desktops and laptops. Each NIM container includes the pretrained AI models and all the necessary runtime components, making it simple to integrate AI capabilities into applications.

NIM offers a game-changing approach for application developers looking to incorporate AI functionality by providing simplified integration, production-readiness and flexibility. Developers can focus on building their applications without worrying about the complexities of data preparation, model training or customization, as NIM inference microservices are optimized for performance, come with runtime optimizations and support industry-standard APIs.
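As a hedged illustration of the industry-standard APIs mentioned above: many inference servers accept OpenAI-style chat-completion requests. The endpoint URL, port and model name below are placeholder assumptions, and the sketch only builds the request without sending it.

```python
# Sketch of constructing an OpenAI-style chat-completions request.
# The base URL, port and model name are placeholders, not official values.
import json
import urllib.request

def build_chat_request(base_url, model, prompt):
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 64,
    }
    return urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )

# No network call is made here; a locally hosted service would typically
# listen on a port such as 8000 (again, a placeholder assumption).
req = build_chat_request("http://localhost:8000",
                         "meta/llama3-8b-instruct",
                         "Summarize microservices in one sentence.")
```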

AI at Your Fingertips: NVIDIA NIM on Workstations and PCs

Building enterprise generative AI applications comes with many challenges. While cloud-hosted model APIs can help developers get started, issues related to data privacy, security, model response latency, accuracy, API costs and scaling often hinder the path to production.

Workstations with NIM provide developers with secure access to a broad range of models and performance-optimized inference microservices.

By avoiding the latency, cost and compliance concerns associated with cloud-hosted APIs as well as the complexities of model deployment, developers can focus on application development. This accelerates the delivery of production-ready generative AI applications — enabling seamless, automatic scale out with performance optimization in data centers and the cloud.

The recently announced general availability of the Meta Llama 3 8B model as a NIM, which can run locally on RTX systems, brings state-of-the-art language model capabilities to individual developers, enabling local testing and experimentation without the need for cloud resources. With NIM running locally, developers can create sophisticated retrieval-augmented generation (RAG) projects right on their workstations.

Local RAG refers to implementing RAG systems entirely on local hardware, without relying on cloud-based services or external APIs.
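A deliberately tiny local RAG sketch makes the idea concrete: bag-of-words "embeddings" and cosine similarity stand in for a real embedding model, and the "generator" is a stub where a local LLM call would go. Everything, including the document corpus below, is invented for illustration; the point is that retrieval and generation both run on local hardware.

```python
# Toy local RAG: local index -> local retrieval -> (stubbed) local generation.
import math
from collections import Counter

def embed(text):
    # Bag-of-words stand-in for a real embedding model.
    cleaned = "".join(c if c.isalnum() else " " for c in text.lower())
    return Counter(cleaned.split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

documents = [
    "NIM microservices package models as optimized containers.",
    "Sphere in Las Vegas uses RTX GPUs for its displays.",
    "GANs pair a generator network with a discriminator network.",
]
index = [(doc, embed(doc)) for doc in documents]

def retrieve(query, k=1):
    q = embed(query)
    ranked = sorted(index, key=lambda item: cosine(q, item[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

def generate(query, context):
    # Placeholder for a local LLM call on the retrieved context.
    return f"Answer based on: {context[0]}"

question = "What do NIM containers package?"
context = retrieve(question)
answer = generate(question, context)
```

No data leaves the machine at any step, which is exactly the privacy property local RAG is after.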

Developers can use the Llama 3 8B NIM on workstations with one or more NVIDIA RTX 6000 Ada Generation GPUs or on NVIDIA RTX systems to build end-to-end RAG systems entirely on local hardware. This setup allows developers to tap the full power of Llama 3 8B, ensuring high performance and low latency.

By running the entire RAG pipeline locally, developers can maintain complete control over their data, ensuring privacy and security. This approach is particularly helpful for developers building applications that require real-time responses and high accuracy, such as customer-support chatbots, personalized content-generation tools and interactive virtual assistants.

Hybrid RAG combines local and cloud-based resources to optimize performance and flexibility in AI applications. With NVIDIA AI Workbench, developers can get started with the hybrid-RAG Workbench Project — an example application that can be used to run vector databases and embedding models locally while performing inference using NIM in the cloud or data center, offering a flexible approach to resource allocation.

This hybrid setup allows developers to balance the computational load between local and cloud resources, optimizing performance and cost. For example, the vector database and embedding models can be hosted on local workstations to ensure fast data retrieval and processing, while the more computationally intensive inference tasks can be offloaded to powerful cloud-based NIM inference microservices. This flexibility enables developers to scale their applications seamlessly, accommodating varying workloads and ensuring consistent performance.

NVIDIA ACE NIM inference microservices bring digital humans, AI non-playable characters (NPCs) and interactive avatars for customer service to life with generative AI, running on RTX PCs and workstations.

ACE NIM inference microservices for speech — including Riva automatic speech recognition, text-to-speech and neural machine translation — allow accurate transcription, translation and realistic voices.

The NVIDIA Nemotron small language model is a NIM for intelligence that includes INT4 quantization for minimal memory usage and supports roleplay and RAG use cases.
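INT4 quantization in general trades precision for memory: each 32-bit weight is mapped to one of 16 integer levels plus a shared scale. The symmetric scheme below is a generic illustration of the technique, not NVIDIA's actual implementation.

```python
# Generic symmetric INT4 quantization sketch (illustrative, not NVIDIA's).
def quantize_int4(weights):
    # INT4 holds integers in [-8, 7]; map the largest |w| to 7.
    scale = max(abs(w) for w in weights) / 7.0 or 1.0
    q = [max(-8, min(7, round(w / scale))) for w in weights]
    return q, scale

def dequantize_int4(q, scale):
    return [qi * scale for qi in q]

weights = [0.31, -0.82, 0.05, 0.66, -0.14]
q, scale = quantize_int4(weights)
restored = dequantize_int4(q, scale)

# Each value now needs 4 bits instead of 32, at the cost of rounding error
# bounded by half a quantization step.
max_err = max(abs(w - r) for w, r in zip(weights, restored))
```

An 8x reduction per weight is what lets a small language model fit comfortably in the memory of an RTX PC or workstation.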

And ACE NIM inference microservices for appearance include Audio2Face and Omniverse RTX for lifelike animation with ultrarealistic visuals. These provide more immersive and engaging gaming characters, as well as more satisfying experiences for users interacting with virtual customer-service agents.

Dive Into NIM

As AI progresses, the ability to rapidly deploy and scale its capabilities will become increasingly crucial.

NVIDIA NIM microservices provide the foundation for this new era of AI application development, enabling breakthrough innovations. Whether building the next generation of AI-powered games, developing advanced natural language processing applications or creating intelligent automation systems, users can access these powerful development tools at their fingertips.

Ways to get started:

  • Experience and interact with NVIDIA NIM microservices on ai.nvidia.com.
  • Join the NVIDIA Developer Program and get free access to NIM for testing and prototyping AI-powered applications.
  • Buy an NVIDIA AI Enterprise license with a free 90-day evaluation period for production deployment and use NVIDIA NIM to self-host AI models in the cloud or in data centers.

Generative AI is transforming gaming, videoconferencing and interactive experiences of all kinds. Make sense of what’s new and what’s next by subscribing to the AI Decoded newsletter.

Read More

Widescreen Wonder: Las Vegas Sphere Delivers Dazzling Displays

Sphere, a new kind of entertainment medium in Las Vegas, is joining the ranks of legendary circular performance spaces such as the Roman Colosseum and Shakespeare’s Globe Theater — captivating audiences with eye-popping LED displays that cover nearly 750,000 square feet inside and outside the venue.

Behind the screens, around 150 NVIDIA RTX A6000 GPUs help power stunning visuals on floor-to-ceiling, 16x16K displays across the Sphere’s interior, as well as 1.2 million programmable LED pucks on the venue’s exterior — the Exosphere, which is the world’s largest LED screen.

Delivering robust network connectivity, NVIDIA BlueField DPUs and NVIDIA ConnectX-6 Dx NICs — along with the NVIDIA DOCA Firefly Service and NVIDIA Rivermax software for media streaming — ensure that all the display panels act as one synchronized canvas.

“Sphere is captivating audiences not only in Las Vegas, but also around the world on social media, with immersive LED content delivered at a scale and clarity that has never been done before,” said Alex Luthwaite, senior vice president of show systems technology at Sphere Entertainment. “This would not be possible without the expertise and innovation of companies such as NVIDIA that are critical to helping power our vision, working closely with our team to redefine what is possible with cutting-edge display technology.”

Named one of TIME’s Best Inventions of 2023, Sphere hosts original Sphere Experiences, concerts and residencies from the world’s biggest artists, and premier marquee and corporate events.

Rock band U2 opened Sphere with a 40-show run that concluded in March. Other shows include The Sphere Experience featuring Darren Aronofsky’s Postcard From Earth, a specially created multisensory cinematic experience that showcases all of the venue’s immersive technologies, including high-resolution visuals, advanced concert-grade sound, haptic seats and atmospheric effects such as wind and scents.

“Postcard From Earth” is a multisensory immersive experience. Image courtesy of Sphere Entertainment.

Behind the Screens: Visual Technology Fueling the Sphere

Sphere Studios creates video content in its Burbank, Calif., facility, then transfers it digitally to Sphere in Las Vegas. The content is then streamed in real time to rack-mounted workstations equipped with NVIDIA RTX A6000 GPUs, achieving unprecedented performance capable of delivering three layers of 16K resolution at 60 frames per second.

The NVIDIA Rivermax software helps provide media streaming acceleration, enabling direct data transfers to and from the GPU. Combined, the software and hardware acceleration eliminates jitter and optimizes latency.

NVIDIA BlueField DPUs also facilitate precision timing through the DOCA Firefly Service, which is used to synchronize clocks in a network with sub-microsecond accuracy.
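Clock synchronization of this kind typically builds on the Precision Time Protocol (PTP): from the four timestamps of one sync exchange, a follower estimates its clock offset, assuming the network delay is symmetric. The sketch below uses made-up nanosecond timestamps to show the core arithmetic; it is a simplified illustration, not the Firefly implementation.

```python
# PTP-style offset estimation from one sync exchange (timestamps invented).
def ptp_offset(t1, t2, t3, t4):
    # t1: leader sends sync        t2: follower receives it
    # t3: follower sends reply     t4: leader receives it
    # Under symmetric path delay: offset = ((t2 - t1) - (t4 - t3)) / 2
    return ((t2 - t1) - (t4 - t3)) / 2

# Scenario: follower clock is 500 ns ahead; one-way delay is 2,000 ns.
t1 = 1_000_000
t2 = t1 + 2_000 + 500     # delay + offset (follower's clock)
t3 = t2 + 10_000          # follower processing time
t4 = t3 + 2_000 - 500     # delay - offset (leader's clock)

offset_ns = ptp_offset(t1, t2, t3, t4)
```

Hardware timestamping on the DPU is what pushes the residual error of this estimate down to the sub-microsecond level the article describes.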

“The integration of NVIDIA RTX GPUs, BlueField DPUs and Rivermax software creates a powerful trifecta of advantages for modern accelerated computing, supporting the unique high-resolution video streams and strict timing requirements needed at Sphere and setting a new standard for media processing capabilities,” said Nir Nitzani, senior product director for networking software at NVIDIA. “This collaboration results in remarkable performance gains, culminating in the extraordinary experiences guests have at Sphere.”

Well-Rounded: From Simulation to Sphere Stage

To create new immersive content exclusively for Sphere, Sphere Entertainment launched Sphere Studios, which is dedicated to developing the next generation of original immersive entertainment. The Burbank campus consists of numerous development facilities, including a quarter-sized version of the Sphere screen in Las Vegas, dubbed the Big Dome, which serves as a specialized screening and production facility and a lab for content.

The Big Dome is 100 feet high and 28,000 square feet. Image courtesy of Sphere Entertainment.

Sphere Studios also developed the Big Sky camera system, which captures uncompressed, 18K images from a single camera, so that the studio can film content for Sphere without needing to stitch multiple camera feeds together. The studio’s custom image processing software runs on Lenovo servers powered by NVIDIA A40 GPUs.

The A40 GPUs also fuel creative work, including 3D video, virtualization and ray tracing. To develop visuals for different kinds of shows, the team works with apps including Unreal Engine, Unity, Touch Designer and Notch.

For more, explore upcoming sessions in NVIDIA’s room at SIGGRAPH and watch the panel discussion “Immersion in Sphere: Redefining Live Entertainment Experiences” on NVIDIA On-Demand.

All images courtesy of Sphere Entertainment.

Read More

In It for the Long Haul: Waabi Pioneers Generative AI to Unleash Fully Driverless Autonomous Trucking

Artificial intelligence is transforming the transportation industry, helping drive advances in autonomous vehicle (AV) technology.

Waabi, a Toronto-based startup, is embracing generative AI to deliver self-driving vehicles at scale — starting with the long-haul trucking sector.

At GTC in March, Waabi announced that it will use the NVIDIA DRIVE Thor centralized car computer to bring a safe, generative AI-powered autonomous trucking solution — the Waabi Driver — to market.

As the company plans the launch of fully driverless operations next year, Waabi is reinvigorating the industry with a self-driving solution that’s capital-efficient, can safely handle new scenarios on the road and ultimately scales commercially.

Waabi is developing on NVIDIA DRIVE OS, the company’s operating system for safe, AI-defined autonomous vehicles.

The innovative startup has pioneered an approach that centers on the combination of two generative AI systems: a “teacher,” called Waabi World, an advanced simulator that trains and validates a “student,” called Waabi Driver, a single, end-to-end AI system that’s capable of human-like reasoning and is fully interpretable.

When paired together, these systems reduce the need for extensive on-road testing and enable a safer, more efficient solution that is highly performant and scalable.

“We are excited to have a deep collaboration with NVIDIA to bring generative AI to the edge, on our vehicles, at scale,” said Raquel Urtasun, founder and CEO of Waabi.

Generative AI accelerates the development of AVs by “providing an end-to-end system where, instead of requiring hundreds of engineers to develop a system by hand, it provides the ability to learn foundation models that can run unsupervised by observing and acting on the world,” Urtasun added.

Waabi’s collaboration with NVIDIA is one in a series of milestones, including the company’s $200 million Series B round with participation from NVIDIA, its work with logistics company Uber Freight, the launch of its first commercial autonomous trucking routes in the U.S., and the opening of a trucking terminal near Dallas to serve as the center of the company’s operations in the Lone Star state.

“What we’re building for autonomous vehicles — combining generative AI-powered simulation with a foundation AI model purpose-built for acting in the physical world — will enable faster, safer and more scalable deployment of this transformative technology around the world,” Urtasun noted on the company’s website.

Listen to Urtasun’s talk at GTC for more on the company’s work on using generative AI to develop autonomous vehicles.

Read More

GeForce NOW Beats the Heat With 22 New Games in July

GeForce NOW is bringing 22 new games to members this month.

Dive into the four titles available to stream on the cloud gaming service this week to stay cool and entertained throughout the summer — whether poolside, on a long road trip or in the air-conditioned comfort of home.

Plus, get great games at great deals to stream across devices during the Steam Summer Sale. In total, more than 850 titles on GeForce NOW can be found at discounts in a dedicated Steam Summer Sale row on the GeForce NOW app, from now until July 11.

Time to Grind

Be the first Descendant with the cloud.

In The First Descendant from NEXON, take on the role of Descendants tasked with safeguarding the powerful Iron Heart from relentless Vulgus invaders. Set in a captivating sci-fi universe, the game is a third-person co-op action role-playing shooter that seamlessly blends looting mechanics with strategic combat. Engage in intense gunplay, face off against formidable bosses and collect valuable loot while fighting to preserve humanity’s future.

Check out the list of new games this week:

And members can look for the following later this month:

  • Once Human (New release on Steam, July 9)
  • Anger Foot (New release on Steam, July 11)
  • The Crust (New release on Steam, July 15)
  • Gestalt: Steam & Cinder (New release on Steam, July 16)
  • Flintlock: The Siege of Dawn (New release on Steam and Xbox, available on PC Game Pass, July 18)
  • Dungeons of Hinterberg (New release on Steam and Xbox, available on PC Game Pass, July 18)
  • Norland (New release on Steam, July 18)
  • Cataclismo (New release on Steam, July 22)
  • CONSCRIPT (New release on Steam, July 23)
  • F1 Manager 2024 (New release on Steam, July 23)
  • EARTH DEFENSE FORCE 6 (New release on Steam, July 25)
  • Stormgate Early Access (New release on Steam, July 30)
  • Cyber Knights: Flashpoint (Steam)
  • Content Warning (Steam)
  • Crime Boss: Rockay City (Steam)
  • Gang Beasts (Steam and Xbox, available on PC Game Pass)
  • HAWKED (Steam)
  • Kingdoms and Castles (Steam)

Jam-Packed June

In addition to the 17 games announced last month, 10 more joined the GeForce NOW library:

  • Killer Klowns from Outer Space: The Game (New release on Steam, June 4)
  • Sneak Out (New release on Steam, June 6)
  • Beyond Good & Evil – 20th Anniversary Edition (New release on Steam and Ubisoft, June 24)
  • As Dusk Falls (Steam and Xbox, available on PC Game Pass)
  • Bodycam (Steam)
  • Drug Dealer Simulator 2 (Steam)
  • Sea of Thieves (Steam and Xbox, available on PC Game Pass)
  • Skye: The Misty Isle (New release on Steam, June 19)
  • XDefiant (Ubisoft)
  • Tell Me Why (Steam and Xbox, available on PC Game Pass)

Torque Drift 2 didn’t make it in June due to technical issues. Stay tuned to GFN Thursday for updates.

What are you planning to play this weekend? Let us know on X or in the comments below.

Read More

Decoding How the Generative AI Revolution BeGAN

Editor’s note: This post is part of the AI Decoded series, which demystifies AI by making the technology more accessible, and showcases new hardware, software, tools and accelerations for RTX PC users.

Generative models have completely transformed the AI landscape — headlined by popular apps such as ChatGPT and Stable Diffusion.

Paving the way for this boom were foundational AI models and generative adversarial networks (GANs), which sparked a leap in productivity and creativity.

NVIDIA’s GauGAN, which powers the NVIDIA Canvas app, is one such model that uses AI to transform rough sketches into photorealistic artwork.

How It All BeGAN

GANs are deep learning models that involve two complementary neural networks: a generator and a discriminator.

These neural networks compete against each other. The generator attempts to create realistic, lifelike imagery, while the discriminator tries to tell the difference between what’s real and what’s generated. As its neural networks keep challenging each other, GANs get better and better at making realistic-looking samples.
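The competition described above can be shown with a deliberately tiny toy: real data drawn from a Gaussian around 3, a one-line linear "generator" and a logistic "discriminator," trained with hand-derived gradients in pure Python. This is a sketch of the adversarial idea, nowhere near a production GAN, and all the numbers are illustrative.

```python
# Toy 1-D GAN: generator g(z) = wg*z + bg learns to mimic samples from N(3, 1).
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

wg, bg = 1.0, 0.0   # generator parameters
wd, bd = 0.1, 0.0   # discriminator parameters: D(x) = sigmoid(wd*x + bd)
lr, batch = 0.05, 32

for step in range(2000):
    real = [random.gauss(3.0, 1.0) for _ in range(batch)]
    z = [random.gauss(0.0, 1.0) for _ in range(batch)]
    fake = [wg * zi + bg for zi in z]

    # Discriminator update: push D(real) toward 1 and D(fake) toward 0.
    gw = gb = 0.0
    for x in real:
        p = sigmoid(wd * x + bd)     # dLoss/dlogit = p - 1 for real samples
        gw += (p - 1.0) * x; gb += (p - 1.0)
    for x in fake:
        p = sigmoid(wd * x + bd)     # dLoss/dlogit = p for fake samples
        gw += p * x; gb += p
    wd -= lr * gw / (2 * batch); bd -= lr * gb / (2 * batch)

    # Generator update (non-saturating loss): push D(fake) toward 1.
    ggw = ggb = 0.0
    for zi in z:
        g = wg * zi + bg
        p = sigmoid(wd * g + bd)
        # Chain rule: dLoss/dg = (p - 1) * wd
        ggw += (p - 1.0) * wd * zi; ggb += (p - 1.0) * wd
    wg -= lr * ggw / batch; bg -= lr * ggb / batch

# After training, generated samples should cluster near the real mean of 3.
fake_mean = sum(wg * random.gauss(0, 1) + bg for _ in range(1000)) / 1000
```

The generator never sees the real data directly; it improves only through the discriminator's feedback, which is the essence of adversarial training.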

GANs excel at understanding complex data patterns and creating high-quality results. They’re used in applications including image synthesis, style transfer, data augmentation and image-to-image translation.

NVIDIA’s GauGAN, named after post-Impressionist painter Paul Gauguin, is an AI demo for photorealistic image generation. Built by NVIDIA Research, it directly led to the development of the NVIDIA Canvas app — and can be experienced for free through the NVIDIA AI Playground.

GauGAN has been wildly popular since it debuted at NVIDIA GTC in 2019 — used by art teachers, creative agencies, museums and millions more online.

Giving Sketch to Scenery a Gogh

Powered by GauGAN and local NVIDIA RTX GPUs, NVIDIA Canvas uses AI to turn simple brushstrokes into realistic landscapes, displaying results in real time.

Users can start by sketching simple lines and shapes with a palette of real-world elements like grass or clouds — referred to in the app as “materials.”

The AI model then generates the enhanced image on the other half of the screen in real time. For example, a few triangular shapes sketched using the “mountain” material will appear as a stunning, photorealistic range. Or users can select the “cloud” material and with a few mouse clicks transform environments from sunny to overcast.

The creative possibilities are endless — sketch a pond, and other elements in the image, like trees and rocks, will reflect in the water. Change the material from snow to grass, and the scene shifts from a cozy winter setting to a tropical paradise.

Canvas offers nine different styles, each with 10 variations, and 20 materials to play with.

Canvas features a Panorama mode that enables artists to create 360-degree images for use in 3D apps. YouTuber Greenskull AI demonstrated Panorama mode by painting an ocean cove, then importing it into Unreal Engine 5.

Download the NVIDIA Canvas app to get started.

Consider exploring NVIDIA Broadcast, another AI-powered content creation app that transforms any room into a home studio. Broadcast is free for RTX GPU owners.

Generative AI is transforming gaming, videoconferencing and interactive experiences of all kinds. Make sense of what’s new and what’s next by subscribing to the AI Decoded newsletter.

Read More

How an NVIDIA Engineer Unplugs to Recharge During Free Days

On a weekday afternoon, Ashwini Ashtankar sat on the bank of the Doodhpathri River, in a valley nestled in the Himalayas. Taking a deep breath, she noticed that there was no city noise, no pollution — and no work emails.

Ashtankar, a senior tools development engineer in NVIDIA’s Pune, India, office, took advantage of the company’s free days — two extra days off per quarter when the whole company disconnects from work — to recharge. Free days are fully paid by NVIDIA, not counted as vacation or as personal time off, and are in addition to country-specific holidays and time-away programs.

Free days give employees time to take an adventure, a breather — or both. Ashtankar and her husband, Dipen Sisodia — also an NVIDIAN — spent it outdoors, hiking up a mountain, playing in snow and exploring forests and lush green meadows.

“My free days give me time to focus on myself and recharge,” said Ashtankar. “We didn’t take our laptops. We were able to completely disconnect, like all NVIDIANs were doing at the same time.”

Ashtankar returned to work feeling refreshed and recharged, she said. Her team tests software features of NVIDIA products, focusing on GPU display drivers and the GeForce NOW game-streaming service, to make sure bugs are found and addressed before a product reaches customers.

“I take pride in tackling challenges with the highest level of quality and creativity, all in support of delivering the best products to our customers,” she said. “To do that, sometimes the most productive thing we can do is rest and let the soul catch up with the body.”

Ashtankar plans to build her career at NVIDIA for many years to come.

“I’ve never heard of another company that truly cares this much about its employees,” she said.

Learn more about NVIDIA life, culture and careers.

Read More

GeForce NOW Unleashes High-Stakes Horror With ‘Resident Evil Village’

Get ready to feel some chills, even amid the summer heat. Capcom’s award-winning Resident Evil Village brings a touch of horror to the cloud this GFN Thursday, part of three new games joining GeForce NOW this week.

And a new app update brings a visual enhancement to members, along with new ways to curate their GeForce NOW gaming libraries.

Greetings on GFN
#GreetingsFromGFN by @railbeam.

Members are showcasing their favorite locations to visit in the cloud. Follow along with #GreetingsFromGFN on @NVIDIAGFN social media accounts and share picturesque scenes from the cloud for a chance to be featured.

The Bell Tolls for All

Resident Evil Village on GeForce NOW
The cloud — big enough, even, for Lady Dimitrescu and her towering castle.

Resident Evil Village, the follow-up to Capcom’s critically acclaimed Resident Evil 7 Biohazard, delivers a gripping blend of survival-horror and action. Step into the shoes of Ethan Winters, a desperate father determined to rescue his kidnapped daughter.

Set against a backdrop of a chilling European village teeming with mutant creatures, the game includes a captivating cast of characters, including the enigmatic Lady Dimitrescu, who haunts the dimly lit halls of her grand castle. Fend off hordes of enemies, such as lycanthropic villagers and grotesque abominations.

Experience classic survival-horror tactics — such as resource management and exploration — mixed with action featuring intense combat and higher enemy counts.

Ultimate and Priority members can experience the horrors of this dark and twisted world in gruesome, mesmerizing detail, with support for ray tracing and high dynamic range (HDR) delivering lifelike shadows and sharp visual fidelity in every eerie hallway. Members can stream it all seamlessly from NVIDIA GeForce RTX-powered servers in the cloud, and can get a taste of the chills with the Resident Evil Village demo before taking on the towering Lady Dimitrescu in the full game.

I Can See Clearly Now

The latest GeForce NOW app update — version 2.0.64 — adds support for 10-bit color precision. Available for Ultimate members, this feature enhances image quality when streaming on Windows, macOS and NVIDIA SHIELD TV.

SDR10 on GeForce NOW
Rolling out now.

10-bit color precision significantly improves the accuracy and richness of color gradients during streaming. Members will especially notice its effects in scenes with detailed color transitions, such as vibrant skies, dimly lit interiors, and various loading screens and menus. It’s especially useful on non-HDR displays and in games without HDR support. Find the setting in the GeForce NOW app > Streaming Quality > Color Precision, with the recommended default value of 10-bit.
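
The benefit is easy to quantify: 8-bit color offers 256 representable levels per channel, while 10-bit offers 1,024, so a smooth gradient is sliced into four times as many steps and visible banding shrinks accordingly. A minimal Python sketch of the difference (illustrative only, not GeForce NOW code):

```python
# Illustrative sketch: why 10-bit color precision reduces banding
# in smooth gradients compared with 8-bit.
def quantize(value, bits):
    """Quantize a [0, 1] intensity to the nearest representable level."""
    levels = (1 << bits) - 1
    return round(value * levels) / levels

# A smooth horizontal gradient sampled at 4,096 points.
gradient = [i / 4095 for i in range(4096)]

distinct_8bit = len({quantize(v, 8) for v in gradient})
distinct_10bit = len({quantize(v, 10) for v in gradient})

print(distinct_8bit)   # 256 representable shades
print(distinct_10bit)  # 1024 representable shades
```

With four times as many shades available, each quantization step is a quarter the size, which is what smooths out the stair-step transitions in skies and dim interiors.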

Try it out on the neon-lit streets of Cyberpunk 2077 for smoother color transitions, and traverse the diverse landscapes of Assassin’s Creed Valhalla and other games for a more immersive streaming experience.

The update, rolling out now, also brings bug fixes and new ways to curate a member’s in-app game library. For more information, visit the NVIDIA Knowledgebase.

Lights, Camera, Action: New Games

Beyond Good and Evil 20th Anniversary Edition on GeForce NOW
Uncover the truth.

Join the rebellion as action reporter Jade in Beyond Good & Evil – 20th Anniversary Edition from Ubisoft. Embark on this epic adventure at up to 4K resolution and 60 frames per second with improved graphics and audio, a new speedrun mode, updated achievements and an exclusive anniversary gallery. Earn unique new rewards while exploring Hillys, and discover more about Jade’s past in a new treasure hunt across the planet.

Check out the list of new games this week:

  • Beyond Good & Evil – 20th Anniversary Edition (New release on Steam and Ubisoft, June 24)
  • Drug Dealer Simulator 2 (Steam)
  • Resident Evil Village (Steam)
  • Resident Evil Village Demo (Steam)

What are you planning to play this weekend? Let us know on X or in the comments below.

Read More

Into the Omniverse: SyncTwin Helps Democratize Industrial Digital Twins With Generative AI, OpenUSD

Editor’s note: This post is part of Into the Omniverse, a series focused on how technical artists, developers and enterprises can transform their workflows using the latest advances in OpenUSD and NVIDIA Omniverse.

Efficiency and sustainability are critical for organizations looking to be at the forefront of industrial innovation.

To address the digitalization needs of manufacturing and other industries, SyncTwin GmbH — a company that builds software to optimize production, intralogistics and assembly — developed a digital twin app using NVIDIA cuOpt, an accelerated optimization engine for solving complex routing problems, and NVIDIA Omniverse, a platform of application programming interfaces, software development kits and services that enable developers to build OpenUSD-based applications.
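
To make the routing problems cuOpt tackles concrete, here is a toy sketch of the same class of problem — sequencing stops from a depot — solved with a plain-Python nearest-neighbor heuristic. This is not the cuOpt API, and the depot and stop coordinates are invented for illustration; cuOpt solves far larger instances with far better optimality on GPUs:

```python
# Toy vehicle-routing sketch: greedy nearest-neighbor heuristic.
# Not the NVIDIA cuOpt API; coordinates are hypothetical.
import math

def nearest_neighbor_route(depot, stops):
    """Repeatedly visit the closest unvisited stop, starting from the depot."""
    route, current = [], depot
    remaining = list(stops)
    while remaining:
        # math.dist computes the Euclidean distance between two points.
        nxt = min(remaining, key=lambda p: math.dist(current, p))
        route.append(nxt)
        remaining.remove(nxt)
        current = nxt
    return route

depot = (0.0, 0.0)
stops = [(2.0, 3.0), (5.0, 1.0), (1.0, 1.0), (6.0, 4.0)]
print(nearest_neighbor_route(depot, stops))
# → [(1.0, 1.0), (2.0, 3.0), (5.0, 1.0), (6.0, 4.0)]
```

A greedy heuristic like this is fast but can miss the optimal tour; engines like cuOpt exist precisely because real intralogistics problems, with time windows and vehicle constraints, outgrow such simple approaches.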

SyncTwin is harnessing the power of the extensible OpenUSD framework for describing, composing, simulating and collaborating within 3D worlds to help its customers create physically accurate digital twins of their factories. The digital twins can be used to optimize production and enhance digital precision to meet industrial performance targets.

OpenUSD’s Role in Modern Manufacturing

Manufacturing workflows are incredibly complex, making effective communication and integration across various domains pivotal to ensuring operational efficiency. The SyncTwin app provides seamless collaboration capabilities for factory plant managers and their teams, enabling them to optimize processes and resources.

The app uses OpenUSD and Omniverse to help make factory planning and operations easier and more accessible by integrating various manufacturing aspects into a cohesive digital twin. Customers can integrate visual data, production details, product catalogs, orders, schedules, resources and production settings all in one place with OpenUSD.

The SyncTwin app creates realistic, virtual environments that facilitate seamless interactions between different sectors of factory operations. This capability enables diverse data — including floorplans from Microsoft PowerPoint and warehouse container data from Excel spreadsheets — to be aggregated in a unified digital twin.

The flexibility of OpenUSD allows for non-destructive editing and composition of complex 3D assets and animations, further enhancing the digital twin.

“OpenUSD is the common language bringing all these different factory domains into a single digital twin,” said Michael Wagner, cofounder and chief technology officer of SyncTwin. “The framework can be instrumental in dismantling data silos and enhancing collaborative efficiency across different factory domains, such as assembly, logistics and infrastructure planning.”
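
The "common language" role Wagner describes rests on OpenUSD's composition arcs, such as sublayers, which let independently authored layers stack non-destructively into one stage. A minimal sketch of a hypothetical root layer for a factory digital twin — the referenced layer names are invented for illustration:

```usda
#usda 1.0
(
    subLayers = [
        @assembly.usda@,
        @logistics.usda@,
        @infrastructure.usda@
    ]
)

def Xform "Factory"
{
}
```

Each domain team edits only its own layer; opening the root layer composes them all into a single scene, and stronger layers can override weaker ones without touching the originals.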

Hear Wagner discuss turning PowerPoint and Excel data into digital twin scenarios using the SyncTwin App in a LinkedIn livestream on July 4 at 11 a.m. CET.

Pioneering Generative AI in Factory Planning

By integrating generative AI into its platform, SyncTwin also provides users with data-driven insights and recommendations, enhancing decision-making processes.

This AI integration automates complex analyses, accelerates operations and reduces the need for manual inputs. Learn more about how SyncTwin and other startups are combining the powers of OpenUSD and generative AI to elevate their technologies in this NVIDIA GTC session.

Hear SyncTwin and NVIDIA experts discuss how digital twins are unlocking new possibilities in this recent community livestream:

By tapping into the power of OpenUSD and NVIDIA’s AI and optimization technologies, SyncTwin is helping set new standards for factory planning and operations, improving operational efficiency and supporting the vision of sustainability and cost reduction across manufacturing.

Get Plugged Into the World of OpenUSD

Learn more about OpenUSD and meet with NVIDIA experts at SIGGRAPH, taking place July 28-Aug. 1 at the Colorado Convention Center and online. Attend these SIGGRAPH highlights:

  • NVIDIA founder and CEO Jensen Huang’s fireside chat on Monday, July 29, covering the latest in generative AI and accelerated computing.
  • OpenUSD Day on Tuesday, July 30, where industry luminaries and developers will showcase how to build 3D pipelines and tools using OpenUSD.
  • Hands-on OpenUSD training for all skill levels.

Check out this video series about how OpenUSD can improve 3D workflows. For more resources on OpenUSD, explore the Alliance for OpenUSD forum and visit the AOUSD website.

Get started with NVIDIA Omniverse by downloading the free standard license, access OpenUSD resources and learn how Omniverse Enterprise can connect teams. Follow Omniverse on Instagram, LinkedIn, Medium and X. For more, join the Omniverse community on the forums, Discord server and YouTube channel.

Featured image courtesy of SyncTwin GmbH.

Read More