Cardiac Clarity: Dr. Keith Channon Talks Revolutionizing Heart Health With AI

Here’s some news to still beating hearts: AI is helping bring some clarity to cardiology. Caristo Diagnostics has developed an AI-powered solution for detecting coronary inflammation in cardiac CT scans.

In this episode of NVIDIA’s AI Podcast, Dr. Keith Channon, the Field Marshal Earl Alexander Professor at the University of Oxford and cofounder and chief medical officer at the startup, speaks with host Noah Kravitz about the technology. Called Caristo, it analyzes radiomic features in CT scan data to identify inflammation in the fat tissue surrounding coronary arteries, a key indicator of heart disease.

Tune in to learn more about how Caristo uses AI to improve treatment plans and risk predictions by providing physicians with a patient-specific readout of inflammation levels.

Show Notes

1:56: What is Caristo and how does it work?
7:11: The key signal of a heart attack
10:34: How did Channon come up with the idea of using AI to drive breakthroughs?
22:40: How much has the CT scan changed over the years?
26:01: What’s ahead for Caristo?
30:14: How to take care of your own heart health

You Might Also Like

Immunai Uses Deep Learning to Develop New Drugs – Ep. 176
What if we could map our immune system to create drugs that can help our bodies win the fight against cancer and other diseases? That’s the big idea behind immunotherapy. The problem: the immune system is incredibly complex. Enter Immunai, a biotechnology company using AI technology to map the human immune system and speed the development of new immunotherapies against cancer and autoimmune diseases.

Overjet on Bringing AI to Dentistry – Ep. 179
Dentists get a bad rap. Dentists also get more people out of more aggravating pain than just about anyone, which is why the more technology dentists have, the better. Overjet, a member of the NVIDIA Inception program for startups, is moving fast to bring AI to dentists’ offices.

Democratizing Drug Discovery With Deep Learning – Ep. 172
It may seem intuitive that AI and deep learning can speed up workflows — including novel drug discovery, a typically years-long and several-billion-dollar endeavor. But, professors Artem Cherkasov and Olexandr Isayev were surprised that no recent academic papers provided a comprehensive, global research review of how deep learning and GPU-accelerated computing impact drug discovery.

Subscribe to the AI Podcast

Get the AI Podcast through iTunes, Google Podcasts, Google Play, Amazon Music, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn.

Make the AI Podcast better: Have a few minutes to spare? Fill out this listener survey.

Read More

Singtel, NVIDIA to Bring Sovereign AI to Southeast Asia

Asia’s Lion City is roaring ahead in AI.

Singtel, a leading communications services provider based in Singapore, will bring the NVIDIA AI platform to businesses in the island nation and beyond.

The mobile and broadband company is building energy-efficient data centers across Southeast Asia accelerated with NVIDIA Hopper architecture GPUs and using NVIDIA AI reference architectures proven to deliver optimal performance.

The data centers will serve as sovereign national resources — AI factories that process the private datasets of companies, startups, universities and governments safely on shore to produce valuable insights.

Singtel’s first AI services will spin up in Singapore, with future data centers under construction in Indonesia and Thailand. From its hub in Singapore, the company has operations that stretch from Australia to India.

Trusted Engines of AI

The new data centers will act as trusted engines of generative AI. Generative AI, the most transformative technology of our time, is attracting users worldwide with its ability to amplify human intelligence and productivity.

Nations are creating large language models tuned to their local dialects, cultures and practices. Singtel sits at the center of such opportunities among Southeast Asia’s vibrant Chinese, Indian, Malay and other communities.

Singtel’s initiative supports Singapore’s national AI strategy to empower its citizens with the latest technology. The plan calls for significantly expanding the country’s compute infrastructure as well as its talent pool of machine learning specialists.

For businesses in the region, having a known, local provider of these computationally intensive services provides a safe, easy on-ramp to generative AI. They can enhance and personalize their products and services while protecting sensitive corporate data.

Taking the Green Path

Singtel is committed to democratizing AI and decarbonizing its operations.

Its latest data centers are being built with an eye to sustainability, including in the selection of materials and use of liquid cooling. They adopt best practices to deliver a power usage effectiveness (PUE) of less than 1.3. PUE, the standard metric for data center efficiency, measures total facility power relative to the power consumed by computing equipment alone.
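As a rough sketch of what a sub-1.3 PUE means in practice, the ratio divides everything the facility draws (including cooling and power-conversion losses) by the power that actually reaches the IT equipment. The load figures below are hypothetical, for illustration only, not Singtel's:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power usage effectiveness: total facility power / IT equipment power.

    A value of 1.0 would mean every watt goes to computing; lower is better.
    """
    return total_facility_kw / it_equipment_kw


# Hypothetical load figures for illustration only.
it_load_kw = 1000.0   # servers, storage, networking
overhead_kw = 280.0   # cooling, lighting, power-conversion losses

print(pue(it_load_kw + overhead_kw, it_load_kw))  # 1.28, under the 1.3 target
```

With 280 kW of overhead on a 1,000 kW IT load, the facility sits at 1.28, just inside the target.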

Singtel will use its Paragon software platform to orchestrate how the new AI applications work in concert with its mobile and broadband services. The combination will enable edge computing services like powering robots and other autonomous systems from AI models running in the cloud.

A Full-Stack Foundation

The company will offer its customers NVIDIA AI Enterprise, a software platform for building and deploying AI applications, including generative AI. Singtel will also be an NVIDIA Cloud Partner, delivering optimized AI services on the NVIDIA platform.

Because Singtel’s data centers use NVIDIA’s proven reference architectures for AI computing, users can employ its services, knowing they’re optimized for leading AI performance.

Singtel already has hands-on experience delivering edge services with NVIDIA AI.

Last May, it demonstrated a digital avatar created with the NVIDIA Omniverse and NVIDIA NeMo platforms that users could interact with over its 5G network. And in 2021, Singtel delivered GPU services as part of a testbed for local government agencies.

New AI Role for Telcos

Singapore’s service provider joins pioneers in France, India, Italy and Switzerland deploying AI factories that deliver generative AI services with data sovereignty.

To learn more about how Singtel and other telcos are embracing generative AI, register for a session on the topic at NVIDIA GTC. The global AI conference runs March 18-21, starting with a keynote by NVIDIA founder and CEO Jensen Huang.

Read More

Behold the ‘Magic Valley’: Brandon Tieh’s Stunning Scene Showcases Peak Creativity, Powered by RTX and AI

Editor’s note: This post is part of our weekly In the NVIDIA Studio series, which celebrates featured artists, offers creative tips and tricks, and demonstrates how NVIDIA Studio technology improves creative workflows. We’re also deep diving on new GeForce RTX 40 Series GPU features, technologies and resources, and how they dramatically accelerate content creation.

This week’s featured In the NVIDIA Studio 3D artist Brandon Tieh puts his artistic talents on full display with his whimsical scene Magic Valley.

An array of colors — from bright crimson to hushed blues and lush greens — help set the mood of the vivid scene, which took inspiration from Tieh’s love for video games, anime and manga.

His diverse pieces — along with the works of fellow Studio artists like Christian Dimitrov, Vera Dementchouk and Eddie Mendoza — take audiences to fantastical getaways in the latest Studio Standouts video.

To fuel creative work, the new GeForce RTX 4080 SUPER is available starting tomorrow in a limited Founders Edition design and as custom boards from partners, starting at $999. It’s equipped with more cores than the GeForce RTX 4080 and includes the world’s fastest GDDR6X video memory, running at 23 Gbps. In 3D apps like Blender, it can run up to 70% faster than previous generations, and in the video editing app Blackmagic Design’s DaVinci Resolve, it accelerates AI effects over 30% faster than the GeForce RTX 3080 Ti.

Get creative and AI superpowers with the GeForce RTX 4080 SUPER.

The RTX 4080 SUPER also brings great frame rates and stunning 4K resolution in fully ray-traced games, including Alan Wake 2, Cyberpunk 2077: Phantom Liberty and Portal with RTX. Discover what RTX 40 SUPER Series graphics cards and systems are available.

Stepping Into Magical Worlds

Tieh’s scene began as a sketch of what he envisioned to be a giant door in a grassy field.

“A very broad and abstract thought — but that’s sort of the point of fantasy,” he explained.

Tieh specializes in building impressive worlds.

To bring it to life, he began by gathering assets such as rocks and grass from Quixel Megascans, the Unreal Engine Marketplace and ArtStation Marketplace.

The door required extra customization, so he modeled one from scratch, first sculpting it in ZBrush before importing it into Adobe Substance 3D Painter for a quick texture pass. Tieh’s GeForce RTX graphics card used RTX-accelerated light and ambient occlusion to bake assets in mere seconds.

Lighting will heavily influence the final look, so only a basic texture pass was needed in Substance 3D Painter.

Next, Tieh tackled modeling the obelisks and pillars in Blender, where RTX-accelerated OptiX ray tracing in the viewport ensured highly interactive, photorealistic rendering.

Modeling and UV layouts of pillars and obelisks.

He then unwrapped his 3D assets onto a 2D plane, a key process called UV unwrapping, and applied textures to the models’ surfaces to enhance realism.

Tieh’s customized door in Unreal Engine.

With the textured assets in place, Tieh next built the scene in Unreal Engine. His technique involves focusing on the big shapes by looking at the scene in a smaller thumbnail view, then flipping the canvas to refresh his perspective — very similar to the approach concept artists use. He adjusted lighting by deploying the same technique.

“Magic Valley” adjusting atmospheric light in the scene.

“I’ve used NVIDIA GPUs all my life — they’re super reliable and high performing without any issues.”  — Brandon Tieh

Unreal Engine users can tap NVIDIA DLSS Super Resolution to increase the interactivity of the viewport by using AI to upscale frames rendered at lower resolution, and enhance image quality using DLSS Ray Reconstruction.

Fog is another major component of the scene. “Fog is generally darker and more opaque in the background and becomes lighter and more translucent as it approaches the foreground,” said Tieh. He primarily used fog cards in Unreal Engine’s free Blueprints Visual Scripting system to add a paint-like effect.

The majority of lighting was artificial, meaning Tieh had to use a significant number of individually placed light sources, but “it looks very believable if executed well,” he explained.

The menu on the right lists the roughly 100 light actors Tieh used.

From there, Tieh exported final renders with ease and speed thanks to his RTX GPU.

 

“There’s no secret tricks or fancy engine features,” said Tieh. “It’s all about sticking to the basics and fundamentals of art, as well as trusting your own artistic eye.”

3D artist Brandon Tieh.

Check out Tieh’s impressive portfolio on ArtStation.

Follow NVIDIA Studio on Instagram, X and Facebook. Access tutorials on the Studio YouTube channel and get updates directly in your inbox by subscribing to the Studio newsletter.

Read More

Boston Children’s Researchers, in Joint Effort, Deploy AI Across Their Hip Clinic to Support Patients, Doctors

Hip disorders, which include some of the world’s most common joint diseases, are especially prevalent among adolescents and young adults, causing stiffness, pain or a limp. But they can be hard to diagnose using 2D medical imaging alone.

Helping to treat these disorders, Boston Children’s Hospital’s (BCH’s) Adolescent and Young Adult Hip Preservation Program is the first in the U.S. to deploy a fully automated AI tool across its clinic.

Called VirtualHip, the tool can create a 3D model of a hip from routine medical images, assess anatomy and movement-related issues, and provide clinicians with diagnostic and treatment guidance. It was built at an orthopedic research lab at BCH, Harvard Medical School’s primary pediatric hospital, using the NVIDIA DGX platform.

A team of 10 researchers, including engineers, computer scientists, orthopedic surgeons, radiologists and software developers, is working on the project, funded in part by an NVIDIA Academic Hardware Grant.

“Using AI, clinicians can get more value out of the clinical data they routinely collect,” said Dr. Ata Kiapour, the lab’s principal investigator, director of the musculoskeletal informatics group at BCH and assistant professor of orthopedic surgery at Harvard Medical School. “This tool can augment their performance to make better choices in diagnosis and treatment, and free their time to focus on giving patients the best care.”

Getting Straight to the Joint

Often, clinicians must determine a treatment plan — with levels of intervention ranging from physical therapy to total hip replacement — based on just a 2D image, such as an X-ray, CT scan or MRI. Automatically creating 3D models from these images, and using them to conduct comprehensive joint assessments, can help improve the accuracy of diagnosis to inform treatment planning.

“I started a postdoc with an orthopedic surgeon at BCH in 2013, when I saw how an engineer or scientist could help with patient treatment,” said Dr. Kiapour, who’s also trained as a biomedical engineer. “Over the years, I saw that hospitals have a ton of data, but efficient data processing for clinical use was a huge, unmet need.”

VirtualHip, fully integrated with BCH’s hip clinic and radiology database, helps to fill this gap.

Clinicians can log in to the software tool through a web-based portal, view and interact with 3D simulations of 2D medical images, submit analysis requests and see results within an hour — 4x quicker on average than receiving a radiology report after imaging.

The tool, which produces 3D models with a margin of error less than one millimeter, can assess morphological abnormalities and identify issues related to hip motion, such as pain from hip bones rubbing against each other.

VirtualHip was developed using a database with tens of millions of clinical notes and imaging data from patients seen at BCH over the past two decades. Using natural language processing models and computer vision algorithms, Dr. Kiapour’s team processed this data, training the tool to analyze normal versus pathologic hip development and identify factors influencing the injury risk and treatment outcomes.

This will enable VirtualHip to offer patient-specific risk assessment and treatment planning by comparing current patients to previously treated ones with similar demographic backgrounds.

“The hardware support that we got from NVIDIA made all of this possible,” Dr. Kiapour said. “DGX enabled advanced computations on more than 20 years’ worth of historical data for our fine-tuned clinical AI model.”

Clearer Explanations, Better Outcomes for Patients

In addition to assisting doctors in diagnosis and treatment planning, VirtualHip helps patients better understand their condition.

“When patients look at an X-ray, it doesn’t look like a real hip — but with a 3D model that can be rotated, the doctor can show exactly where the joints are impinging or unstable,” Dr. Kiapour said. “This lets the patient better understand their disorder, which usually makes them more compliant with whatever the doctor’s prescribing.”

This type of visualization is especially helpful for children and younger adults, Kiapour added.

The VirtualHip project is under continuous development, including work toward a patient-facing platform — using large language models and generative AI — for personalized analyses and treatment recommendations.

The BCH researchers plan to commercialize the product for use in other hospitals.

Subscribe to NVIDIA healthcare news.

Read More

Sharper Image: GeForce NOW Update Delivers Stunning Visuals to Android Devices

This GFN Thursday levels up PC gaming on mobile with higher-resolution support on Android devices.

This week also brings 10 new games to the GeForce NOW library, including Enshrouded. 

Pixel Perfect

Android 1440p on GeForce NOW
New Year’s resolutions.

GeForce NOW transforms nearly any device into a high-powered PC gaming rig, and members streaming on Android can now access that power from the palms of their hands. The GeForce NOW Android app, rolling out now to members, unlocks a new level of visual quality for Ultimate members gaming on mobile, with improved support for streaming up to 1440p resolution at 120 frames per second.

Explore the vibrant neon landscapes of Cyberpunk 2077, stream triple-A titles like Baldur’s Gate 3 and Monster Hunter: World, and play the latest releases in the cloud, including Prince of Persia: The Lost Crown and Exoprimal — all on the go with higher resolutions for more immersive gameplay.

Ultimate members can stream these and over 1,800 titles from the GeForce NOW library on select 120Hz Android phones and tablets at pixel-perfect quality. Plus, they can take gameplay even further with eight-hour sessions and tap GeForce RTX 4080-powered servers for faster access to their gaming libraries.
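To get a sense of why streaming at this quality depends on video compression, consider the uncompressed bit rate of a 1440p, 120-frames-per-second feed. This is a back-of-the-envelope sketch; the article doesn't describe GeForce NOW's actual codec or bit rates:

```python
def raw_throughput_gbps(width: int, height: int, fps: int,
                        bits_per_pixel: int = 24) -> float:
    """Uncompressed video bit rate in gigabits per second."""
    return width * height * fps * bits_per_pixel / 1e9


# 1440p (2560 x 1440) at 120 frames per second, 24-bit color.
rate = raw_throughput_gbps(2560, 1440, 120)
print(f"{rate:.1f} Gbps uncompressed")  # ~10.6 Gbps
```

Roughly 10.6 Gbps raw, far beyond typical home connections, which is why the stream is compressed down to a small fraction of that before it reaches the device.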

Sign up for an Ultimate membership today and check out this article for more details on how to set up Android devices for PC gaming on the go.

Got Games?

Stargate Timekeepers on GeForce NOW
“We fight because we choose to.”

Lead a team of specialists through a story-driven campaign set in the SG-1 universe of Stargate: Timekeepers from Slitherine Ltd. Sneak characters behind enemy lines, use their unique skills, craft the perfect plan to unravel a time-loop mystery, and defeat the Goa’uld threat. It’s available to stream from the cloud this week.

More titles joining the cloud this week include:

  • Stargate: Timekeepers (New release on Steam, Jan. 23)
  • Enshrouded (New release on Steam, Jan. 24)
  • Firefighting Simulator – The Squad (Steam)
  • Metal: Hellsinger (Xbox, available on the Microsoft Store)
  • Road 96: Mile 0 (Xbox, available on the Microsoft Store)
  • Shadow Tactics: Blades of the Shogun (Steam)
  • Shadow Tactics: Blades of the Shogun – Aiko’s Choice (Steam)
  • Solasta: Crown of the Magister (Steam)
  • Tails Noir (Xbox, available on the Microsoft Store)
  • Wobbly Life (Steam)

Games from Spike Chunsoft will be removed from the GeForce NOW library at the request of the publisher. Fourteen titles are leaving on Friday, Feb. 2, so be sure to catch them before they go:

  • 428 Shibuya Scramble
  • AI: The Somnium Files
  • Conception PLUS: Maidens of the Twelve Stars
  • Danganronpa: Trigger Happy Havoc
  • Danganronpa 2: Goodbye Despair
  • Danganronpa V3: Killing Harmony
  • Danganronpa Another Episode: Ultra Despair Girls
  • Fire Pro Wrestling World
  • Re: ZERO – Starting Life in Another World – The Prophecy of the Throne
  • RESEARCH and DESTROY
  • Shiren the Wanderer: The Tower of Fortune and the Dice of Fate
  • STEINS;GATE
  • Zanki Zero: Last Beginning
  • Zero Escape: The Nonary Games

What are you planning to play this weekend? Let us know on X or in the comments below.

Read More

US National Science Foundation Launches National AI Research Resource Pilot

In a major stride toward building a shared national research infrastructure, the U.S. National Science Foundation has launched the National Artificial Intelligence Research Resource pilot program with significant support from NVIDIA.

The initiative aims to broaden access to the tools needed to power responsible AI discovery and innovation. It was announced Wednesday in partnership with 10 other federal agencies as well as private-sector, nonprofit and philanthropic organizations.

“The breadth of partners that have come together for this pilot underscores the urgency of developing a National AI Research Resource for the future of AI in America,” said NSF Director Sethuraman Panchanathan. “By investing in AI research through the NAIRR pilot, the United States unleashes discovery and impact and bolsters its global competitiveness.”

NVIDIA’s commitment of $30 million in technology contributions over two years is a key factor in enlarging the scale of the pilot, fueling the potential for broader achievements and accelerating the momentum toward full-scale implementation.

“The NAIRR is a vision of a national research infrastructure that will provide access to computing, data, models and software to empower researchers and communities,” said Katie Antypas, director of the Office of Advanced Cyberinfrastructure at the NSF.

“Our primary goals for the NAIRR pilot are to support fundamental AI research and domain-specific research applying AI, reach broader communities, particularly those currently unable to participate in the AI innovation ecosystem, and refine the design for the future full NAIRR,” Antypas added.

Accelerating Access to AI

“AI is increasingly defining our era, and its potential can best be fulfilled with broad access to its transformative capabilities,” said NVIDIA founder and CEO Jensen Huang.

“Partnerships are really at the core of the NAIRR pilot,” said Tess DeBlanc-Knowles, NSF’s special assistant to the director for artificial intelligence.

“It’s been incredibly impressive to see this breadth of partners come together in these 90 days, bringing together government, industry, nonprofits and philanthropies,” she added. “Our industry and nonprofit partners are bringing critical expertise and resources, which are essential to advance AI and move forward with trustworthy AI initiatives.”

NVIDIA’s collaboration with scientific centers aims to significantly scale up educational and workforce training programs, enhancing AI literacy and skill development across the scientific community.

NVIDIA will harness insights from researchers using its platform, offering an opportunity to refine and enhance the effectiveness of its technology for science, and supporting continuous advancement in AI applications.

“With NVIDIA AI software and supercomputing, the scientists, researchers and engineers of the extended NSF community will be able to utilize the world’s leading infrastructure to fuel a new generation of innovation,” Huang said.

The Foundation for Modern AI

Accelerating both AI research and research done with AI, NVIDIA’s contributions include NVIDIA DGX Cloud AI supercomputing resources and NVIDIA AI Enterprise software.

Offering full-stack accelerated computing from systems to software, NVIDIA AI provides the foundation for generative AI, with significant adoption across research and industries.

Broad Support Across the US Government

As part of this national endeavor, the NAIRR pilot brings together a coalition of government partners, showcasing a unified approach to advancing AI research.

Its partners include the U.S. National Science Foundation, U.S. Department of Agriculture, U.S. Department of Energy, U.S. Department of Veterans Affairs, National Aeronautics and Space Administration, National Institutes of Health, National Institute of Standards and Technology, National Oceanic and Atmospheric Administration, Defense Advanced Research Projects Agency, U.S. Patent and Trademark Office and the U.S. Department of Defense.

The NAIRR pilot builds on the United States’ rich history of leading large-scale scientific endeavors, such as the creation of the internet, which, in turn, led to the advancement of AI.

Leading in Advanced AI

NAIRR promises to drive innovations across various sectors, from healthcare to environmental science, positioning the U.S. at the forefront of global AI advancements.

The launch meets a goal outlined in Executive Order 14110, signed by President Biden in October 2023, directing NSF to launch a pilot for the NAIRR within 90 days.

The NAIRR pilot will provide access to advanced computing, datasets, models, software, training and user support to U.S.-based researchers and educators.

“Smaller institutions, rural institutions, institutions serving underrepresented populations are key communities we’re trying to reach with the NAIRR,” said Antypas. “These communities are less likely to have resources to build their own computing or data resources.”

Paving the Way for Future Investments

As the pilot expedites the proof of concept, future investments in the NAIRR will democratize access to AI innovation and support critical work advancing the development of trustworthy AI.

The pilot will initially support AI research to advance safe, secure and trustworthy AI as well as the application of AI to challenges in healthcare and environmental and infrastructure sustainability.

Researchers can apply for initial access to NAIRR pilot resources through the NSF. The NAIRR pilot welcomes additional private-sector and nonprofit partners.

Those interested are encouraged to reach out to NSF at nairr_pilot@nsf.gov.

Read More

High Can See Clearly Now: AI-Powered NVIDIA RTX Video HDR Transforms Standard Video Into Stunning High Dynamic Range

Editor’s note: This post is part of our weekly In the NVIDIA Studio series, which celebrates featured artists, offers creative tips and tricks, and demonstrates how NVIDIA Studio technology improves creative workflows. We’re also deep diving on new GeForce RTX 40 Series GPU features, technologies and resources, and how they dramatically accelerate content creation.

RTX Video HDR — first announced at CES — is now available for download through the January Studio Driver. It uses AI to transform standard dynamic range video playing in internet browsers into stunning high dynamic range (HDR) on HDR10 displays.

PC game modders now have a powerful new set of tools to use with the release of the NVIDIA RTX Remix open beta.

It features full ray tracing, NVIDIA DLSS, NVIDIA Reflex, modern physically based rendering assets and generative AI texture tools so modders can remaster games more efficiently than ever.

Pick up the new GeForce RTX 4070 Ti SUPER available from custom board partners in stock-clocked and factory-overclocked configurations to enhance creating, gaming and AI tasks.

Get creative superpowers with the GeForce RTX 4070 Ti SUPER available now.

Part of the 40 SUPER Series announced at CES, it’s equipped with more CUDA cores than the RTX 4070, a frame buffer increased to 16GB, and a 256-bit bus — perfect for video editing and rendering large 3D scenes. It runs up to 1.6x faster than the RTX 3070 Ti and 2.5x faster with DLSS 3 in the most graphics-intensive games.
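A wider memory bus matters because peak memory bandwidth is the bus width (in bytes) times the per-pin data rate. The sketch below assumes a 21 Gbps GDDR6X data rate, a figure not stated in this article, purely to illustrate the arithmetic for a 256-bit bus:

```python
def peak_bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    """Peak memory bandwidth: bytes moved per second across the full bus."""
    return bus_width_bits / 8 * data_rate_gbps


# 256-bit bus as stated above; 21 Gbps per pin is an assumed data rate,
# not a figure from this article.
print(peak_bandwidth_gb_s(256, 21.0))  # 672.0 GB/s
```

The same formula shows why bus width is a headline spec: at a fixed data rate, going from 192 to 256 bits adds a third more bandwidth.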

And this week’s featured In the NVIDIA Studio technical artist Vishal Ranga shares his vivid 3D scene Disowned — powered by NVIDIA RTX and Unreal Engine with DLSS.

RTX Video HDR Delivers Dazzling Detail

Using the power of Tensor Cores on GeForce RTX GPUs, RTX Video HDR allows gamers and creators to maximize their HDR panel’s ability to display vivid, dynamic colors, preserving intricate details that may be inadvertently lost due to video compression.

RTX Video HDR and RTX Video Super Resolution can be used together to produce the clearest livestreamed video anywhere, anytime. These features work on Chromium-based browsers such as Google Chrome or Microsoft Edge.

To enable RTX Video HDR:

  1. Download and install the January Studio Driver.
  2. Ensure Windows HDR features are enabled by navigating to System > Display > HDR.
  3. Open the NVIDIA Control Panel and navigate to Adjust video image settings > RTX Video Enhancement — then enable HDR.

Standard dynamic range video will then automatically convert to HDR, displaying remarkably improved details and sharpness.
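Conceptually, SDR-to-HDR conversion expands a gamma-encoded signal confined to roughly 0-100 nits into a display's much larger brightness range. The toy inverse tone map below is purely illustrative; RTX Video HDR uses a learned AI model, not a fixed curve like this:

```python
def sdr_to_nits(v: float, peak_nits: float = 600.0, gamma: float = 2.2) -> float:
    """Toy inverse tone map: linearize a gamma-encoded SDR value in [0, 1]
    and scale it to an HDR display's peak luminance in nits.

    Illustrative only; RTX Video HDR uses a learned model, not a fixed curve.
    """
    linear = v ** gamma        # undo the SDR gamma encoding
    return linear * peak_nits  # stretch into the display's brightness range


print(sdr_to_nits(0.5))  # midtones stay modest
print(sdr_to_nits(1.0))  # full white maps to the 600-nit peak
```

Note how a naive curve keeps midtones dim while pushing highlights toward the panel's peak; the appeal of an AI approach is recovering detail that a one-size-fits-all curve would crush or blow out.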

RTX Video HDR is among the RTX-powered apps enhancing everyday PC use, productivity, creating and gaming. NVIDIA Broadcast supercharges mics and cams; NVIDIA Canvas turns simple brushstrokes into realistic landscape images; and NVIDIA Omniverse seamlessly connects 3D apps and creative workflows. Explore exclusive Studio tools, including industry-leading NVIDIA Studio Drivers — free for RTX graphics card owners — which support the latest creative app updates, AI-powered features and more.

RTX Video HDR requires an RTX GPU connected to an HDR10-compatible monitor or TV. For additional information, check out the RTX Video FAQ.

Introducing the Remarkable RTX Remix Open Beta

Built on NVIDIA Omniverse, the RTX Remix open beta is available now.

The NVIDIA RTX Remix open beta is out now.

It allows modders to easily capture game assets, automatically enhance materials with generative AI tools, reimagine assets via Omniverse-connected apps and Universal Scene Description (OpenUSD), and quickly create stunning RTX remasters of classic games with full ray tracing and NVIDIA DLSS technology.

RTX Remix has already delivered stunning remasters, such as Portal with RTX and the modder-made Portal: Prelude RTX. Orbifold Studios is now using the technology to develop Half-Life 2 RTX: An RTX Remix Project, a community remaster of one of the highest-rated games of all time. Check out the gameplay trailer, showcasing Orbifold Studios’ latest updates to Ravenholm:

Learn more about the RTX Remix open beta and sign up to gain access.

Leveling Up With RTX

Vishal Ranga has a decade’s worth of experience in the gaming industry, where he pursues level design.

“I’ve loved playing video games since forever, and that curiosity led me to game design,” he said. “A few years later, I found my sweet spot in technical art.”

Ranga specializes in level design.

His stunning scene Disowned was born out of experimentation with Unreal Engine’s new ray-traced global illumination lighting capabilities.

Remarkably, he skipped the concepting process — the entire project was conceived solely from Ranga’s imagination.

Applying the water shader and mocking up the lighting early helped Ranga set up the mood of the scene. He then updated old assets and searched the Unreal Engine store for new ones — what he couldn’t find, like fishing nets and custom flags, he created from scratch.

Ranga meticulously organizes assets.

“I chose a GeForce RTX GPU to use ray-traced dynamic global illumination with RTX cards for natural, more realistic light bounces.” — Vishal Ranga

Ranga’s GeForce RTX graphics card unlocked RTX-accelerated rendering for high-fidelity, interactive visualization of 3D designs during virtual production.

Next, he tackled shader work, blending in moss and muck into models of wood, nets and flags. He also created a volumetric local fog shader to complement the assets as they pass through the fog, adding greater depth to the scene.

Shaders add extraordinary depth and visual detail.

Ranga then polished everything up. He first used a water shader to add realism to reflections, surface moss and subtle waves, then tinkered with global illumination and reflection effects, along with other post-process settings.

Materials come together to deliver realism and higher visual quality.

Ranga used Unreal Engine’s internal high-resolution screenshot feature and sequencer to capture renders. This was achieved by cranking up screen resolution to 200%, resulting in crisper details.

Throughout, DLSS enhanced Ranga’s creative workflow, allowing for smooth scene movement while maintaining immaculate visual quality.

When finished with adjustments, Ranga exported the final scene in no time thanks to his RTX GPU.

 

Ranga encourages budding artists who are excited by the latest creative advances but wondering where to begin to “practice your skills, prioritize the basics.”

“Take the time to practice and really experience the highs and lows of the creation process,” he said. “And don’t forget to maintain good well-being to maximize your potential.”

3D artist Vishal Ranga.

Check out Ranga’s portfolio on ArtStation.

Follow NVIDIA Studio on Instagram, X and Facebook. Access tutorials on the Studio YouTube channel and get updates directly in your inbox by subscribing to the Studio newsletter. 


NVIDIA DRIVE Partners Showcase Cutting-Edge Innovations in Automated and Autonomous Driving


The automotive industry is being transformed by the integration of cutting-edge technologies into software-defined cars.

At CES, NVIDIA invited industry leaders to share their perspectives on how technology, especially AI and computing power, is shaping the future of transportation.

Watch the video to learn more from NVIDIA’s auto partners.

Redefining Possibilities Through Partnership

Magnus Ostberg, chief software officer at Mercedes-Benz, underscores how the company’s partnership with NVIDIA helps push technological boundaries. “[NVIDIA] enables us to go further to bring automated driving to the next level and into areas that we couldn’t go before,” he says.

Computing Power: The Driving Force Behind Autonomy

Shawn Kerrigan, chief operating officer and cofounder at Plus, emphasizes the role of computing power, saying, “Autonomous technology requires immense computing power in order to really understand the world around it and make safe driving decisions.”

“What was impossible to do previously because computing wasn’t strong enough is now doable,” says Eran Ofri, CEO of Imagry. “This is an enabler for the progress of the autonomous driving industry.”

“We wanted a platform that has a track record of being deployed in the automotive industry,” adds Stefan Solyom, chief technology officer at Pebble. “This is what NVIDIA can give us.”

And Martin Kristensson, head of product strategy at Volvo Cars, says, “We partner with NVIDIA to get the best compute that we can. More compute in the car means that we can be more aware of the environment around us and reacting earlier and being even safer.”

The Critical Role of AI

Don Burnette, CEO and founder of Kodiak Robotics, states, “NVIDIA makes best-in-class hardware accelerators, and I think it’s going to play a large role in the AI developments for self-driving going forward.”

“Driving as a routine task is tedious,” adds Tony Han, CEO and cofounder of WeRide. “We want to alleviate people from the burden of driving to give back the time. NVIDIA is the backbone of our AI engine.”

And Thomas Ingenlath, CEO of Polestar, says, “Our Polestar 3 sits on the NVIDIA DRIVE platform. This is, of course, very much based on AI technology — and it’s really fascinating and a completely new era for the car.”

Simulation Is Key

Ziv Binyamini, CEO of Foretellix, highlights the role of simulation in development and verification. “Simulation is crucial for the development of autonomous systems,” he says.

Bruce Baumgartner, vice president of supply chain at Zoox, adds, “We have been leveraging NVIDIA’s technology first and foremost on-vehicle to power the Zoox driver. We also leverage NVIDIA technologies in our cloud infrastructure. In particular, we do a lot of work in our simulator.”

Saving Lives With Autonomy

Austin Russell, CEO and founder of Luminar, highlights the opportunity to save lives by using new technology, saying, “The DRIVE platform has been incredibly helpful to be able to actually enable autonomous driving capabilities as well as enhance safety capabilities on vehicles. To be able to have an opportunity to save as many as 100 million lives and 100 trillion hours of people’s time over the next 100 years — everything that we do at the company rolls up to that.”

“Knowing that [this technology] is in vehicles worldwide and saves lives on the road each and every day — the impact that you deliver as you keep people and family safe is amazingly rewarding,” adds Tal Krzypow, vice president of product and strategy at Cipia.

Technology Helps Solve Major Challenges

Shiv Tasker, global industry vice president at Capgemini, reflects on the role of technology in addressing global challenges, saying, “Our modern world is driven by technology, and yet we face tremendous challenges. Technology is the answer. We have to solve the major issues so that we leave a better place for our children and our grandchildren.”

Learn more about the NVIDIA DRIVE platform and how it’s helping industry leaders redefine transportation.


How Amazon and NVIDIA Help Sellers Create Better Product Listings With AI


It’s hard to imagine an industry more competitive — or fast-paced — than online retail.

Sellers need to create attractive, informative product listings that engage shoppers, capture attention and generate trust.

Amazon uses optimized containers on Amazon Elastic Compute Cloud (Amazon EC2) with NVIDIA Tensor Core GPUs to power a generative AI tool that finds this balance at the speed of modern retail.

Amazon’s new generative AI capabilities help sellers seamlessly create compelling titles, bullet points, descriptions, and product attributes.

To get started, Amazon identifies listings whose content could be improved and uses generative AI to produce high-quality content automatically. Sellers review the suggested content and can either provide feedback or accept the changes to the Amazon catalog.

Previously, creating detailed product listings required significant time and effort for sellers, but this simplified process gives them more time to focus on other tasks.

TensorRT-LLM, NVIDIA’s open-source inference software, makes AI inference faster and more efficient. It works with large language models, such as Amazon’s models for the above capabilities, which are trained on vast amounts of text.

TensorRT-LLM is available today on GitHub and can be accessed through NVIDIA AI Enterprise, which offers enterprise-grade security, support and reliability for production AI.

On NVIDIA H100 Tensor Core GPUs, TensorRT-LLM enables up to an 8x speedup on foundation LLMs such as Llama 1 and 2, Falcon, Mistral, MPT, ChatGLM, Starcoder and more.

It also supports multi-GPU and multi-node inference, in-flight batching, paged attention and the Hopper Transformer Engine with FP8 precision, all of which improve latency and efficiency for the seller experience.
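The in-flight batching mentioned above can be pictured with a toy scheduler: requests of different lengths share a batch, and a finished request frees its slot immediately instead of waiting for the longest request in the batch to drain. This is a minimal sketch of the scheduling idea only, not the TensorRT-LLM API, and the function name is illustrative:

```python
# Toy sketch of in-flight (continuous) batching. Requests with
# different output lengths share a batch, and finished requests are
# replaced immediately instead of waiting for the whole batch to finish.

from collections import deque

def run_inflight_batching(requests, max_batch_size):
    """requests: list of (request_id, tokens_to_generate) pairs."""
    waiting = deque(requests)
    active = {}            # request_id -> tokens still to generate
    completion_order = []
    steps = 0

    while waiting or active:
        # Admit new requests into free batch slots at every step.
        while waiting and len(active) < max_batch_size:
            rid, length = waiting.popleft()
            active[rid] = length

        # One decoding step generates one token for every active request.
        steps += 1
        for rid in list(active):
            active[rid] -= 1
            if active[rid] == 0:
                del active[rid]        # slot freed mid-batch
                completion_order.append(rid)

    return completion_order, steps

order, steps = run_inflight_batching(
    [("a", 2), ("b", 5), ("c", 1), ("d", 3)], max_batch_size=2)
```

In this toy example, static batching (two fixed batches of two requests) would take eight decoding steps, while in-flight batching finishes all four requests in six.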

By using TensorRT-LLM and NVIDIA GPUs, Amazon doubled its generative AI tool’s inference efficiency in terms of cost and GPUs needed, and reduced inference latency by 3x compared with an earlier implementation without TensorRT-LLM.

The efficiency gains make the tool more environmentally friendly, and the 3x latency improvement makes Amazon Catalog’s generative capabilities more responsive.

The generative AI capabilities can save sellers time and provide richer information with less effort. For example, it can enrich a listing for a wireless mouse with an ergonomic design, long battery life, adjustable cursor settings, and compatibility with various devices. It can also generate product attributes such as color, size, weight, and material. These details can help customers make informed decisions and reduce returns.

With generative AI, Amazon’s sellers can quickly and easily create more engaging listings while consuming less energy, making it possible to reach more customers and grow their businesses faster.

Developers can start with TensorRT-LLM today, with enterprise support available through NVIDIA AI Enterprise.


Buried Treasure: Startup Mines Clean Energy’s Prospects With Digital Twins


Mark Swinnerton aims to fight climate change by transforming abandoned mines into storage tanks of renewable energy.

The CEO of startup Green Gravity is prototyping his ambitious vision in a warehouse 60 miles south of Sydney, Australia, and simulating it in NVIDIA Omniverse, a platform for building 3D workflows and applications.

The concept requires some heavy lifting. Solar and wind energy will pull steel blocks weighing as much as 30 cars each up shafts taller than a New York skyscraper, storing potential energy that can turn turbines whenever needed.
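The recoverable energy of a raised mass follows the familiar formula E = mgh. As a back-of-the-envelope illustration of why heavy blocks and deep shafts matter, here is a sketch with assumed figures; the block mass and shaft height below are illustrative guesses, not Green Gravity’s specifications:

```python
# Back-of-the-envelope gravitational energy storage estimate,
# using assumed figures (not Green Gravity's specifications):
# a 45-tonne block (roughly 30 mid-size cars) raised 300 m.

G = 9.81           # gravitational acceleration, m/s^2
mass_kg = 45_000   # assumed: ~30 cars at ~1,500 kg each
height_m = 300     # assumed shaft height, on the order of a skyscraper

energy_joules = mass_kg * G * height_m    # E = m * g * h
energy_kwh = energy_joules / 3.6e6        # 1 kWh = 3.6 MJ

print(f"{energy_kwh:.1f} kWh per block")  # about 36.8 kWh
```

A real installation would stack many blocks per shaft and accept conversion losses on the way back to the grid, but the formula shows how energy capacity scales linearly with both mass and depth.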

A Distributed Energy Network

Swinnerton believes it’s the optimal way to store renewable energy because nearly a million abandoned mine shafts are scattered around the globe, many of them already connected to the grid. His mechanical system is also cheaper and greener than alternatives such as massive lithium batteries, which are better suited to electric vehicles.

Mark Swinnerton, CEO of Green Gravity.

Officials in Australia, India and the U.S. are interested in the concept, and a state-owned mine operator in Romania is conducting a joint study with Green Gravity.

“We have a tremendous opportunity for repurposing a million mines,” said Swinnerton, who switched gears after a 20-year career at BHP Group, one of the world’s largest mining companies, determined to combat climate change.

A Digital-First Design

A longtime acquaintance saw an opportunity to accelerate Swinnerton’s efforts with a digital twin.

“I was fascinated by the Green Gravity idea and suggested taking a digital-first approach, using data as a differentiator,” said Daniel Keys, an IT expert and executive at xAmplify, a provider of accelerated computing services.

AI-powered simulations could speed the design and deployment of the novel concept, said Keys, who met Swinnerton 25 years earlier at one of their first jobs, flipping burgers at a fast-food stand.

Today, they’ve got a digital prototype cooking on xAmplify’s Scaile computer, based on NVIDIA DGX systems. It’s already accelerating Green Gravity’s proof of concept.

“Thanks to what we inferred with a digital twin, we’ve been able to save 40% of the costs of our physical prototype by shifting from three weights to two and moving them 10 instead of 15 meters vertically,” said Swinnerton.

Use Cases Enabled by Omniverse

It’s the first of many use cases Green Gravity is developing in Omniverse.

Once the prototype is done, the simulation will help scale the design to mines as deep as 7,000 feet, or about six Empire State Buildings stacked on top of each other. Ultimately, the team will build in Omniverse a dashboard to control and monitor sensor-studded facilities without the safety hazards of sending a person into the mine.

Green Gravity’s physical prototype and test lab.

“We expect to cut tens of millions of dollars off the estimated $100 million for the first site because we can use simulations to lower our risks with banks and insurers,” said Swinnerton. “That’s a real tantalizing opportunity.”

Virtual Visualization Tools

Operators will track facilities remotely using visualization systems equipped with NVIDIA A40 GPUs and can stream their visuals to tablets thanks to the TabletAR extension in the Omniverse Spatial Framework.

xAmplify’s workflow uses a number of software components such as NVIDIA Modulus, a framework for physics-informed machine learning models.

“We also use Omniverse as a core integration fabric that lets us connect a half-dozen third-party tools operators and developers need, like Siemens PLM for sensor management and Autodesk for design,” Keys said.

Omniverse eases the job of integrating third-party applications into one 3D workflow because it’s based on the OpenUSD standard.

Along the way, AI sifts reams of data about the thousands of available mines to select optimal sites, predicting their potential for energy storage. Machine learning will also help optimize designs for each site.

Taken together, it’s a digital pathway Swinnerton believes will lead to commercial operations for Green Gravity within the next couple of years.

Green Gravity is the latest customer of xAmplify’s Canberra data center, which serves Australian government agencies, national defense contractors and an expanding set of enterprise users with a full stack of NVIDIA accelerated software.

Learn more about how AI is transforming renewables, including wind farm optimization, solar energy generation and fusion energy.
