US National Science Foundation Launches National AI Research Resource Pilot

In a major stride toward building a shared national research infrastructure, the U.S. National Science Foundation has launched the National Artificial Intelligence Research Resource pilot program with significant support from NVIDIA.

The initiative aims to broaden access to the tools needed to power responsible AI discovery and innovation. It was announced Wednesday in partnership with 10 other federal agencies as well as private-sector, nonprofit and philanthropic organizations.

“The breadth of partners that have come together for this pilot underscores the urgency of developing a National AI Research Resource for the future of AI in America,” said NSF Director Sethuraman Panchanathan. “By investing in AI research through the NAIRR pilot, the United States unleashes discovery and impact and bolsters its global competitiveness.”

NVIDIA’s commitment of $30 million in technology contributions over two years is a key factor in enlarging the scale of the pilot, fueling the potential for broader achievements and accelerating the momentum toward full-scale implementation.

“The NAIRR is a vision of a national research infrastructure that will provide access to computing, data, models and software to empower researchers and communities,” said Katie Antypas, director of the Office of Advanced Cyberinfrastructure at the NSF.

“Our primary goals for the NAIRR pilot are to support fundamental AI research and domain-specific research applying AI, reach broader communities, particularly those currently unable to participate in the AI innovation ecosystem, and refine the design for the future full NAIRR,” Antypas added.

Accelerating Access to AI

“AI is increasingly defining our era, and its potential can best be fulfilled with broad access to its transformative capabilities,” said NVIDIA founder and CEO Jensen Huang.

“Partnerships are really at the core of the NAIRR pilot,” said Tess DeBlanc-Knowles, NSF’s special assistant to the director for artificial intelligence.

“It’s been incredibly impressive to see this breadth of partners come together in these 90 days, bringing together government, industry, nonprofits and philanthropies,” she added. “Our industry and nonprofit partners are bringing critical expertise and resources, which are essential to advance AI and move forward with trustworthy AI initiatives.”

NVIDIA’s collaboration with scientific centers aims to significantly scale up educational and workforce training programs, enhancing AI literacy and skill development across the scientific community.

NVIDIA will harness insights from researchers using its platform, offering an opportunity to refine and enhance the effectiveness of its technology for science, and supporting continuous advancement in AI applications.

“With NVIDIA AI software and supercomputing, the scientists, researchers and engineers of the extended NSF community will be able to utilize the world’s leading infrastructure to fuel a new generation of innovation,” Huang said.

The Foundation for Modern AI

Accelerating both AI research and research done with AI, NVIDIA’s contributions include NVIDIA DGX Cloud AI supercomputing resources and NVIDIA AI Enterprise software.

Offering full-stack accelerated computing from systems to software, NVIDIA AI provides the foundation for generative AI, with significant adoption across research and industries.

Broad Support Across the US Government

As part of this national endeavor, the NAIRR pilot brings together a coalition of government partners, showcasing a unified approach to advancing AI research.

Its partners include the U.S. National Science Foundation, U.S. Department of Agriculture, U.S. Department of Energy, U.S. Department of Veterans Affairs, National Aeronautics and Space Administration, National Institutes of Health, National Institute of Standards and Technology, National Oceanic and Atmospheric Administration, Defense Advanced Research Projects Agency, U.S. Patent and Trademark Office and the U.S. Department of Defense.

The NAIRR pilot builds on the United States’ rich history of leading large-scale scientific endeavors, such as the creation of the internet, which, in turn, led to the advancement of AI.

Leading in Advanced AI

NAIRR promises to drive innovations across various sectors, from healthcare to environmental science, positioning the U.S. at the forefront of global AI advancements.

The launch meets a goal outlined in Executive Order 14110, signed by President Biden in October 2023, directing NSF to launch a pilot for the NAIRR within 90 days.

The NAIRR pilot will provide access to advanced computing, datasets, models, software, training and user support to U.S.-based researchers and educators.

“Smaller institutions, rural institutions, institutions serving underrepresented populations are key communities we’re trying to reach with the NAIRR,” said Antypas. “These communities are less likely to have resources to build their own computing or data resources.”

Paving the Way for Future Investments

As the pilot expedites the proof of concept, future investments in the NAIRR will democratize access to AI innovation and support critical work advancing the development of trustworthy AI.

The pilot will initially support AI research to advance safe, secure and trustworthy AI as well as the application of AI to challenges in healthcare and environmental and infrastructure sustainability.

Researchers can apply for initial access to NAIRR pilot resources through the NSF. The NAIRR pilot welcomes additional private-sector and nonprofit partners.

Those interested are encouraged to reach out to NSF at nairr_pilot@nsf.gov.

Read More

High Can See Clearly Now: AI-Powered NVIDIA RTX Video HDR Transforms Standard Video Into Stunning High Dynamic Range

Editor’s note: This post is part of our weekly In the NVIDIA Studio series, which celebrates featured artists, offers creative tips and tricks, and demonstrates how NVIDIA Studio technology improves creative workflows. We’re also deep diving on new GeForce RTX 40 Series GPU features, technologies and resources, and how they dramatically accelerate content creation.

RTX Video HDR — first announced at CES — is now available for download through the January Studio Driver. It uses AI to transform standard dynamic range video playing in internet browsers into stunning high dynamic range (HDR) on HDR10 displays.

PC game modders now have a powerful new set of tools to use with the release of the NVIDIA RTX Remix open beta.

It features full ray tracing, NVIDIA DLSS, NVIDIA Reflex, modern physically based rendering assets and generative AI texture tools so modders can remaster games more efficiently than ever.

Pick up the new GeForce RTX 4070 Ti SUPER, available from custom board partners in stock-clocked and factory-overclocked configurations, to enhance creative, gaming and AI tasks.

Get creative superpowers with the GeForce RTX 4070 Ti SUPER available now.

Part of the 40 SUPER Series announced at CES, it’s equipped with more CUDA cores than the RTX 4070, a frame buffer increased to 16GB, and a 256-bit bus — perfect for video editing and rendering large 3D scenes. It runs up to 1.6x faster than the RTX 3070 Ti and 2.5x faster with DLSS 3 in the most graphics-intensive games.

And this week’s featured In the NVIDIA Studio technical artist Vishal Ranga shares his vivid 3D scene Disowned — powered by NVIDIA RTX and Unreal Engine with DLSS.

RTX Video HDR Delivers Dazzling Detail

Using the power of Tensor Cores on GeForce RTX GPUs, RTX Video HDR allows gamers and creators to maximize their HDR panel’s ability to display vivid, dynamic colors, preserving intricate details that may be inadvertently lost due to video compression.

RTX Video HDR and RTX Video Super Resolution can be used together to produce the clearest livestreamed video anywhere, anytime. These features work on Chromium-based browsers such as Google Chrome or Microsoft Edge.

To enable RTX Video HDR:

  1. Download and install the January Studio Driver.
  2. Ensure Windows HDR features are enabled by navigating to System > Display > HDR.
  3. Open the NVIDIA Control Panel and navigate to Adjust video image settings > RTX Video Enhancement — then enable HDR.

Standard dynamic range video will then automatically convert to HDR, displaying remarkably improved details and sharpness.

RTX Video HDR is among the RTX-powered apps enhancing everyday PC use, productivity, creating and gaming. NVIDIA Broadcast supercharges mics and cams; NVIDIA Canvas turns simple brushstrokes into realistic landscape images; and NVIDIA Omniverse seamlessly connects 3D apps and creative workflows. Explore exclusive Studio tools, including industry-leading NVIDIA Studio Drivers — free for RTX graphics card owners — which support the latest creative app updates, AI-powered features and more.

RTX Video HDR requires an RTX GPU connected to an HDR10-compatible monitor or TV. For additional information, check out the RTX Video FAQ.

Introducing the Remarkable RTX Remix Open Beta

Built on NVIDIA Omniverse, the RTX Remix open beta is available now.

It allows modders to easily capture game assets, automatically enhance materials with generative AI tools, reimagine assets via Omniverse-connected apps and Universal Scene Description (OpenUSD), and quickly create stunning RTX remasters of classic games with full ray tracing and NVIDIA DLSS technology.

RTX Remix has already delivered stunning remasters, such as Portal with RTX and the modder-made Portal: Prelude RTX. Orbifold Studios is now using the technology to develop Half-Life 2 RTX: An RTX Remix Project, a community remaster of one of the highest-rated games of all time. Check out the gameplay trailer, showcasing Orbifold Studios’ latest updates to Ravenholm:

Learn more about the RTX Remix open beta and sign up to gain access.

Leveling Up With RTX

Vishal Ranga has a decade’s worth of experience in the gaming industry, where he specializes in level design.

“I’ve loved playing video games since forever, and that curiosity led me to game design,” he said. “A few years later, I found my sweet spot in technical art.”

His stunning scene Disowned was born out of experimentation with Unreal Engine’s new ray-traced global illumination lighting capabilities.

Remarkably, he skipped the concepting process — the entire project was conceived solely from Ranga’s imagination.

Applying the water shader and mocking up the lighting early helped Ranga set up the mood of the scene. He then updated old assets and searched the Unreal Engine store for new ones — what he couldn’t find, like fishing nets and custom flags, he created from scratch.

Ranga meticulously organizes assets.

“I chose a GeForce RTX GPU to use ray-traced dynamic global illumination with RTX cards for natural, more realistic light bounces.” — Vishal Ranga

Ranga’s GeForce RTX graphics card unlocked RTX-accelerated rendering for high-fidelity, interactive visualization of 3D designs during virtual production.

Next, he tackled shader work, blending moss and muck into models of wood, nets and flags. He also created a volumetric local fog shader to complement the assets as they pass through the fog, adding greater depth to the scene.

Shaders add extraordinary depth and visual detail.

Ranga then polished everything up. He first used a water shader to add realism to reflections, surface moss and subtle waves, then tinkered with global illumination and reflection effects, along with other post-process settings.

Materials come together to deliver realism and higher visual quality.

Ranga used Unreal Engine’s internal high-resolution screenshot feature and sequencer to capture renders. This was achieved by cranking up screen resolution to 200%, resulting in crisper details.

Throughout, DLSS enhanced Ranga’s creative workflow, allowing for smooth scene movement while maintaining immaculate visual quality.

When finished with adjustments, Ranga exported the final scene in no time thanks to his RTX GPU.

Ranga encourages budding artists who are excited by the latest creative advances but wondering where to begin to “practice your skills, prioritize the basics.”

“Take the time to practice and really experience the highs and lows of the creation process,” he said. “And don’t forget to maintain good well-being to maximize your potential.”

3D artist Vishal Ranga.

Check out Ranga’s portfolio on ArtStation.

Follow NVIDIA Studio on Instagram, X and Facebook. Access tutorials on the Studio YouTube channel and get updates directly in your inbox by subscribing to the Studio newsletter. 

Read More

NVIDIA DRIVE Partners Showcase Cutting-Edge Innovations in Automated and Autonomous Driving

The automotive industry is being transformed by the integration of cutting-edge technologies into software-defined cars.

At CES, NVIDIA invited industry leaders to share their perspectives on how technology, especially AI and computing power, is shaping the future of transportation.

Watch the video to learn more from NVIDIA’s auto partners.

Redefining Possibilities Through Partnership

Magnus Ostberg, chief software officer at Mercedes-Benz, underscores how the company’s partnership with NVIDIA helps push technological boundaries. “[NVIDIA] enables us to go further to bring automated driving to the next level and into areas that we couldn’t go before,” he says.

Computing Power: The Driving Force Behind Autonomy

Shawn Kerrigan, chief operating officer and cofounder at Plus, emphasizes the role of computing power, saying, “Autonomous technology requires immense computing power in order to really understand the world around it and make safe driving decisions.”

“What was impossible to do previously because computing wasn’t strong enough is now doable,” says Eran Ofri, CEO of Imagry. “This is an enabler for the progress of the autonomous driving industry.”

“We wanted a platform that has a track record of being deployed in the automotive industry,” adds Stefan Solyom, chief technology officer at Pebble. “This is what NVIDIA can give us.”

And Martin Kristensson, head of product strategy at Volvo Cars, says, “We partner with NVIDIA to get the best compute that we can. More compute in the car means that we can be more aware of the environment around us and reacting earlier and being even safer.”

The Critical Role of AI

Don Burnette, CEO and founder of Kodiak Robotics, states, “NVIDIA makes best-in-class hardware accelerators, and I think it’s going to play a large role in the AI developments for self-driving going forward.”

“Driving as a routine task is tedious,” adds Tony Han, CEO and cofounder of WeRide. “We want to alleviate people from the burden of driving to give back the time. NVIDIA is the backbone of our AI engine.”

And Thomas Ingenlath, CEO of Polestar, says, “Our Polestar 3 sits on the NVIDIA DRIVE platform. This is, of course, very much based on AI technology — and it’s really fascinating and a completely new era for the car.”

Simulation Is Key

Ziv Binyamini, CEO of Foretellix, highlights the role of simulation in development and verification. “Simulation is crucial for the development of autonomous systems,” he says.

Bruce Baumgartner, vice president of supply chain at Zoox, adds, “We have been leveraging NVIDIA’s technology first and foremost on-vehicle to power the Zoox driver. We also leverage NVIDIA technologies in our cloud infrastructure. In particular, we do a lot of work in our simulator.”

Saving Lives With Autonomy

Austin Russell, CEO and founder of Luminar, highlights the opportunity to save lives by using new technology, saying, “The DRIVE platform has been incredibly helpful to be able to actually enable autonomous driving capabilities as well as enhance safety capabilities on vehicles. To be able to have an opportunity to save as many as 100 million lives and 100 trillion hours of people’s time over the next 100 years — everything that we do at the company rolls up to that.”

“Knowing that [this technology] is in vehicles worldwide and saves lives on the road each and every day — the impact that you deliver as you keep people and family safe is amazingly rewarding,” adds Tal Krzypow, vice president of product and strategy at Cipia.

Technology Helps Solve Major Challenges

Shiv Tasker, global industry vice president at Capgemini, reflects on the role of technology in addressing global challenges, saying, “Our modern world is driven by technology, and yet we face tremendous challenges. Technology is the answer. We have to solve the major issues so that we leave a better place for our children and our grandchildren.”

Learn more about the NVIDIA DRIVE platform and how it’s helping industry leaders redefine transportation.

Read More

How Amazon and NVIDIA Help Sellers Create Better Product Listings With AI

It’s hard to imagine an industry more competitive — or fast-paced — than online retail.

Sellers need to create attractive, informative product listings that capture attention and generate trust.

Amazon uses optimized containers on Amazon Elastic Compute Cloud (Amazon EC2) with NVIDIA Tensor Core GPUs to power a generative AI tool that finds this balance at the speed of modern retail.

Amazon’s new generative AI capabilities help sellers seamlessly create compelling titles, bullet points, descriptions, and product attributes.

To get started, Amazon identifies listings where content could be improved and uses generative AI to generate high-quality content automatically. Sellers review the generated content and can either provide feedback or accept the changes to the Amazon catalog.

Previously, creating detailed product listings required significant time and effort for sellers, but this simplified process gives them more time to focus on other tasks.

The NVIDIA TensorRT-LLM software is available today on GitHub and can be accessed through NVIDIA AI Enterprise, which offers enterprise-grade security, support, and reliability for production AI.

TensorRT-LLM open-source software makes AI inference faster and smarter. It works with large language models, such as Amazon’s models for the above capabilities, which are trained on vast amounts of text.

On NVIDIA H100 Tensor Core GPUs, TensorRT-LLM enables up to an 8x speedup on foundation LLMs such as Llama 1 and 2, Falcon, Mistral, MPT, ChatGLM, Starcoder and more.

It also supports multi-GPU and multi-node inference, in-flight batching, paged attention, and the Hopper Transformer Engine with FP8 precision, all of which improve latency and efficiency for the seller experience.
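
For a sense of what serving a listing-generation model looks like in practice, here is a minimal sketch using TensorRT-LLM's high-level Python API (the LLM entry point in recent releases). The model name and prompt are illustrative stand-ins, since Amazon's listing models are internal, and the exact API surface varies by TensorRT-LLM version:

```python
# Minimal sketch of LLM inference with TensorRT-LLM's high-level Python API.
# The model and prompt are illustrative stand-ins (Amazon's listing models
# are internal), and details vary by TensorRT-LLM release.
from tensorrt_llm import LLM, SamplingParams

# Builds (or loads) a TensorRT engine for the model behind the scenes
llm = LLM(model="meta-llama/Llama-2-7b-chat-hf")

prompt = (
    "Write a product title and three bullet points for a wireless mouse "
    "with an ergonomic design and long battery life."
)
params = SamplingParams(max_tokens=128, temperature=0.7)

for output in llm.generate([prompt], params):
    print(output.outputs[0].text)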

By using TensorRT-LLM and NVIDIA GPUs, Amazon improved its generative AI tool’s inference efficiency by 2x, in terms of the cost or number of GPUs needed, and reduced inference latency by 3x compared with an earlier implementation without TensorRT-LLM.

The efficiency gains make it more environmentally friendly, and the 3x latency improvement makes Amazon Catalog’s generative capabilities more responsive.

The generative AI capabilities can save sellers time and provide richer information with less effort. For example, it can enrich a listing for a wireless mouse with an ergonomic design, long battery life, adjustable cursor settings, and compatibility with various devices. It can also generate product attributes such as color, size, weight, and material. These details can help customers make informed decisions and reduce returns.

With generative AI, Amazon’s sellers can quickly and easily create more engaging listings, while being more energy efficient, making it possible to reach more customers and grow their business faster.

Developers can start with TensorRT-LLM today, with enterprise support available through NVIDIA AI Enterprise.

Read More

Buried Treasure: Startup Mines Clean Energy’s Prospects With Digital Twins

Mark Swinnerton aims to fight climate change by transforming abandoned mines into storage tanks of renewable energy.

The CEO of startup Green Gravity is prototyping his ambitious vision in a warehouse 60 miles south of Sydney, Australia, and simulating it in NVIDIA Omniverse, a platform for building 3D workflows and applications.

The concept requires some heavy lifting. Solar and wind energy will pull steel blocks weighing as much as 30 cars each up shafts taller than a New York skyscraper, storing potential energy that can turn turbines whenever needed.
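
The physics is simple enough to sanity-check with the gravitational potential energy formula, E = mgh. The figures below are assumptions layered on the article's numbers: roughly 1,500 kg per car for a "30-car" block, and a usable drop of about 2,130 meters, matching the roughly 7,000-foot mine depth cited later in the piece:

```python
# Back-of-envelope estimate of gravity storage using E = m * g * h.
# Assumptions: ~1,500 kg per car for a "30-car" steel block, and a usable
# drop of ~2,130 m (the ~7,000-foot mine depth mentioned later in the piece).
mass_kg = 30 * 1_500        # assumed block mass in kg
g = 9.81                    # gravitational acceleration, m/s^2
drop_m = 2_130              # assumed usable drop height in meters

energy_joules = mass_kg * g * drop_m
energy_kwh = energy_joules / 3.6e6   # 1 kWh = 3.6 million joules

print(f"~{energy_kwh:,.0f} kWh stored per block per cycle")  # about 260 kWh
```

Multiply that by many blocks per shaft, and many shafts, and the numbers start to matter at grid scale, which is why the million-shaft figure below is central to the pitch.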

A Distributed Energy Network

Swinnerton believes it’s the optimal way to store renewable energy because nearly a million abandoned mine shafts are scattered around the globe, many of them already connected to the grid. And his mechanical system is cheaper and greener than alternatives like massive lithium batteries, which are better suited for electric vehicles.

Mark Swinnerton, CEO of Green Gravity

Officials in Australia, India and the U.S. are interested in the concept, and a state-owned mine operator in Romania is conducting a joint study with Green Gravity.

“We have a tremendous opportunity for repurposing a million mines,” said Swinnerton, who switched gears after a 20-year career at BHP Group, one of the world’s largest mining companies, determined to combat climate change.

A Digital-First Design

A longtime acquaintance saw an opportunity to accelerate Swinnerton’s efforts with a digital twin.

“I was fascinated by the Green Gravity idea and suggested taking a digital-first approach, using data as a differentiator,” said Daniel Keys, an IT expert and executive at xAmplify, a provider of accelerated computing services.

AI-powered simulations could speed the design and deployment of the novel concept, said Keys, who met Swinnerton 25 years earlier at one of their first jobs, flipping burgers at a fast-food stand.

Today, they’ve got a digital prototype cooking on xAmplify’s Scaile computer, based on NVIDIA DGX systems. It’s already accelerating Green Gravity’s proof of concept.

“Thanks to what we inferred with a digital twin, we’ve been able to save 40% of the costs of our physical prototype by shifting from three weights to two and moving them 10 instead of 15 meters vertically,” said Swinnerton.

Use Cases Enabled by Omniverse

It’s the first of many use cases Green Gravity is developing in Omniverse.

Once the prototype is done, the simulation will help scale the design to mines as deep as 7,000 feet, or about six Empire State Buildings stacked on top of each other. Ultimately, the team will build a dashboard in Omniverse to control and monitor sensor-studded facilities without the safety hazards of sending a person into the mine.

Green Gravity’s physical prototype and test lab.

“We expect to cut tens of millions of dollars off the estimated $100 million for the first site because we can use simulations to lower our risks with banks and insurers,” said Swinnerton. “That’s a real tantalizing opportunity.”

Virtual Visualization Tools

Operators will track facilities remotely using visualization systems equipped with NVIDIA A40 GPUs and can stream their visuals to tablets thanks to the TabletAR extension in the Omniverse Spatial Framework.

xAmplify’s workflow uses a number of software components such as NVIDIA Modulus, a framework for physics-informed machine learning models.

“We also use Omniverse as a core integration fabric that lets us connect a half-dozen third-party tools operators and developers need, like Siemens PLM for sensor management and Autodesk for design,” Keys said.

Omniverse eases the job of integrating third-party applications into one 3D workflow because it’s based on the OpenUSD standard.
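
A minimal OpenUSD sketch shows why: any USD-aware tool can open or layer the same file. This uses the open-source pxr Python bindings (available via pip as usd-core); the scene contents are illustrative, not Green Gravity's actual assets:

```python
# Minimal OpenUSD example: author a stage that any USD-aware app can open.
# Scene contents are illustrative; requires the usd-core package.
from pxr import Usd, UsdGeom

stage = Usd.Stage.CreateNew("mine_site.usda")
UsdGeom.Xform.Define(stage, "/World")

shaft = UsdGeom.Cylinder.Define(stage, "/World/Shaft")
shaft.GetHeightAttr().Set(2130.0)  # illustrative shaft depth in meters
shaft.GetRadiusAttr().Set(3.0)     # illustrative shaft radius

stage.GetRootLayer().Save()
```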

Along the way, AI sifts reams of data about the thousands of available mines to select optimal sites, predicting their potential for energy storage. Machine learning will also help optimize designs for each site.

Taken together, it’s a digital pathway Swinnerton believes will lead to commercial operations for Green Gravity within the next couple years.

Green Gravity is the latest customer for xAmplify’s Canberra data center, which serves Australian government agencies, national defense contractors and an expanding set of enterprise users with a full stack of NVIDIA accelerated software.

Learn more about how AI is transforming renewables, including wind farm optimization, solar energy generation and fusion energy.

Read More

Dino-Mite: Capcom’s ‘Exoprimal’ Joins GeForce NOW

Hold on to your seats — this GFN Thursday is unleashing dinosaurs, crowns and more in the cloud.

Catch it all with Capcom’s Exoprimal and Ubisoft’s Prince of Persia: The Lost Crown leading the 10 new games joining the GeForce NOW library this week.

Suit Up, Adapt, Survive

Exoprimal on GeForce NOW
Life finds a way.

Don cutting-edge exosuit technology and battle ferocious dinosaurs on an Earth overrun with waves of prehistoric predators. Capcom’s online team-based action game Exoprimal is now supported in the cloud.

Face velociraptors, T. rex and mutated variants called Neosaurs using the exosuit’s unique weapons and abilities. Join other players in the game’s main mode, Dino Survival, to unlock snippets and special missions from the original story, piecing together the origins of the dinosaur outbreak. Change exosuits on the fly, switching between Assault, Tank and Support roles to suit the situation.

Catch the game in the cloud this week alongside the release of Title Update 3, which brings a new mission and special Monster Hunter collaboration content, a new map, new rigs, plus the start of the third season. Ultimate members can enjoy it all at up to 4K resolution and 120 frames per second, and new players can purchase the game on Steam at 50% off for a limited time.

Return to the Sands of Time

Prince of Persia on GeForce NOW
So stylish.

Defy time and destiny to reclaim the crown and save a cursed world in Prince of Persia: The Lost Crown. It’s the newest adventure in the critically acclaimed action-adventure platformer series, available to stream in the cloud this week at the game’s PC launch.

Step into the shoes of Sargon, a legendary prince with extraordinary acrobatic skills and the power to manipulate time. Travel to Mount Qaf to rescue the kidnapped Prince Ghassan. Wield blades and various time-related powers to fight enemies and solve puzzles in a Persia-inspired world filled with larger-than-life landmarks.

Members can unleash their inner warrior with an Ultimate membership for the highest-quality streaming. Dash into the thrilling game with support for up to 4K resolution at 120 fps on PCs and Macs, streaming from GeForce RTX 4080-powered servers in the cloud.

Time for New Games

Turnip Boy’s back, alright!

In addition, members can look for the following:

  • Those Who Remain (New release on Xbox, available on PC Game Pass, Jan. 16)
  • Prince of Persia: The Lost Crown (New release on Ubisoft and Ubisoft+, Jan. 18)
  • Turnip Boy Robs a Bank (New release on Steam and Xbox, available for PC Game Pass, Jan. 18)
  • New Cycle (New release on Steam, Jan. 18)
  • Beacon Pines (Xbox, available on the Microsoft Store)
  • Exoprimal (Steam)
  • FAR: Changing Tides (Xbox, available on the Microsoft Store)
  • Going Under (Xbox, available on the Microsoft Store)
  • The Legend of Nayuta: Boundless Trails (Steam)
  • Turnip Boy Commits Tax Evasion (Xbox, available on the Microsoft Store)

What are you planning to play this weekend? Let us know on X or in the comments below.

Read More

From Embers to Algorithms: How DigitalPath’s AI is Revolutionizing Wildfire Detection

DigitalPath is igniting change in the Golden State — using computer vision, generative adversarial networks and a network of thousands of cameras to detect signs of fire in real time.

In the latest episode of NVIDIA’s AI Podcast, host Noah Kravitz spoke with DigitalPath System Architect Ethan Higgins about the company’s role in the ALERTCalifornia initiative, a collaboration between California’s wildfire fighting agency CAL FIRE and the University of California, San Diego.

DigitalPath built computer vision models to process images collected from network cameras — anywhere from 8 million to 16 million a day — intelligently identifying signs of fire like smoke.

“One of the things we realized early on, though, is that it’s not necessarily a problem about just detecting a fire in a picture,” Higgins said. “It’s a process of making a manageable amount of data to handle.”

That’s because, he explained, it’s unlikely that humans will be entirely out of the loop in the detection process for the foreseeable future.

The company uses various AI algorithms to classify images by whether they should be reviewed or acted upon — if action is needed, an alert is sent to a CAL FIRE command center.
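
In outline, that triage step might look like the following sketch. The thresholds and the stubbed-out detector are hypothetical stand-ins for DigitalPath's production models, not its actual code:

```python
# Hypothetical sketch of the triage flow described above. The thresholds and
# the stubbed detector are illustrative, not DigitalPath's production code.
import random

ALERT_THRESHOLD = 0.90   # act now: notify a CAL FIRE command center
REVIEW_THRESHOLD = 0.50  # uncertain: queue for human review

def score_smoke(image_path: str) -> float:
    """Stand-in for a computer vision model returning P(smoke) for a frame."""
    return random.random()  # placeholder score

def triage(image_path: str) -> str:
    score = score_smoke(image_path)
    if score >= ALERT_THRESHOLD:
        return "alert"    # send to a command center
    if score >= REVIEW_THRESHOLD:
        return "review"   # a human confirms before action
    return "discard"      # keeps millions of daily frames manageable

print(triage("camera_0421.jpg"))
```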

One of the downsides of using computer vision to detect wildfires is that extinguishing more fires means a greater buildup of natural fuel and the potential for larger wildfires in the long term. DigitalPath and UCSD are exploring the use of high-resolution lidar data to identify where those fuels can be reduced through prescribed burns.

Looking ahead, Higgins foresees the field tapping generative AI to accelerate new simulation tools and using AI models to analyze the output of other models to further improve wildfire prediction and detection.

“AI is not perfect, but when you couple multiple models together, it can get really close,” he said.

You Might Also Like

Driver’s Ed: How Waabi Uses AI Simulation to Teach Autonomous Vehicles to Drive

Teaching the AI brains of autonomous vehicles to understand the world as humans do requires billions of miles of driving experience — the road to achieving this astronomical level of driving leads to the virtual world. Learn how Waabi uses powerful high-fidelity simulations to train and develop production-level autonomous vehicles.

Polestar’s Dennis Nobelius on the Sustainable Performance Brand’s Plans

Driving enjoyment and autonomous driving capabilities can complement one another in intelligent, sustainable vehicles. Learn about the automaker’s plans to unveil its third vehicle, the Polestar 3, the tech inside it, and what the company’s racing heritage brings to the intersection of smarts and sustainability.

GANTheftAuto: Harrison Kinsley on AI-Generated Gaming Environments

Humans playing games against machines is nothing new, but now computers can develop games for people to play. Programming enthusiast and social media influencer Harrison Kinsley created GANTheftAuto, an AI-based neural network that generates a playable chunk of the classic video game Grand Theft Auto V.

Subscribe to the AI Podcast, Now Available on Amazon Music

The AI Podcast is now available through Amazon Music.

In addition, get the AI Podcast through iTunes, Google Podcasts, Google Play, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn.

Make the AI Podcast better: Have a few minutes to spare? Fill out this listener survey.

Read More

Māori Speech AI Model Helps Preserve and Promote New Zealand Indigenous Language

Indigenous languages are under threat. Some 3,000 — three-quarters of the total — could disappear before the end of the century, or one every two weeks, according to UNESCO.

As part of a movement to protect such languages, New Zealand’s Te Hiku Media, a broadcaster focused on the Māori people’s indigenous language known as te reo, is using trustworthy AI to help preserve and revitalize the tongue.

Using ethical, transparent methods of speech data collection and analysis to maintain data sovereignty for the Māori people, Te Hiku Media is developing automatic speech recognition (ASR) models for te reo, which is a Polynesian language.

Built using the open-source NVIDIA NeMo toolkit for ASR and NVIDIA A100 Tensor Core GPUs, the speech-to-text models transcribe te reo with 92% accuracy. They can also transcribe bilingual speech mixing English and te reo with 82% accuracy. They’re pivotal tools, made by and for the Māori people, that are helping preserve and amplify their stories.

“There’s immense value in using NVIDIA’s open-source technologies to build the tools we need to ultimately achieve our mission, which is the preservation, promotion and revitalization of te reo Māori,” said Keoni Mahelona, chief technology officer at Te Hiku Media, who leads a team of data scientists and developers, as well as Māori language experts and data curators, working on the project.

“We’re also helping guide the industry on ethical ways of using data and technologies to ensure they’re used for the empowerment of marginalized communities,” added Mahelona, a Native Hawaiian now living in New Zealand.

Building a ‘House of Speech’

Te Hiku Media began more than three decades ago as a radio station aiming to ensure te reo had space on the airwaves. Over the years, the organization incorporated television broadcasting and, with the rise of the internet, it convened a meeting in 2013 with the community’s elders to form a strategy for sharing content in the digital era.

“The elders agreed that we should make the stories accessible online for our community members — rather than just keeping our archives on cassettes in boxes — but once we had that objective, the challenge was how to do this correctly, in alignment with our strong roots in valuing sovereignty,” Mahelona said.

Instead of uploading its video and audio sources to popular, global platforms — which, in their terms and conditions of use, require signing over certain rights related to the content — Te Hiku Media decided to build its own content distribution platform.

Called Whare Kōrero — meaning “house of speech” — the platform now holds more than 30 years’ worth of digitized, archival material featuring about 1,000 hours of te reo native speakers, some of whom were born in the late 19th century, as well as more recent content from second-language learners and bilingual Māori people.

Now, around 20 Māori radio stations use and upload their content to Whare Kōrero. Community members can access the content through an app.

“It’s an invaluable resource of acoustic data,” Mahelona said.

Turning to Trustworthy AI

Such a trove held incredible value for those working to revitalize the language, the Te Hiku Media team quickly realized, but manual transcription required pulling lots of time and effort from limited resources. So began the organization’s trustworthy AI efforts, in 2016, to accelerate its work using ASR.

“No one would have a clue that there are eight NVIDIA A100 GPUs in our derelict, rundown, musty-smelling building in the far north of New Zealand — training and building Māori language models,” Mahelona said. “But the work has been game-changing for us.”

To collect speech data in a transparent, ethically compliant, community-oriented way, Te Hiku Media began by explaining its cause to elders, garnering their support and asking them to come to the station to read phrases aloud.

“It was really important that we had the support of the elders and that we recorded their voices, because that’s the sort of content we want to transcribe,” Mahelona said. “But eventually these efforts didn’t scale — we needed second-language learners, kids, middle-aged people and a lot more speech data in general.”

So, the organization ran a crowdsourcing campaign, Kōrero Māori, to collect labeled speech samples under the Kaitiakitanga license, which ensures Te Hiku Media uses the data only for the benefit of the Māori people.

In just 10 days, more than 2,500 people signed up to read 200,000+ phrases, providing over 300 hours of labeled speech data, which was used to build and train the te reo Māori ASR models.

In addition to other open-source trustworthy AI tools, Te Hiku Media now uses the NVIDIA NeMo toolkit’s ASR module for speech AI throughout its entire pipeline. The NeMo toolkit comprises building blocks called neural modules and includes pretrained models for language model development.
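
As a rough illustration of the toolkit's ASR workflow, here is a minimal sketch. Te Hiku Media's te reo Māori checkpoints are not publicly distributed, so it loads a public English model as a stand-in, and the audio path is a placeholder:

```python
# Minimal NeMo ASR sketch. Loads a public English checkpoint as a stand-in,
# since Te Hiku Media's te reo Māori models are not publicly distributed.
# Requires: pip install "nemo_toolkit[asr]"
import nemo.collections.asr as nemo_asr

asr_model = nemo_asr.models.EncDecCTCModelBPE.from_pretrained(
    model_name="stt_en_conformer_ctc_small"
)

# Transcribe one or more local 16 kHz mono WAV files (placeholder path)
transcripts = asr_model.transcribe(["recording.wav"])
print(transcripts[0])
```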

“It’s been absolutely amazing — NVIDIA’s open-source NeMo enabled our ASR models to be bilingual and added automatic punctuation to our transcriptions,” Mahelona said.

Te Hiku Media’s ASR models are the engines running behind Kaituhi, a te reo Māori transcription service now available online.

The efforts have spurred similar ASR projects now underway by Native Hawaiians and the Mohawk people in southeastern Canada.

“It’s indigenous-led work in trustworthy AI that’s inspiring other indigenous groups to think: ‘If they can do it, we can do it, too,’” Mahelona said.

Learn more about NVIDIA-powered trustworthy AI, the NVIDIA NeMo toolkit and how it enabled a Telugu language speech AI breakthrough.

Read More

Starstruck: 3D Artist Brellias Brings Curiosity to Light This Week ‘In the NVIDIA Studio’

Editor’s note: This post is part of our weekly In the NVIDIA Studio series, which celebrates featured artists, offers creative tips and tricks, and demonstrates how NVIDIA Studio technology improves creative workflows. We’re also deep diving on new GeForce RTX 40 Series GPU features, technologies and resources, and how they dramatically accelerate content creation.

Curiosity leads the way for this week’s featured In the NVIDIA Studio 3D artist, Brellias.

It’s what inspired the native Chilean’s latest artwork Estrellitas, which in English translates to “little stars.” The scene expresses the mixture of emotions that comes with curiosity, depicting a young girl holding little stars in her hand with a conflicted expression.

“She’s excited to learn about them, but she’s also a little scared,” Brellias explained.

The striking visual piece, rich with vibrant colors and expertly executed textures, underscores that while curiosity can invoke various emotions — both joyful and painful — it is always a source of change and growth.

A Sky Full of Stars

To start, Brellias visualized and reworked an existing 3D scene of a woman in Blender. He used Blender’s built-in Multiresolution modifier for sculpting and added shape keys to achieve the desired modifications.
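
In Blender's Python API, those two steps look roughly like this. It's a sketch, not Brellias' actual setup: it assumes a mesh object is active, and the names are illustrative:

```python
# Sketch of the workflow above via Blender's bpy API: add a Multiresolution
# modifier for sculpt detail, then shape keys to hold the reworked forms.
# Assumes a mesh object is active; run inside Blender.
import bpy

obj = bpy.context.active_object

# Multiresolution modifier, subdivided once for sculptable detail
mod = obj.modifiers.new(name="Multires", type='MULTIRES')
bpy.ops.object.multires_subdivide(modifier=mod.name)

# A basis key plus a key whose vertex offsets store the modification
obj.shape_key_add(name="Basis", from_mix=False)
rework = obj.shape_key_add(name="Rework", from_mix=False)
rework.value = 1.0  # blend the modification fully in
```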

He also created a custom shader for the character’s skin — a stylistic choice to lend its appearance a galactic hue.

Brellias is an especially big fan of purple, blue and maroon hues.

Next, Brellias tapped Blender’s OptiX GPU-accelerated viewport denoising, powered by his GeForce RTX GPU.

“The technology helps reduce noise and improve the quality of the viewport image more quickly, allowing me to make decisions and iterate on the render faster,” he said.

Out-of-this-world levels of detail.

Next, Brellias animated the scene using a base model from Daz Studio, free media design software developed by Daz 3D. Daz features an AI denoiser for high-performance interactive rendering that can also be accelerated by RTX GPUs.

In addition, rig tools in Blender made the animation process easy, eliminating the need to modify file formats.

To animate the character’s face, Brellias tied drivers to shape keys using empties, enabling greater fluidity and control over facial expressions.
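
That setup can be scripted, too. Here is a minimal sketch of one such driver in Blender's Python API, with illustrative object and shape key names: the empty's height drives the key's value:

```python
# Sketch of tying a shape key to an empty with a driver, per the step above.
# Object and shape key names are illustrative; run inside Blender.
import bpy

face = bpy.data.objects["Face"]         # mesh that owns the shape keys
ctrl = bpy.data.objects["CTRL_Smile"]   # controller empty

key = face.data.shape_keys.key_blocks["Smile"]
fcurve = key.driver_add("value")        # attach a driver to the key's value

driver = fcurve.driver
driver.type = 'SCRIPTED'
var = driver.variables.new()
var.name = "z"
var.type = 'TRANSFORMS'
var.targets[0].id = ctrl
var.targets[0].transform_type = 'LOC_Z'
driver.expression = "max(0.0, min(1.0, z))"  # clamp the empty's height to [0, 1]
```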

Geometry nodes bring “Estrellitas” to life.

Brellias then used geometry nodes in Blender to animate the character’s hair, giving it a magical floating effect. To light the scene, Brellias added some ambient light behind the character’s face and between its hands. His RTX GPU accelerated OptiX ray tracing in Blender’s Cycles for the fastest final-frame renders.

Finally, he moved to Blackmagic Design’s DaVinci Resolve to denoise and deflicker the scene for the smoothest-looking animation.

Here, Brellias’ RTX GPU accelerated the color grading, video editing and color scoping processes, dramatically speeding his creative workflow. Other RTX-accelerated AI features, including facial recognition for automatically tagging clips and the tracking of effects, were available for his use.

Estrellitas was partially inspired by Brellias’ own curiosity in exploring NVIDIA and GeForce RTX GPU technologies to power content creation workflows — a venture that provided rewarding results.

“Every step of my creative process involves GPU acceleration or AI in some way or another,” said Brellias. “I can’t imagine creating without a powerful GPU at my disposal.”

His curiosity in AI extends to productivity. He recently installed the NVIDIA Broadcast app, which can transform any room into a home studio.

The app has enhanced Brellias’ microphone performance by canceling external noise and echo — especially useful given his urban surroundings.

Download the Broadcast beta and explore the rest of the Studio suite of apps, including Canvas, which uses AI to turn simple brushstrokes into realistic landscape images, and RTX Remix, which allows modders to create AI-powered RTX remasters of classic games. The apps are all free for RTX GPU owners.

Digital 3D artist Brellias.

Check out Brellias’ portfolio on Instagram.

Follow NVIDIA Studio on Instagram, X and Facebook. Access tutorials on the Studio YouTube channel and get updates directly in your inbox by subscribing to the Studio newsletter. 

Read More

NVIDIA CEO: ‘This Year, Every Industry Will Become a Technology Industry’

“This year, every industry will become a technology industry,” NVIDIA founder and CEO Jensen Huang told attendees Wednesday during the annual J.P. Morgan Healthcare Conference.

“You can now recognize and learn the language of almost anything with structure, and you can translate it to anything with structure — so text-protein, protein-text,” Huang said in a fireside chat with Martin Chavez, partner and vice chairman of global investment firm Sixth Street Partners and board chair of Recursion, a biopharmaceutical company. “This is the generative AI revolution.”

The conversation, which took place at the historic San Francisco Mint, followed a presentation at the J.P. Morgan conference Monday by Kimberly Powell, NVIDIA’s VP of healthcare. In her talk, Powell announced that Recursion is the first hosting partner to offer a foundation model through the NVIDIA BioNeMo cloud service, which is advancing into beta this month.

She also said that Amgen, one of the first companies to employ BioNeMo, plans to advance drug discovery with generative AI and NVIDIA DGX SuperPOD — and that BioNeMo is used by a growing number of techbio companies, pharmas, AI software vendors and systems integrators. Among them are Deloitte, Innophore, Insilico Medicine, OneAngstrom, Recursion and Terray Therapeutics.

From Computer-Aided Chip Design to Drug Design

Healthcare customers and partners now consume well over a billion dollars in NVIDIA GPU computing each year — directly and indirectly through cloud partners.

Huang traced NVIDIA’s involvement in accelerated healthcare back to two research projects that caught his attention around 15 years ago: one at Mass General tapped NVIDIA GPUs to reconstruct CT images; another at the University of Illinois Urbana-Champaign applied GPU acceleration to molecular dynamics.

“It opened my mind that we could apply the same methodology that we use in computer-aided chip design to help the world of drug discovery go from computer-aided drug discovery to computer-aided drug design,” he said, realizing that, “if we scale this up by a billion times, we could simulate biology.”

After 40 years of advancements in computer-aided chip design, engineers can now build complex computing systems entirely in simulation, Huang explained. Over the next decade, the same could be true for AI-accelerated drug design.

“Almost everything will largely start in silico, largely end in silico,” he said, using a term that refers to an experiment run on a computer.

Collaborating on the Future of Drug Discovery and Medical Instruments

With the progress made to date, computer-aided drug discovery is “genuinely miraculous,” Huang said.

NVIDIA is propelling the field forward by building state-of-the-art AI models and powerful computing platforms, and by collaborating with domain experts and investing in techbio companies.

“We are determined to work with you to advance this field,” Huang said, inviting healthcare innovators to reach out to NVIDIA. “We deeply believe that this is going to be the future of the way that drugs will be discovered and designed.”

The company’s pipelines for accelerated healthcare include algorithms for cryo-electron microscopy, X-ray crystallography, gene sequencing, amino acid structure prediction and virtual drug molecule screening. And as AI advances, these computing tools are becoming much easier to access, Huang said.

“Because of artificial intelligence and the groundbreaking work that our industry has done, we have closed the technology divide in a dramatic way,” he said. “Everybody is a programmer, and the programming language of the future is called ‘human.’”

Beyond drug development, this transformation to a software-defined, AI-driven industry will also advance medical instruments.

“A medical instrument is never going to be the same again. Ultrasound systems, CT scan systems, all kinds of instruments — they’re always going to be a device plus a whole bunch of AIs,” Huang said. “The value that will create, the opportunities you create, are going to be incredible.”

For more from NVIDIA at the J.P. Morgan Healthcare Conference, listen to the audio recording and view the presentation deck of Powell’s session.

Learn about NVIDIA’s AI platform for healthcare and life sciences and subscribe to NVIDIA healthcare news.

Read More