NVIDIA and Microsoft Drive Innovation for Windows PCs in New Era of Generative AI

Generative AI — in the form of large language model (LLM) applications like ChatGPT, image generators such as Stable Diffusion and Adobe Firefly, and game rendering techniques like NVIDIA DLSS 3 Frame Generation — is rapidly ushering in a new era of computing for productivity, content creation, gaming and more.

At the Microsoft Build developer conference, NVIDIA and Microsoft today showcased a suite of advancements in Windows 11 PCs and workstations with NVIDIA RTX GPUs to meet the demands of generative AI.

More than 400 Windows apps and games already employ AI technology, accelerated by dedicated processors on RTX GPUs called Tensor Cores. Today’s announcements, which include tools to develop AI on Windows PCs, frameworks to optimize and deploy AI, and driver performance and efficiency improvements, will empower developers to build the next generation of Windows apps with generative AI at their core.

“AI will be the single largest driver of innovation for Windows customers in the coming years,” said Pavan Davuluri, corporate vice president of Windows silicon and system integration at Microsoft. “By working in concert with NVIDIA on hardware and software optimizations, we’re equipping developers with a transformative, high-performance, easy-to-deploy experience.”

Develop Models With Windows Subsystem for Linux

AI development has traditionally taken place on Linux, requiring developers to either dual-boot their systems or use multiple PCs to work in their AI development OS while still accessing the breadth and depth of the Windows ecosystem.

Over the past few years, Microsoft has been building a powerful capability to run Linux directly within the Windows OS, called Windows Subsystem for Linux (WSL). NVIDIA has been working closely with Microsoft to deliver GPU acceleration and support for the entire NVIDIA AI software stack inside WSL. Now developers can use Windows PC for all their local AI development needs with support for GPU-accelerated deep learning frameworks on WSL.
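
A quick sanity check for such a setup is confirming the GPU is visible from inside the WSL distribution. Here's a minimal sketch using PyTorch, assuming the NVIDIA Windows driver and a CUDA-enabled PyTorch build are already installed in WSL:

```python
# Minimal check that GPU acceleration is visible from inside WSL.
# Assumes the NVIDIA Windows driver and a CUDA-enabled PyTorch build
# are already installed in the WSL distribution.
import torch

if torch.cuda.is_available():
    print(f"CUDA device: {torch.cuda.get_device_name(0)}")
    print(f"Device count: {torch.cuda.device_count()}")
else:
    print("No CUDA device visible -- check the Windows driver and WSL version.")
```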

With NVIDIA RTX GPUs delivering up to 48GB of RAM in desktop workstations, developers can now work with models on Windows that were previously only available on servers. The large memory also improves the performance and quality for local fine-tuning of AI models, enabling designers to customize them to their own style or content. And because the same NVIDIA AI software stack runs on NVIDIA data center GPUs, it’s easy for developers to push their models to Microsoft Azure Cloud for large training runs.

Rapidly Optimize and Deploy Models

With trained models in hand, developers need to optimize and deploy AI for target devices.

Microsoft released the Microsoft Olive toolchain for optimization and conversion of PyTorch models to ONNX, enabling developers to automatically tap into GPU hardware acceleration such as RTX Tensor Cores. Developers can optimize models via Olive and ONNX, and deploy Tensor Core-accelerated models to PC or cloud. Microsoft continues to invest in making PyTorch and related tools and frameworks work seamlessly with WSL to provide the best AI model development experience.
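
To give a feel for the workflow, here's a hand-rolled sketch of the pattern Olive automates: export a PyTorch model to ONNX, then run it through ONNX Runtime on a GPU execution provider. The ResNet-18 stand-in and file names are illustrative, and the DirectML provider requires the onnxruntime-directml package:

```python
# Sketch of the export-and-deploy pattern Olive automates: PyTorch -> ONNX,
# then GPU inference through ONNX Runtime. The model is a stand-in.
import torch
import torchvision
import onnxruntime as ort

model = torchvision.models.resnet18(weights=None).eval()
dummy = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, dummy, "model.onnx",
                  input_names=["input"], output_names=["output"])

# "DmlExecutionProvider" targets DirectML on any DirectX 12 GPU;
# "CUDAExecutionProvider" is the CUDA path on NVIDIA hardware.
session = ort.InferenceSession("model.onnx", providers=["DmlExecutionProvider"])
outputs = session.run(None, {"input": dummy.numpy()})
print(outputs[0].shape)  # (1, 1000)
```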

Improved AI Performance, Power Efficiency

Once deployed, generative AI models demand incredible inference performance. RTX Tensor Cores deliver up to 1,400 Tensor TFLOPS for AI inferencing. Over the last year, NVIDIA has worked to improve DirectML performance to take full advantage of RTX hardware.

On May 24, we’ll release our latest optimizations in Release 532.03 drivers that combine with Olive-optimized models to deliver big boosts in AI performance. Using an Olive-optimized version of the Stable Diffusion text-to-image generator with the popular Automatic1111 distribution, performance is improved over 2x with the new driver.

Chart: Stable Diffusion performance tested on GeForce RTX 4090 using Automatic1111 and the text-to-image function.

With AI coming to nearly every Windows application, efficiently delivering inference performance is critical — especially for laptops. Coming soon, NVIDIA will introduce new Max-Q low-power inferencing for AI-only workloads on RTX GPUs. It optimizes Tensor Core performance while keeping power consumption of the GPU as low as possible, extending battery life and maintaining a cool, quiet system. The GPU can then dynamically scale up for maximum AI performance when the workload demands it.

Join the PC AI Revolution Now

Top software developers — like Adobe, DxO, ON1 and Topaz — have already incorporated NVIDIA AI technology with more than 400 Windows applications and games optimized for RTX Tensor Cores.

“AI, machine learning and deep learning power all Adobe applications and drive the future of creativity. Working with NVIDIA we continuously optimize AI model performance to deliver the best possible experience for our Windows users on RTX GPUs.” — Ely Greenfield, CTO of digital media at Adobe

“NVIDIA is helping to optimize our WinML model performance on RTX GPUs, which is accelerating the AI in DxO DeepPRIME, as well as providing better denoising and demosaicing, faster.” — Renaud Capolunghi, senior vice president of engineering at DxO

“Working with NVIDIA and Microsoft to accelerate our AI models running in Windows on RTX GPUs is providing a huge benefit to our audience. We’re already seeing 1.5x performance gains in our suite of AI-powered photography editing software.” — Dan Harlacher, vice president of products at ON1

“Our extensive work with NVIDIA has led to improvements across our suite of photo- and video-editing applications. With RTX GPUs, AI performance has improved drastically, enhancing the experience for users on Windows PCs.” — Suraj Raghuraman, head of AI engine development at Topaz Labs

NVIDIA and Microsoft are making several resources available for developers to test drive top generative AI models on Windows PCs. An Olive-optimized version of the Dolly 2.0 large language model is available on Hugging Face. And a PC-optimized version of NVIDIA NeMo large language model for conversational AI is coming soon to Hugging Face.

Developers can also learn how to optimize their applications end-to-end to take full advantage of GPU-acceleration via the NVIDIA AI for accelerating applications developer site.

The complementary technologies behind Microsoft’s Windows platform and NVIDIA’s dynamic AI hardware and software stack will help developers quickly and easily develop and deploy generative AI on Windows 11.

Microsoft Build runs through Thursday, May 25. Tune in to learn more about shaping the future of work with AI.

No Programmers? No Problem: READY Robotics Simplifies Robot Coding, Rollouts

Robotics hardware traditionally requires programmers to deploy it. READY Robotics wants to change that with its “no code” software aimed at people working in manufacturing who don’t have programming skills.

The Columbus, Ohio, startup is a spinout of robotics research from Johns Hopkins University. Kel Guerin was a PhD candidate there leading this research when he partnered with Benjamin Gibbs, who was at Johns Hopkins Technology Ventures, to secure funding and launch the company, now led by Gibbs as CEO.

“There was this a-ha moment where we figured out that we could take these types of visual languages that are very easy to understand and use them for robotics,” said Guerin, who’s now chief innovation officer at the startup.

READY’s “no code” ForgeOS operating system is designed to enable anyone to program any type of robot hardware or automation device. ForgeOS works seamlessly with plug-ins for most major robot hardware and, like other operating systems such as Android, it runs third-party apps and plug-ins, providing a robust ecosystem of partners and developers working to make robots more capable, said Guerin.

Implementing apps in robotics lets new capabilities be added to a robotic system in a few clicks, improving user experience and usability. Users can install their own apps, such as Task Canvas, which provides an intuitive building-block programming interface similar to Scratch, the simple block-based visual language for kids developed at MIT Media Lab that influenced its design.

Task Canvas allows users to show the actions of the robot, as well as all the other devices in an automation cell (such as grippers, programmable logic controllers, and machine tools) as blocks in a flow chart. The user can easily create powerful logic by tying these blocks together — without writing a single line of code. The interface offers nonprogrammers a more “drag-and-drop” experience for programming and deploying robots, whether working directly on the factory floor with real robots on a tablet device or with access to simulation from Isaac Sim, powered by NVIDIA Omniverse.
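
To make that concrete, here's a hypothetical sketch, not READY's actual API, of how such a flow chart might be represented in code: each block names a device, an action and parameters, and blocks chain together like flow-chart arrows:

```python
# Hypothetical sketch of a block-based task program. These class, device
# and action names are illustrative only -- they are not READY's actual API.
from dataclasses import dataclass, field

@dataclass
class Block:
    device: str                 # e.g. "robot_arm", "gripper", "cnc_mill"
    action: str                 # e.g. "move_to", "close", "start_cycle"
    params: dict = field(default_factory=dict)

@dataclass
class TaskFlow:
    blocks: list = field(default_factory=list)

    def then(self, block: Block) -> "TaskFlow":
        """Chain blocks the way flow-chart arrows connect them."""
        self.blocks.append(block)
        return self

flow = (
    TaskFlow()
    .then(Block("robot_arm", "move_to", {"pose": "pickup"}))
    .then(Block("gripper", "close"))
    .then(Block("cnc_mill", "start_cycle"))
)
for b in flow.blocks:
    print(f"{b.device}: {b.action} {b.params}")
```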

Robot System Design in Simulation for Real-World Deployments 

READY is making robotics system design easier for nonprogrammers, helping to validate robots and systems for accelerated deployments.

The company is developing Omniverse Extensions — Omniverse Kit applications based on Isaac Sim — and can deploy them in the cloud. It also uses Omniverse Nucleus — the platform’s database and collaboration engine — in the cloud.

Isaac Sim is an application framework that enables simulation training for testing out robots in virtual manufacturing lines before deployment into the real world.
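
As a rough sketch of what scripting such a simulation looks like, Isaac Sim exposes a Python entry point for headless sessions. The stage path below is a placeholder, and the script assumes it runs with Isaac Sim's bundled Python environment:

```python
# Sketch of a headless Isaac Sim session (run with Isaac Sim's bundled
# Python environment). The Nucleus stage path is a placeholder.
from omni.isaac.kit import SimulationApp

# SimulationApp must be created before any other omni.* imports.
sim_app = SimulationApp({"headless": True})

import omni.usd

ctx = omni.usd.get_context()
ctx.open_stage("omniverse://localhost/Projects/factory_cell.usd")  # placeholder

for _ in range(600):   # step roughly 10 seconds of simulation at 60 Hz
    sim_app.update()

sim_app.close()
```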

“Bigger companies are moving to a sim-first approach to automation because these systems cost a lot of money to install. They want to simulate them first to make sure it’s worth the investment,” said Guerin.

The startup charges users of its platform licensing per software seat and also offers support services to help roll out and develop systems.

It’s a huge opportunity. Roughly 90 percent of the world’s factories haven’t yet embraced automation, a trillion-dollar market.

READY is a member of NVIDIA Inception, a free program that provides startups with technical training, go-to-market support and AI platform guidance.

From Industrial Automation Giants to Stanley Black & Decker

The startup operates in an ecosystem of world-leading industrial automation providers, and these global partners are actively developing integrations with platforms like NVIDIA Omniverse and are investing in READY, said Guerin.

“Right now we are starting to work with large enterprise customers who want to automate but they can’t find the expertise to do it,” he said.

Stanley Black & Decker, a global supplier of tools, is relying on READY to automate machines, including CNC lathes and mills.

Robotic automation had been hard to deploy in its factories until Stanley Black & Decker started using READY’s ForgeOS with its Station setup, which makes it possible to deploy robots in a day.

Creating Drag-and-Drop Robotic Systems in Simulation 

READY is putting simulation capabilities into the hands of nonprogrammers, who can learn its Task Canvas interface for drag-and-drop programming of industrial robots in about an hour, according to the company.

The company also runs READY Academy, which offers a catalog of free training for manufacturing professionals to learn the skills to design, deploy, manage and troubleshoot robotic automation systems.

“For potential customers interested in our technology, being able to try it out with a robot simulated in Omniverse before they get their hands on the real thing — that’s something we’re really excited about,” said Guerin.

Learn more about NVIDIA Isaac Sim, Jetson Orin and Omniverse Enterprise.

Privateer Space: The Final Frontier in AI Space Junk Management

It’s time to take out the space trash.

In this episode of the NVIDIA AI Podcast, host Noah Kravitz dives into an illuminating conversation with Alex Fielding, co-founder and CEO of Privateer Space.

Fielding is a tech industry veteran who previously worked alongside Apple co-founder Steve Wozniak on several projects and has deep expertise in engineering, robotics, machine learning and AI.

Privateer Space, Fielding’s latest venture, aims to address one of the most daunting challenges facing our world today: space debris.

The company is creating a data infrastructure to monitor and clean up space debris, ensuring sustainable growth for the budding space economy. In essence, they’re the sanitation engineers of the cosmos.

Privateer Space is also a part of NVIDIA Inception, a free program that offers go-to-market support, expertise and technology for AI startups.

During the podcast, Fielding shares the genesis of Privateer Space, his journey from Apple to the space industry, and his subsequent work on communication between satellites at different altitudes.

He also addresses the severity of space debris, explaining how every launch adds more debris, including minute yet potentially dangerous fragments like frozen propellant and paint chips.

Tune in to the podcast for more on what the future holds for the intersection of AI and space.

You Might Also Like

Jules Anh Tuan Nguyen Explains How AI Lets Amputee Control Prosthetic Hand, Video Games

A postdoctoral researcher at the University of Minnesota discusses his efforts to allow amputees to control their prosthetic limb — right down to the finger motions — with their minds.

Overjet’s Wardah Inam on Bringing AI to Dentistry

Overjet, a member of NVIDIA Inception, is moving fast to bring AI to dentists’ offices. Dr. Wardah Inam, CEO of the company, discusses using AI to improve patient care.

Immunai CTO and Co-Founder Luis Voloch on Using Deep Learning to Develop New Drugs

Luis Voloch, co-founder and chief technology officer of Immunai, talks about tackling the challenges of the immune system with a machine learning and data science mindset.

Subscribe to the AI Podcast: Now Available on Amazon Music

The AI Podcast is now available through Amazon Music.

In addition, get the AI Podcast through iTunes, Google Podcasts, Google Play, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn.

Make the AI Podcast better. Have a few minutes to spare? Fill out this listener survey.

What’s Up? Watts Down — More Science, Less Energy

People agree: accelerated computing is energy-efficient computing.

The National Energy Research Scientific Computing Center (NERSC), the U.S. Department of Energy’s lead facility for open science, measured results across four of its key high performance computing and AI applications.

They clocked how fast the applications ran and how much energy they consumed on CPU-only and GPU-accelerated nodes on Perlmutter, one of the world’s largest supercomputers using NVIDIA GPUs.

The results were clear. Accelerated with NVIDIA A100 Tensor Core GPUs, energy efficiency rose 5x on average. An application for weather forecasting logged gains of 9.8x.

GPUs Save Megawatts

On a server with four A100 GPUs, NERSC got up to 12x speedups over a dual-socket x86 server.

That means, at the same performance level, the GPU-accelerated system would consume 588 megawatt-hours less energy per month than a CPU-only system. Running the same workload on a four-way NVIDIA A100 cloud instance for a month, researchers could save more than $4 million compared to a CPU-only instance.
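
The underlying arithmetic is simple: energy is power multiplied by time, so a system that finishes the same work 12x sooner can come out well ahead even if it draws more power while running. Here's a back-of-envelope sketch with illustrative power figures, assumptions rather than NERSC's measurements:

```python
# Back-of-envelope energy comparison for a fixed amount of work.
# The power draws below are illustrative assumptions, not NERSC's data.
cpu_node_kw = 1.0        # assumed average draw of a dual-socket CPU node
gpu_node_kw = 2.5        # assumed average draw of a four-way A100 node
speedup = 12             # the GPU node finishes the same work 12x sooner

cpu_hours = 720          # one month of around-the-clock work on the CPU node
gpu_hours = cpu_hours / speedup

cpu_energy_kwh = cpu_node_kw * cpu_hours   # energy = power x time
gpu_energy_kwh = gpu_node_kw * gpu_hours
print(f"CPU-only: {cpu_energy_kwh:.0f} kWh, GPU-accelerated: {gpu_energy_kwh:.0f} kWh")
print(f"~{cpu_energy_kwh / gpu_energy_kwh:.1f}x less energy for the same work")
```

With these made-up numbers, the GPU system uses about 4.8x less energy for the same work, in the same ballpark as the 5x average efficiency gain NERSC measured; scaled across a data center's worth of nodes, the same arithmetic yields megawatt-hour savings.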

Measuring Real-World Applications

The results are significant because they’re based on measurements of real-world applications, not synthetic benchmarks.

The gains mean that the 8,000+ scientists using Perlmutter can tackle bigger challenges, opening the door to more breakthroughs.

Among the many use cases for the more than 7,100 A100 GPUs on Perlmutter, scientists are probing subatomic interactions to find new green energy sources.

Advancing Science at Every Scale

The applications NERSC tested span molecular dynamics, material science and weather forecasting.

For example, MILC simulates the fundamental forces that hold particles together in an atom. It’s used to advance quantum computing, study dark matter and search for the origins of the universe.

BerkeleyGW helps simulate and predict optical properties of materials and nanostructures, a key step toward developing more efficient batteries and electronic devices.

NERSC apps get efficiency gains with accelerated computing.

EXAALT, which got an 8.5x efficiency gain on A100 GPUs, solves a fundamental challenge in molecular dynamics. It lets researchers simulate the equivalent of short videos of atomic movements rather than the sequences of snapshots other tools provide.

The fourth application in the tests, DeepCAM, is used to detect hurricanes and atmospheric rivers in climate data. It got a 9.8x gain in energy efficiency when accelerated with A100 GPUs.

The overall 5x speedup is based on a mix of HPC and AI applications.

Savings With Accelerated Computing

The NERSC results echo earlier calculations of the potential savings with accelerated computing. For example, in a separate analysis NVIDIA conducted, GPUs delivered 42x better energy efficiency on AI inference than CPUs.

That means switching all the CPU-only servers running AI worldwide to GPU-accelerated systems could save a whopping 10 trillion watt-hours of energy a year. That’s like saving the energy 1.4 million homes consume in a year.

Accelerating the Enterprise

You don’t have to be a scientist to get gains in energy efficiency with accelerated computing.

Pharmaceutical companies are using GPU-accelerated simulation and AI to speed the process of drug discovery. Carmakers like BMW Group are using it to model entire factories.

They’re among the growing ranks of enterprises at the forefront of what NVIDIA founder and CEO Jensen Huang calls an industrial HPC revolution, fueled by accelerated computing and AI.

NVIDIA Cambridge-1 AI Supercomputer Expands Reach to Researchers via the Cloud

Scientific researchers need massive computational resources that can support exploration wherever it happens. Whether they’re conducting groundbreaking pharmaceutical research, exploring alternative energy sources or discovering new ways to prevent financial fraud, accessible state-of-the-art AI computing resources are key to driving innovation. This new model of computing can solve the challenges of generative AI and power the next wave of innovation.

Cambridge-1, a supercomputer NVIDIA launched in the U.K. during the pandemic, has powered discoveries from some of the country’s top healthcare researchers. The system is now becoming part of NVIDIA DGX Cloud to accelerate the pace of scientific innovation and discovery — across almost every industry.

As a cloud-based resource, it will broaden access to AI supercomputing for researchers in climate science, autonomous machines, worker safety and other areas, delivered with the simplicity and speed of the cloud and ideally located for U.K. and European access.

DGX Cloud is a multinode AI training service that makes it possible for any enterprise to access leading-edge supercomputing resources from a browser. The original Cambridge-1 infrastructure included 80 NVIDIA DGX systems; now it will join DGX Cloud, giving customers access to world-class infrastructure.

History of Healthcare Insights

Academia, startups and the U.K.’s large pharma ecosystem used the Cambridge-1 supercomputing resource to accelerate research and design new approaches to drug discovery, genomics and medical imaging with generative AI in some of the following ways:

  • InstaDeep, in collaboration with NVIDIA and the Technical University of Munich Lab, developed a 2.5 billion-parameter LLM for genomics on Cambridge-1. This project aimed to create a more accurate model for predicting the properties of DNA sequences.
  • King’s College London used Cambridge-1 to create 100,000 synthetic brain images — and made them available for free to healthcare researchers. Using the open-source AI imaging platform MONAI, the researchers at King’s created realistic, high-resolution 3D images of human brains, training in weeks versus months.
  • Oxford Nanopore used Cambridge-1 to quickly develop highly accurate, efficient models for base calling in DNA sequencing. The company also used the supercomputer to support inference for the ORG.one project, which aims to enable DNA sequencing of critically endangered species.
  • Peptone, in collaboration with a pharma partner, used Cambridge-1 to run physics-based simulations to evaluate the effect of mutations on protein dynamics with the goal of better understanding why specific antibodies work efficiently. This research could improve antibody development and biologics discovery.
  • Relation Therapeutics developed a large language model that reads DNA to better understand genes, a key step toward creating new medicines. Their research takes us a step closer to understanding how genes are controlled in certain diseases.

Beyond Fast: GeForce RTX 4060 GPU Family Gives Creators More Options to Accelerate Workflows, Starting at $299

Editor’s note: This post is part of our weekly In the NVIDIA Studio series, which celebrates featured artists, offers creative tips and tricks, and demonstrates how NVIDIA Studio technology improves creative workflows. We’re also deep diving on new GeForce RTX 40 Series GPU features, technologies and resources, and how they dramatically accelerate content creation.

The GeForce RTX 4060 family will be available starting next week, bringing massive creator benefits to the popular 60-class GPUs.

The latest GPUs in the 40 Series come backed by NVIDIA Studio technologies, including hardware acceleration for 3D, video and AI workflows; optimizations for RTX hardware in over 110 of the most popular creative apps; and exclusive Studio apps like Omniverse, Broadcast and Canvas.

Real-time ray-tracing renderer D5 Render introduced support for NVIDIA DLSS 3 technology, enabling super smooth real-time rendering experiences, so creators can work with larger scenes without sacrificing speed or interactivity.

Plus, the new Into the Omniverse series highlights the latest advancements to NVIDIA Omniverse, a platform furthering the evolution of the metaverse with the OpenUSD framework. The series showcases how artists, developers and enterprises can use the open development platform to transform their 3D workflows. The first installment highlights an update coming soon to the Adobe Substance 3D Painter Connector.

In addition, NVIDIA 3D artist Daniel Barnes returns this week In the NVIDIA Studio to share his mesmerizing, whimsical animation, Wormhole 00527.

Beyond Fast

The GeForce RTX 4060 family is powered by the ultra-efficient NVIDIA Ada Lovelace architecture with fourth-generation Tensor Cores for AI content creation, third-generation RT Cores and compatibility with DLSS 3 for ultra-fast 3D rendering, as well as the eighth-generation NVIDIA encoder (NVENC), now with support for AV1.

The GeForce RTX 4060 Ti GPU.

3D modelers can build and edit realistic 3D models in real time, up to 45% faster than the previous generation, thanks to third-generation RT Cores, DLSS 3 and the NVIDIA Omniverse platform.

Tested on GeForce RTX 4060 and 3060 GPUs. Maya with Arnold 2022 (7.1.1) measures render time of NVIDIA SOL 3D model. DaVinci Resolve measures FPS applying Magic Mask effect “Faster” quality setting to 4K resolution. ON1 Resize AI measures time required to apply effect to batch of 10 photos. Time measurement is normalized for easier comparison across tests.

Video editors specializing in Adobe Premiere Pro, Blackmagic Design’s DaVinci Resolve and more have at their disposal a variety of AI-powered effects, such as auto-reframe, magic mask and depth estimation. Fourth-generation Tensor Cores seamlessly hyper-accelerate these effects, so creators can stay in their flow states.

Broadcasters can jump into next-generation livestreaming with the eighth-generation NVENC with support for AV1. The new encoder is 40% more efficient, making livestreams appear as if there were a 40% increase in bitrate — a big boost in image quality that enables 4K streaming on apps like OBS Studio and platforms such as YouTube and Discord.

10 Mbps with default OBS streaming settings.

NVENC boasts the most efficient hardware encoding available, providing significantly better quality than other GPUs. At the same bitrate, images will look better and sharper and have fewer artifacts, as in the example above.

Encode quality comparison, measured with BD-BR.
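
For creators scripting their own encodes, recent FFmpeg builds expose the new encoder as av1_nvenc. Here's a minimal sketch driving it from Python; the file names are placeholders:

```python
# Sketch: encode a clip with the AV1 hardware encoder that recent FFmpeg
# builds expose as av1_nvenc on GeForce RTX 40 Series GPUs.
# File names are placeholders.
import subprocess

subprocess.run([
    "ffmpeg", "-y",
    "-i", "input.mp4",
    "-c:v", "av1_nvenc",   # eighth-generation NVENC, AV1 path
    "-b:v", "10M",         # 10 Mbps, matching the streaming example above
    "-c:a", "copy",
    "output_av1.mp4",
], check=True)
```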

Creators are embracing AI en masse. DLSS 3 multiplies frame rates in popular 3D apps. ON1 Resize AI, software that enables high-quality photo enlargement, is sped up 24% compared with last-generation hardware. DaVinci Resolve’s AI Magic Mask feature saves video editors considerable time by automating the highly manual process of rotoscoping, which is carried out 20% faster than on the previous generation.

The GeForce RTX 4060 Ti (8GB) will be available starting Wednesday, May 24, at $399. The GeForce RTX 4060 Ti (16GB) will be available in July, starting at $499. GeForce RTX 4060 will also be available in July, starting at $299.

Visit the Studio Shop for GeForce RTX 4060-powered NVIDIA Studio systems when available, and explore the range of high-performance Studio products.

D5 Render, DLSS 3 Combine to Beautiful Effect

D5 Render adds support for NVIDIA DLSS 3, bringing a vastly improved real-time experience to architects, designers, interior designers and 3D artists.

Such professionals want to navigate scenes smoothly while editing, and demonstrate their creations to clients in the highest quality. Scenes can be incredibly detailed and complex, making it difficult to maintain high real-time viewport frame rates and present in original quality.

D5 is prized by many artists for its global illumination technology, called D5 GI, which delivers high-quality lighting and shading effects in real time, without sacrificing workflow efficiency.

D5 Render and DLSS 3 work brilliantly to create photorealistic imagery.

By integrating DLSS 3, which combines AI-powered DLSS Frame Generation and Super Resolution technologies, real-time viewport frame rates increase up to 3x, making creator experiences buttery smooth. This allows designers to deal with larger scenes, higher-quality models and textures — all in real time — while maintaining a smooth, interactive viewport.

Learn more about the update.

Venture ‘Into the Omniverse’

NVIDIA Omniverse is a key component of the NVIDIA Studio platform and the future of collaborative 3D content creation.

A new monthly blog series, Into the Omniverse, showcases how artists, developers and enterprises can transform their creative workflows using the latest Omniverse advancements.

This month, 3D creators across industries are set to benefit from the pairing of Omniverse and the Adobe Substance 3D suite of creative tools.

“End of Summer,” created by the Adobe Substance 3D art and development team, built in Omniverse.

An upcoming update to the Omniverse Connector for Adobe Substance 3D Painter will dramatically increase flexibility for users, with new capabilities including an export feature using Universal Scene Description (OpenUSD), an open, extensible file framework enabling non-destructive workflows and collaboration in scene creation.

Find details in the blog and check in every month for more Omniverse news.

Your Last Worm-ing

NVIDIA 3D artist Daniel Barnes has a simple initial approach to his work: sketch until something seems cool enough to act on. While his piece Wormhole 00527 was no exception to this usual process, an emotional component made a significant impact on it.

“After the pandemic and various global events, I took even more interest in spaceships and escape pods,” said Barnes. “It was just an abstract form of escapism that really played on the idea of ‘get me out of here,’ which I think we all experienced at one point, being inside so much.”

Barnes imagined each blur one might pass by in Wormhole 00527 as an alternate star system: a place on the other side of the galaxy where things are really similar but more peaceful, he said. “An alternate Earth of sorts,” the artist added.

Sculpting on his tablet one night in the Nomad app, Barnes imported a primitive model into Autodesk Maya for further refinement. He retopologized the scene, converting high-resolution models into much smaller files that can be used for animation.

Modeling in Autodesk Maya.

“I’ve been creating in 3D for over a decade now, and GeForce RTX graphics cards have been able to power multiple displays smoothly and run my 3D software viewports at great speeds. Plus, rendering in real time on some projects is great for fast development.” — Daniel Barnes

Barnes then took a screenshot, further sketched out his modeling edits and made lighting decisions in Adobe Photoshop.

His GeForce RTX 4090 GPU gives him access to over 30 GPU-accelerated features for quickly, smoothly modifying and adjusting images. These features include blur gallery, object selection and perspective warp.

Back in Autodesk Maya, Barnes used the quad-draw tool — a streamlined, one-tool workflow for retopologizing meshes — to create geometry, adding break-in panels that would be advantageous for animating.

So this is what a wormhole looks like.

Barnes used Chaos V-Ray with Autodesk Maya’s Z-depth feature, which provides information about each object’s distance from the camera in its current view. Each pixel representing the object is evaluated for distance individually — meaning different pixels for the same object can have varying grayscale values. This made it far easier for Barnes to tweak depth of field and add motion-blur effects.

Example of Z-depth. Image courtesy of Chaos V-Ray with Autodesk Maya.
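
The sketch below illustrates the idea with synthetic data: per-pixel camera distances are normalized into grayscale values, so a single object spans a range of grays depending on how far each pixel sits from the camera. It's a simplified stand-in for what a renderer's Z-depth pass produces, not V-Ray's implementation:

```python
# Simplified illustration of a Z-depth pass: per-pixel camera distance
# normalized to grayscale. Synthetic values, not V-Ray's implementation.
import numpy as np

depth = np.array([
    [2.0, 2.5, 9.0],
    [3.0, 6.0, 9.5],
    [3.5, 7.0, 10.0],
])  # distance from camera per pixel, in scene units

near, far = depth.min(), depth.max()
gray = (depth - near) / (far - near)   # 0.0 = nearest, 1.0 = farthest
print((gray * 255).astype(np.uint8))   # same object, varying gray values
```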

He also added a combination of lights and applied materials with ease. Deploying RTX-accelerated ray tracing and AI denoising with the default Autodesk Arnold renderer enabled smooth movement in the viewport, resulting in beautifully photorealistic renders.

The Z-depth feature made it easier to apply motion-blur effects.

He finished the project by compositing in Adobe After Effects, using GPU-accelerated features for faster rendering with NVIDIA CUDA technology.

3D artist Daniel Barnes.

When asked what his favorite creative tools are, Barnes didn’t hesitate. “Definitely my RTX cards and nice large displays!” he said.

Check out Barnes’ portfolio on Instagram.

Follow NVIDIA Studio on Instagram, Twitter and Facebook. Access tutorials on the Studio YouTube channel and get updates directly in your inbox by subscribing to the Studio newsletter. 

Get started with NVIDIA Omniverse by downloading the standard license free, or learn how Omniverse Enterprise can connect your team. Developers can get started with Omniverse resources. Stay up to date on the platform by subscribing to the newsletter, and follow NVIDIA Omniverse on Instagram, Medium and Twitter.

For more, join the Omniverse community and check out the Omniverse forums, Discord server, Twitch and YouTube channels.

First Xbox Title Joins GeForce NOW

Get ready for action — the first Xbox game title is now streaming from GeForce GPUs in the cloud directly to GeForce NOW members, with more to come later this month.

Gears 5 comes to the service this GFN Thursday. Keep reading to find out what other entries from the Xbox library will be streaming on GeForce NOW soon.

Also, time’s almost up on an exclusive discount for six-month GeForce NOW Priority memberships. Sign up today to save 40% before the offer ends on Sunday, May 21.

All Geared Up

Gears 5 on GeForce NOW
The gang’s all here.

NVIDIA and Microsoft have been working together to bring the first Xbox PC titles to the GeForce NOW library. With their gaming fueled by GeForce GPU servers in the cloud, members can access the best of Xbox Game Studios and Bethesda titles across nearly any device, including underpowered PCs, Macs, iOS and Android mobile devices, NVIDIA SHIELD TV, supported smart TVs and more.

Gears 5 from The Coalition is the first PC title from Xbox Game Studios to hit GeForce NOW. The latest entry in the Gears saga includes an acclaimed campaign playable solo or cooperatively, plus a variety of PvE and PvP modes to team up and battle in.

More Microsoft titles will follow shortly, starting with Deathloop, Grounded and Pentiment on Thursday, May 25.

Members will be able to stream these Xbox PC hits purchased through Steam on PCs, macOS devices, Chromebooks, smartphones and other devices. Support for Microsoft Store will become available in the coming months. Learn more about Xbox PC game support on GeForce NOW.

GeForce NOW Priority members can skip the wait and play Gears 5 or one of the other 1,600+ supported titles at 1080p 60 frames per second. Or go Ultimate for an upgraded experience, playing at up to 4K 120 fps for gorgeous graphics, or up to 240 fps for ultra-low latency that gives the competitive edge.

Microsoft on GeForce NOW
Like peanut butter and jelly.

GeForce NOW members will see more PC games from Xbox added regularly and can keep up with the latest news and release dates through GFN Thursday updates.

Green Light Special

The latest GeForce NOW app updates are rolling out now. Version 2.0.52 brings a few fit-and-finish updates for members, including a new way to easily catch game discounts, content and more.

Wall of Games GeForce NOW
Look for the latest deals, downloadable content and more in the latest GeForce NOW app update.

Promotional tags can be found on featured games throughout the app on PC and macOS. The tags are curated to highlight the most compelling offers available on the 1,600+ GeForce NOW-supported games. Keep an eye out for these promotional tags, which showcase new downloadable content, discounts, free games and more.

The update also includes in-app search improvements, surround-sound support in the browser experience on Windows and macOS, updated in-game button prompts for members using DualShock 4 and DualSense controllers, and more. Check out the in-app release highlights for more info.

Play for Today

Outlast Trials on GeForce NOW
They say things aren’t so scary when you’re with friends. ‘The Outlast Trials’ aims to prove them wrong.

Don’t get spooked in The Outlast Trials, newly supported this week on GeForce NOW. Go it alone or team up in this multiplayer edition of the survival horror franchise. Avoid the monstrosities waiting in the Murkoff experiments while using new tools to aid stealth, create opportunities to flee, slow enemies and more.

With support for more games every week, there’s always a new adventure around the corner. Here are this week’s additions:

  • Tin Hearts (New release on Steam, May 16)
  • The Outlast Trials (New release on Steam, May 18)
  • Gears 5 (Steam)

With the weekend kicking off, what are you gearing up to play? Let us know on Twitter or in the comments below.

Into the Omniverse: Adobe Substance 3D, NVIDIA Omniverse Enhance Creative Freedom Within 3D Workflows

Editor’s note: This is the first installment of our monthly Into the Omniverse series, which highlights the latest advancements to NVIDIA Omniverse furthering the evolution of the metaverse with the OpenUSD framework, and showcases how artists, developers and enterprises can transform their workflows with the platform.

An update to the Omniverse Connector for Adobe Substance 3D Painter will save 3D creators across industries significant time and effort. New capabilities include an export feature using Universal Scene Description (OpenUSD), an open, extensible file framework enabling non-destructive workflows and collaboration in scene creation.

Benjamin Samar, technical director of video production company Elara Systems, is using the Adobe Substance 3D Painter Connector to provide a “uniquely human approach to an otherwise clinical discussion,” he said.

Samar and his team tapped the Connector to create an animated public-awareness video for sickle cell disease. The video aims to help adolescents experiencing sickle cell disease understand the importance of quickly telling an adult or a medical professional if they’re experiencing symptoms.

According to Samar, the Adobe Substance 3D Painter Connector for Omniverse was especially useful for setting up all of the video’s environments and characters — before bringing them into the USD Composer app for scene composition and real-time RTX rendering of the high-quality visuals.

“By using this Connector, materials were automatically imported, converted to Material Definition Language and ready to go inside USD Composer with a single click,” he said.

The Adobe Substance 3D Art and Development team itself uses Omniverse in its workflows. The team’s End of Summer project fostered collaboration and creativity among the Adobe artists in Omniverse, and resulted in stunningly rich and realistic visuals.

Learn more about how they used Adobe Substance 3D tools with Unreal Engine 5 and Omniverse in this on-demand NVIDIA GTC session, and get an exclusive behind-the-scenes look at Adobe’s NVIDIA Studio-accelerated workflows in the making of this project.

Plus, technical artists are using Adobe Substance 3D and Omniverse to create scratches and other defects on 3D objects to train vision AI models.

Adobe and Omniverse workflows offer creators improved efficiency and flexibility — whether they’re training AI models, animating an educational video to improve medical knowledge or bringing a warm summer scene to life.

And soon, the next release of the Adobe Substance 3D Painter Connector for Omniverse will further streamline their creative processes.

Connecting the Dots for a More Seamless Workflow

Version 203.0 of the Adobe Substance 3D Painter Connector for Omniverse, coming mid-June, will offer new capabilities that enable more seamless workflows.

Substance 3D Painter’s new OpenUSD export feature, compatible with version 8.3.0 of the app and above, allows users to export textures using any built-in or user-defined preset to dynamically build OmniPBR shaders — programs that calculate the appropriate levels of light, darkness and color during 3D rendering — in USD Composer.
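
To give a flavor of the structure such an export produces, here's a hedged sketch using Pixar's pxr Python API to author a USD material whose MDL surface output points at an OmniPBR shader. The prim paths, texture name and inputs are illustrative; the exact attributes Painter writes may differ:

```python
# Sketch: author a USD material whose MDL surface output references the
# OmniPBR shader. Prim paths and the texture file are placeholders; the
# exact attributes the Painter Connector writes may differ.
from pxr import Usd, UsdShade, Sdf

stage = Usd.Stage.CreateNew("painted_asset.usda")
material = UsdShade.Material.Define(stage, "/World/Looks/Painted")

shader = UsdShade.Shader.Define(stage, "/World/Looks/Painted/Shader")
shader.SetSourceAsset(Sdf.AssetPath("OmniPBR.mdl"), "mdl")
shader.SetSourceAssetSubIdentifier("OmniPBR", "mdl")
shader.CreateInput("diffuse_texture", Sdf.ValueTypeNames.Asset).Set(
    Sdf.AssetPath("./textures/base_color.png"))

# Connect the shader to the material's MDL render context.
material.CreateSurfaceOutput("mdl").ConnectToSource(
    shader.ConnectableAPI(), "out")
stage.Save()
```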

To further speed and ease workflows, the Connector update will remove “rotating texture folders,” uniquely generated temporary directories that textures were exported to with each brush stroke.

With each change the artist makes, textures will now save over the same path, greatly speeding the process for locally saved projects.

Get Plugged Into the Omniverse

Discover the latest in AI, graphics and more by watching NVIDIA founder and CEO Jensen Huang’s COMPUTEX keynote on Sunday, May 28, at 8 p.m. PT.

#SetTheScene for your Adobe and Omniverse workflow by joining the latest community challenge. Share your best 3D environments on social media with the #SetTheScene hashtag for a chance to be featured on channels for NVIDIA Omniverse (Twitter, LinkedIn, Instagram) and NVIDIA Studio (Twitter, Facebook, Instagram).

Get started with NVIDIA Omniverse by downloading the standard license free, or learn how Omniverse Enterprise can connect your team. Developers can get started with Omniverse resources.

Stay up to date on the platform by subscribing to the newsletter, and follow NVIDIA Omniverse on Instagram, Medium and Twitter. For more, join the Omniverse community and check out the Omniverse forums, Discord server, Twitch and YouTube channels.

Featured image courtesy of Adobe Substance 3D art and development team.

Mammoth Mission: How Colossal Biosciences Aims to ‘De-Extinct’ the Woolly Mammoth

Ten thousand years after the last woolly mammoths vanished with the last Ice Age, a team of computational biologists is on a mission to bring them back within five years.

Led by synthetic biology pioneer George Church, Colossal Biosciences is also seeking to return the dodo bird and Tasmanian tiger, as well as help save current-day endangered species.

“The woolly mammoth is a very iconic species to bring back,” said Eriona Hysolli, head of biological sciences at Colossal Biosciences, which is based in Austin, Texas. “In addition, we see that pipeline as a proxy for conservation, given that elephants are endangered and much of this work directly benefits them.”

There’s plenty of work to be done on endangered species, as well.

Critically endangered, the African forest elephant has declined by nearly 90% in the past three decades, according to Colossal. Poaching took more than 100,000 African elephants between 2010 and 2012 alone, according to the company.

“We might lose these elephant species in our lifetime if their numbers continue to dwindle,” said Hysolli.

Humans caused the extinction of many species, but computational biologists are now trying to bring them back with CRISPR and other gene-editing technologies, leaps in AI, and bioinformatics tools and technology, such as the NVIDIA Parabricks software suite for genomic analysis.

To bring back a woolly mammoth, scientists at Colossal start by sequencing mammoth and elephant genomes and identifying what makes them similar and different. Then they use Asian elephant cells to engineer the mammoth changes responsible for cold-adaptation traits, transferring the nuclei of edited cells into enucleated elephant eggs before implanting them into a healthy Asian elephant surrogate.

Tech Advances Drive Genomics Leaps 

It took enormous effort over two decades, not to mention $3 billion in funding, to first sequence the human genome. But that’s now been reduced to mere hours and under $200 per whole genome, thanks to the transformative impact of AI and accelerated computing.

It’s a story well known to Colossal co-founder Church. The Harvard Medical School professor and co-founder of roughly 50 biotech startups has been at the forefront of genetics research for decades.

“There’s been about a 20 millionfold reduction in price, and a similar improvement in quality in a little over a decade, or a decade and a half,” Church said in a recent interview on the TWiT podcast.

Research to Complete Reference Genome Puzzle

Colossal’s work to build a reference genome of the woolly mammoth is similar to trying to complete a puzzle.

DNA sequences from bone samples are assembled in silico. But degradation of the DNA over time means that not all the pieces are there. The gaps to be filled can be guided by the genome of an Asian elephant, the mammoth’s closest living relative.

Once a rough representative genome sequence is configured, secondary analysis takes place, which is where GPU acceleration with Parabricks comes in.

The suite of bioinformatic tools in Parabricks can provide more than 100x acceleration of industry-standard tools used for alignment and variant calling. In the alignment step, the short fragments, or reads, from the sequenced sample are aligned in the correct order, using the reference genome, which in this case is the genome of the Asian elephant. Then, in the variant-calling step, Parabricks tools identify the variants, or differences, between the sequenced whole genome mammoth samples and the Asian elephant reference.
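
As an illustration of those two steps, the sketch below drives Parabricks' pbrun command-line tools from Python: fq2bam for GPU-accelerated alignment, then deepvariant for variant calling. The file names are placeholders standing in for the Asian elephant reference and a sequenced mammoth sample:

```python
# Sketch of the two GPU-accelerated steps described above, using the
# Parabricks pbrun CLI. File names are placeholders for the Asian
# elephant reference genome and a sequenced mammoth sample.
import subprocess

# Alignment: map the sample's reads against the elephant reference.
subprocess.run([
    "pbrun", "fq2bam",
    "--ref", "asian_elephant.fa",
    "--in-fq", "mammoth_R1.fastq.gz", "mammoth_R2.fastq.gz",
    "--out-bam", "mammoth_aligned.bam",
], check=True)

# Variant calling: find differences between sample and reference.
subprocess.run([
    "pbrun", "deepvariant",
    "--ref", "asian_elephant.fa",
    "--in-bam", "mammoth_aligned.bam",
    "--out-variants", "mammoth_variants.vcf",
], check=True)
```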

In September, Colossal Biosciences spun out Form Bio, which offers a breakthrough computational life sciences platform, to aid its efforts and commercialize scientific innovations. Form Bio is a member of NVIDIA Inception, a program that provides companies with technology support and AI platform guidance.

Parabricks includes some of the same tools as the open-source ones that Form Bio was using, making it easy to replace them with NVIDIA GPU-accelerated versions of those tools, said Brandi Cantarel, vice president of bioinformatics at Form Bio.

Compared with the open-source software on CPUs, Parabricks running on GPUs enables Colossal to complete their end-to-end sequence analysis 12x faster and at one-quarter the cost, accelerating the research.

“We’re getting very comparable or exactly the same outputs, and it was faster and cheaper,” said Cantarel.

Analysis Targeting Cold Tolerance for Woolly Mammoth 

A lot is at stake in the sequencing and analysis.

The Form Bio platform hosts tools that can assess whether researchers make the right CRISPR edits and assist in analysis for whether cells are edited.

“Can we identify what are the targets that we need to actually go after and edit and engineer? The answer is absolutely yes, and we’ve gotten very good at selecting impactful genetic differences,” said Hysolli.

Another factor to consider is human contamination of samples. For each sample researchers examine, they must run analysis against human cell references to discard those contaminants.

Scientists have gathered multiple specimens of woolly mammoths over the years, and the best are tooth or bone samples found in permafrost. “We benefit from the fact that woolly mammoths were well-preserved because they lived in an Arctic environment,” said Hysolli.

An Asian elephant is 99.6% genetically identical to a mammoth, according to Ben Lamm, Colossal CEO and co-founder.

“We’re just targeting about 65 genes that represent the cold tolerance, the core phenotypes that we’re looking for,” he recently said on stage at South by Southwest in Austin.

Benefits to Biodiversity, Conservation and Humanity

Colossal aims to create reference genomes for species, like the mammoth, that represent broad population samples. They’re looking at mammoths from different regions of the globe and periods in time. And it’s necessary to parse the biodiversity and do more sequencing, according to researchers at the company.

“As we lose biodiversity, it’s important to bring back or restore species and their ecosystems, which in turn positively impacts ecology and supports conservation,” said Hysolli.

Population genetics is important. Researchers need to understand how different and similar these animals are to each other so that in the future they can create thriving populations, she said.

That ensures better chances of survival. “We need to make sure — that’s what makes a thriving population when you rewild,” said Hysolli, referring to when the team introduces the species back into an Arctic habitat.

It’s also been discovered that elephants are more resistant to cancer — so researchers are looking at the genetic factors and how that might translate for humans.

“This work does not only benefit Colossal’s de-extinction efforts and conservation, but these technologies we build can be applied to bettering human health and treating diseases,” said Hysolli.

Learn more about NVIDIA Parabricks for accelerated genomic sequencing analysis.

Chip Manufacturing ‘Ideal Application’ for AI, NVIDIA CEO Says

Chip manufacturing is an “ideal application” for NVIDIA accelerated and AI computing, NVIDIA founder and CEO Jensen Huang said Tuesday.

Detailing how the latest advancements in computing are accelerating “the world’s most important industry,” Huang spoke at the ITF World 2023 semiconductor conference in Antwerp, Belgium.

Huang delivered his remarks via video to a gathering of leaders from across the semiconductor, technology and communications industries.

“I am thrilled to see NVIDIA accelerated computing and AI in service of the world’s chipmaking industry,” Huang said as he detailed how advancements in accelerated computing, AI and semiconductor manufacturing intersect.

AI, Accelerated Computing Step Up

The exponential performance increase of the CPU has been the governing dynamic of the technology industry for nearly four decades, Huang said.

But over the past few years CPU design has matured, he said. The rate at which semiconductors become more powerful and efficient is slowing, even as demand for computing capability soars.

“As a result, global demand for cloud computing is causing data center power consumption to skyrocket,” Huang said.

Huang said that striving for net zero while supporting the “invaluable benefits” of more computing power requires a new approach.

The challenge is a natural fit for NVIDIA, which pioneered accelerated computing, coupling the parallel processing capabilities of GPUs with CPUs.

This acceleration, in turn, sparked the AI revolution. A decade ago, deep learning researchers such as Alex Krizhevsky, Ilya Sutskever and Geoffrey Hinton discovered that GPUs could be cost-effective supercomputers.

Since then, NVIDIA reinvented its computing stack for deep learning, opening up “multitrillion-dollar opportunities in robotics, autonomous vehicles and manufacturing,” Huang said.

By offloading and accelerating compute-intensive algorithms, NVIDIA routinely speeds up applications by 10-100x while reducing power and cost by an order of magnitude, Huang explained.

Together, AI and accelerated computing are transforming the technology industry. “We are experiencing two simultaneous platform transitions — accelerated computing and generative AI,” Huang said.

AI, Accelerated Computing Come to Chip Manufacturing

Huang explained that advanced chip manufacturing requires over 1,000 steps, producing features the size of a biomolecule. Each step must be nearly perfect to yield functional output.

“Sophisticated computational sciences are performed at every stage to compute the features to be patterned and to do defect detection for in-line process control,” Huang said. “Chip manufacturing is an ideal application for NVIDIA accelerated and AI computing.”

Huang outlined several examples of how NVIDIA GPUs are becoming increasingly integral to chip manufacturing.

Companies like D2S, IMS Nanofabrication, and NuFlare build mask writers — machines that create photomasks, stencils that transfer patterns onto wafers — using electron beams. NVIDIA GPUs accelerate the computationally demanding tasks of pattern rendering and mask process correction for these mask writers.

Semiconductor manufacturer TSMC and equipment providers KLA and Lasertec use extreme ultraviolet light, known as EUV, and deep ultraviolet light, or DUV, for mask inspection. NVIDIA GPUs play a crucial role here, too, processing classical physics modeling and deep learning to generate synthetic reference images and detect defects.

KLA, Applied Materials, and Hitachi High-Tech use NVIDIA GPUs in their e-beam and optical wafer inspection and review systems.

And in March, NVIDIA announced that it is working with TSMC, ASML and Synopsys to accelerate computational lithography.

Computational lithography simulates Maxwell’s equations of light behavior passing through optics and interacting with photoresists, Huang explained.
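
In the simplest coherent-imaging approximation, a textbook sketch rather than cuLitho's actual algorithm, the aerial image projected onto the wafer is the squared magnitude of the mask transmission convolved with the optical system's point-spread function:

```latex
% Coherent-imaging approximation: aerial image intensity I from mask
% transmission m and point-spread function h. A simplified textbook
% model, not cuLitho's algorithm.
I(x, y) = \bigl| (m * h)(x, y) \bigr|^2
        = \left| \iint m(u, v)\, h(x - u,\, y - v)\, \mathrm{d}u\, \mathrm{d}v \right|^2
```

Evaluating such convolutions at full-chip scale, across billions of mask features, is what makes the workload so compute-intensive and such a natural fit for GPUs.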

Computational lithography is the largest computational workload in chip design and manufacturing, consuming tens of billions of CPU hours annually. Massive data centers run 24/7 to create reticles for new chips.

Introduced in March, NVIDIA cuLitho is a software library with optimized tools and algorithms for GPU-accelerated computational lithography.

“We have already accelerated the processing by 50 times,” Huang said. “Tens of thousands of CPU servers can be replaced by a few hundred NVIDIA DGX systems, reducing power and cost by an order of magnitude.”

The savings will reduce carbon emissions or enable new algorithms to push beyond 2 nanometers, Huang said.

What’s Next?

What’s the next wave of AI? Huang described a new kind of AI — “embodied AI,” or intelligent systems that can understand, reason about and interact with the physical world.

He said examples include robotics, autonomous vehicles and even chatbots that are smarter because they understand the physical world.

Huang offered his audience a look at NVIDIA VIMA, a multimodal embodied AI. VIMA, Huang said, can perform tasks from visual and text prompts, such as “rearranging objects to match this scene.”

It can learn concepts and act accordingly, such as “This is a widget,” “That’s a thing” and then “Put this widget in that thing.” It can also learn from demonstrations and stay within specified boundaries, Huang said.

VIMA runs on NVIDIA AI, and its digital twin runs in NVIDIA Omniverse, a 3D development and simulation platform. Huang said that physics-informed AI could learn to emulate physics and make predictions that obey physical laws.

Researchers are building systems that mesh information from real and virtual worlds on a vast scale.

NVIDIA is building a digital twin of our planet, called Earth-2, which will first predict the weather, then long-range weather, and eventually climate. NVIDIA’s Earth-2 team has created FourCastNet, a physics-AI model that emulates global weather patterns 50,000 to 100,000 times faster than traditional numerical weather models.

FourCastNet runs on NVIDIA AI, and the Earth-2 digital twin is built in NVIDIA Omniverse.

Such systems promise to address the greatest challenges of our time, such as the need for cheap, clean energy.

For example, researchers at the U.K.’s Atomic Energy Authority and the University of Manchester are creating a digital twin of their fusion reactor, using physics-AI to emulate plasma physics and robotics to control the reactions and sustain the burning plasma.

Huang said scientists could explore hypotheses by testing them in the digital twin before activating the physical reactor, improving energy yield and predictive maintenance and reducing downtime. “The reactor plasma physics-AI runs on NVIDIA AI, and its digital twin runs in NVIDIA Omniverse,” Huang said.

Such systems hold promise for further advancements in the semiconductor industry. “I look forward to physics-AI, robotics and Omniverse-based digital twins helping to advance the future of chip manufacturing,” Huang said.
