Cash In: ‘PAYDAY 3’ Streams on GeForce NOW

Time to get the gang back together — PAYDAY 3 streams on GeForce NOW this week.

It’s one of 11 titles joining the cloud this week, including Party Animals.

The Perfect Heist

PAYDAY 3 on GeForce NOW
Not pictured: the crew member in a fuzzy bunny mask. He stayed home.

PAYDAY 3 is the highly anticipated sequel to one of the world’s most popular co-op shooters. Step out of retirement and back into the life of crime in the shoes of the Payday Gang, the envy of their peers and the nightmare of law enforcement wherever they go. Set several years after the end of the crew’s reign of terror over Washington, D.C., the game reassembles the group to deal with the threat that’s roused them out of early retirement.

Upgrade to a GeForce NOW Ultimate membership to pull off every heist at the highest quality. Ultimate members can stream on GeForce RTX 4080 rigs with support for up to 4K resolution at 120 frames per second on PCs and Macs, a gaming experience so seamless that it would be a crime to stream on anything less.

Game On

Party Animals on GeForce NOW
Paw it out with friends on nearly any device.

There’s always more action every GFN Thursday. Here’s the full list of this week’s GeForce NOW library additions:

  • HumanitZ (New release on Steam, Sept. 18)
  • Party Animals (New release on Steam, Sept. 20)
  • PAYDAY 3 (New release on Steam, Epic Games Store, Xbox PC Game Pass, Sept. 21)
  • Warhaven (New release on Steam)
  • 911 Operator (Epic Games Store)
  • Ad Infinitum (Steam)
  • Chained Echoes (Xbox, available on PC Game Pass)
  • Deceit 2 (Steam)
  • The Legend of Tianding (Xbox, available on PC Game Pass)
  • MechWarrior 5: Mercenaries (Xbox, available on PC Game Pass)
  • Sprawl (Steam)

Starting today, the Cyberpunk 2077 2.0 patch will also be supported, adding DLSS 3.5 technology and other new features.

What are you planning to play this weekend? Let us know on Twitter or in the comments below.

Virtually Incredible: Mercedes-Benz Prepares Its Digital Production System for Next-Gen Platform With NVIDIA Omniverse, MB.OS and Generative AI

Mercedes-Benz is using digital twins for production with help from NVIDIA Omniverse, a platform for developing Universal Scene Description (OpenUSD) applications to design, collaborate, plan and operate manufacturing and assembly facilities.

Mercedes-Benz’s new production techniques will bring its next-generation vehicle portfolio into its manufacturing facilities operating in Rastatt, Germany; Kecskemét, Hungary; and Beijing, China — and offer a blueprint for its more than 30 factories worldwide. This “Digital First” approach enhances efficiency, avoids defects and saves time, marking a step-change in the flexibility, resilience and intelligence of the Mercedes-Benz MO360 production system.

The digital twin in production helps ensure Mercedes-Benz assembly lines can be retooled, configured and optimized in physically accurate simulations first. The new assembly lines in the Kecskemét plant will enable production of vehicles based on the newly launched Mercedes Modular Architecture (MMA), and the lines themselves are developed virtually using digital twins in Omniverse.

By leveraging Omniverse, Mercedes-Benz can interact directly with its suppliers, reducing coordination processes by 50%. Using a digital twin in production doubles the speed for converting or constructing an assembly hall, while improving the quality of the processes, according to the automaker.

“Using NVIDIA Omniverse and AI, Mercedes-Benz is building a connected, digital-first approach to optimize its manufacturing processes, ultimately reducing construction time and production costs,” said Rev Lebaredian, vice president of Omniverse and simulation technology at NVIDIA, during a digital event held earlier today.

In addition, the introduction of AI opens up new areas of energy and cost savings. The Rastatt plant is being used to pioneer digital production in the paint shop. Mercedes-Benz used AI to monitor relevant sub-processes in the pilot testing, which led to energy savings of 20%.

Supporting State-of-the-Art Software Systems

Next-generation Mercedes-Benz vehicles will feature its new operating system “MB.OS,” which will be standard across its entire vehicle portfolio and deliver premium software capabilities and experiences across all vehicle domains.

Mercedes-Benz has partnered with NVIDIA to develop software-defined vehicles. Its fleets will be built on NVIDIA DRIVE Orin and DRIVE software, with intelligent driving capabilities tested and validated in the NVIDIA DRIVE Sim platform, which is also built on Omniverse.

The automaker’s MO360 production system will enable it to produce electric, hybrid and gas models on the same production lines and to scale the manufacturing of electric vehicles. The implementation of MB.OS in production will allow its cars to roll off assembly lines with the latest versions of vehicle software.

“Mercedes-Benz is initiating a new era of automotive manufacturing thanks to the integration of artificial intelligence, MB.OS and the digital twin based on NVIDIA Omniverse into the MO360 ecosystem,” said Jörg Burzer, member of the board of the Mercedes-Benz Group AG, Production, Quality and Supply Chain Management. “With our new ‘Digital First’ approach, we unlock efficiency potential even before the launch of our MMA models in our global production network and can accelerate the ramp-up significantly.”

Flexible Factories of the Future

Avoiding costly production shutdowns is critical. Running simulations in NVIDIA Omniverse enables factory planners to optimize factory floor and production line layouts, and to validate supply routes and new lines without disrupting ongoing production.

This virtual approach also enables efficient design of new lines and change management for existing lines while reducing downtime and helping improve product quality. For the world’s automakers, much is at stake across the entire software development stack, from chip to cloud.

Omniverse Collaboration for Efficiencies 

The Kecskemét plant is the first with a full digital twin of the entire factory. This virtual environment enables development at the heart of assembly, between the plant’s tech and trim lines, and the plan is for the new Kecskemét factory hall to move straight into full production.

Collaboration in Omniverse has enabled plant suppliers and planners to work together in the virtual environment, so that layout options and automation changes can be incorporated and validated in real time. This accelerates how quickly new production lines can reach maximum capacity and reduces the risk of rework or stoppages.

Virtual collaboration with digital twins can accelerate planning and implementation of projects by weeks, as well as translate to significant cost savings for launching new manufacturing lines.

Learn more about NVIDIA Omniverse and DRIVE Orin.

Oracle Cloud Infrastructure Offers New NVIDIA GPU-Accelerated Compute Instances

With generative AI and large language models (LLMs) driving groundbreaking innovations, the computational demands for training and inference are skyrocketing.

These modern-day generative AI applications demand full-stack accelerated compute, starting with state-of-the-art infrastructure that can handle massive workloads with speed and accuracy. To help meet this need, Oracle Cloud Infrastructure today announced general availability of NVIDIA H100 Tensor Core GPUs on OCI Compute, with NVIDIA L40S GPUs coming soon.

NVIDIA H100 Tensor Core GPU Instance on OCI

The OCI Compute bare-metal instances with NVIDIA H100 GPUs, powered by the NVIDIA Hopper architecture, enable an order-of-magnitude leap for large-scale AI and high-performance computing, with unprecedented performance, scalability and versatility for every workload.

Organizations using NVIDIA H100 GPUs obtain up to a 30x increase in AI inference performance and a 4x boost in AI training compared with the NVIDIA A100 Tensor Core GPU. The H100 GPU is designed for resource-intensive computing tasks, including training LLMs and running inference on them.

The BM.GPU.H100.8 OCI Compute shape includes eight NVIDIA H100 GPUs, each with 80GB of HBM3 GPU memory. Between the eight GPUs, 3.2TB/s of bisection bandwidth enables each GPU to communicate directly with all seven others via NVIDIA NVSwitch and NVLink 4.0 technology. The shape also includes 16 local NVMe drives with a capacity of 3.84TB each, 4th Gen Intel Xeon processors with 112 cores and 2TB of system memory.

In a nutshell, this shape is optimized for organizations’ most challenging workloads.

Depending on workload timelines and sizes, OCI Supercluster allows organizations to scale their NVIDIA H100 GPU usage from a single node up to tens of thousands of H100 GPUs over a high-performance, ultra-low-latency network.

NVIDIA L40S GPU Instance on OCI

The NVIDIA L40S GPU, based on the NVIDIA Ada Lovelace architecture, is a universal GPU for the data center, delivering breakthrough multi-workload acceleration for LLM inference and training, visual computing and video applications. The OCI Compute bare-metal instances with NVIDIA L40S GPUs will be available for early access later this year, with general availability coming early in 2024.

These instances will offer an alternative to the NVIDIA H100 and A100 GPU instances for tackling smaller- to medium-sized AI workloads, as well as for graphics and video compute tasks. The NVIDIA L40S GPU achieves up to a 20% performance boost for generative AI workloads and as much as a 70% improvement in fine-tuning AI models compared with the NVIDIA A100.

The BM.GPU.L40S.4 OCI Compute shape includes four NVIDIA L40S GPUs, along with the latest-generation Intel Xeon CPU with up to 112 cores, 1TB of system memory, 15.36TB of low-latency NVMe local storage for caching data and 400GB/s of cluster network bandwidth. This instance was created to tackle use cases ranging from LLM training, fine-tuning and inference to NVIDIA Omniverse workloads and industrial digitalization, 3D graphics and rendering, video transcoding and FP32 HPC.

NVIDIA and OCI: Enterprise AI

This collaboration between OCI and NVIDIA will enable organizations of all sizes to join the generative AI revolution by providing them with state-of-the-art NVIDIA H100 and L40S GPU-accelerated infrastructure.

Access to NVIDIA GPU-accelerated instances may not be enough, however. Unlocking the maximum potential of NVIDIA GPUs on OCI Compute means having an optimal software layer. NVIDIA AI Enterprise streamlines the development and deployment of enterprise-grade accelerated AI software with open-source containers and frameworks optimized for the underlying NVIDIA GPU infrastructure, all with the help of support services.

To learn more, join NVIDIA at Oracle Cloud World in the AI Pavilion, attend this session on the new OCI instances on Wednesday, Sept. 20, and visit these web pages on Oracle Cloud Infrastructure, OCI Compute, how Oracle approaches AI and the NVIDIA AI Platform.

Meet the Omnivore: Industrial Designer Blends Art and OpenUSD to Create 3D Assets for AI Training

Editor’s note: This post is a part of our Meet the Omnivore series, which features individual creators and developers who use NVIDIA Omniverse and OpenUSD to accelerate their 3D workflows and create virtual worlds.

As a student at the Queensland University of Technology (QUT) in Australia, Emily Boehmer was torn between pursuing the creative arts or science.

And then she discovered industrial design, which allowed her to dive into research and coding while exploring visualization workflows like sketching, animation and 3D modeling.

Now, Boehmer is putting her skills to practice as a design intern at BMW Group’s Technology Office in Munich. The team uses NVIDIA Omniverse, a platform for developing and connecting 3D tools and applications, and Universal Scene Description — aka OpenUSD — to enhance its synthetic data generation pipelines.

Boehmer creates realistic 3D assets that can be used with SORDI.ai, short for Synthetic Object Recognition Dataset for Industries. Published by BMW Group, Microsoft and NVIDIA, SORDI.ai helps developers and researchers streamline and accelerate the training of AI for production. To automate image generation, the team developed an extension based on Omniverse Replicator, a software development kit for creating custom synthetic data generation tools.

As part of the SORDI.ai team, Boehmer uses Blender and Adobe Substance Painter to design 3D assets with high levels of physical accuracy and photorealism, helping ensure that synthetic data can be used to efficiently train AI models.

All the assets Boehmer creates are used to test and simulate autonomous robots on the NVIDIA Isaac Sim platform, which provides developers a suite of synthetic data generation capabilities that can power photorealistic, physically accurate virtual environments.

Creating Realistic 3D Assets for Training AI 

As a design intern, Boehmer’s main tasks are animation and 3D modeling. The process starts with taking photos of target objects. Then, she uses the 2D photos as references by lining them up with the 3D models in Blender.

3D objects can consist of thousands of polygons, so Boehmer creates two versions of the asset — one with a low number of polygons and one with a higher polygon count. The details of the high-poly version can be baked onto the low-poly model, helping maintain more details so the asset looks realistic.

Once the 3D assets are created, Boehmer uses the models to start assembling scenes. Her favorite aspect of the Omniverse platform is the flexibility of USD, because it allows her to easily make changes to 3D models.

USD workflows have enabled the BMW Group’s design teams to create many different scenes using the same components, as they can easily access all the USD files stored on Omniverse Nucleus. When creating portions of a scene, Boehmer pulls from dozens of USD models from SORDI.ai and adds them into scenes that will be used by other designers to assemble larger factory scenes.

Boehmer only has to update the USD file of the original asset to automatically apply changes to all reference files containing it.
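
This update-once, propagate-everywhere behavior comes from USD’s composition system. As a rough illustration (the file names and prim paths below are hypothetical, not BMW Group’s actual assets), a scene layer can reference a shared asset layer, so any edit saved to the asset file flows into every scene that references it:

```usda
#usda 1.0
(
    doc = "Hypothetical scene layer referencing a shared asset"
)

def Xform "PalletStation_01" (
    # Any change saved to pallet_asset.usda is picked up here automatically.
    references = @./pallet_asset.usda@</Pallet>
)
{
    double3 xformOp:translate = (2.5, 0, 0)
    uniform token[] xformOpOrder = ["xformOp:translate"]
}
```

Because the reference points at a layer rather than copying its contents, dozens of scenes can share one asset file without duplication.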

“It’s great to see USD support for both Blender and Substance Painter,” she said. “When I create 3D assets using USD, I can be confident that they’ll look and behave as expected in the scenes they’ll be placed in.”

Emily Boehmer’s creative process starts with photographing the object, then using that image as a reference to build and texture 3D models.

Building Factory Scenes With Synthetic Data

The Isaac Sim platform is a key part of the SORDI.ai team’s workflow. It’s used to develop pipelines that use generative AI and procedural algorithms for 3D scene generation. The team also developed an extension based on Omniverse Replicator that automates randomization within a scene when generating synthetic images.

“The role of design interns like me is to realistically model and texture the assets used for scenes built in Isaac Sim,” Boehmer said. “The more realistic the assets are, the more realistic the synthetic images can be and the more effective they are for training AI models for real scenarios.”

Data annotation — the process of labeling data like images, text, audio or video with relevant tags — makes it easier for AI to understand the data, but the manual process can be incredibly time-consuming, especially for large quantities of content. SORDI.ai addresses these challenges by using synthetic data to train AI.

When importing assets into Omniverse and creating USD versions of the files, Boehmer tags them with the appropriate data label. Once these assets have been put together in a scene, she can use Omniverse Replicator to generate images that are automatically annotated using the original labels.
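
The flow that makes this work, with labels attached once at import time and then propagated to every generated image, can be sketched in a few lines of Python (the asset names and record format below are invented for illustration, not SORDI.ai’s actual schema):

```python
# Simplified sketch of label propagation: each 3D asset carries a semantic
# label assigned at import time, and every synthetic image rendered from a
# scene inherits the labels of the assets placed in it -- no manual
# annotation pass is needed afterward.

def annotate_scene(assets, num_images):
    """Generate annotation records for synthetic images of one scene."""
    annotations = []
    for i in range(num_images):
        annotations.append({
            "image": f"render_{i:04d}.png",
            "objects": [{"label": a["label"], "asset": a["name"]}
                        for a in assets],
        })
    return annotations

# Hypothetical factory assets, tagged once when imported into Omniverse.
scene_assets = [
    {"name": "pallet_01", "label": "pallet"},
    {"name": "klt_box_03", "label": "small_load_carrier"},
]
records = annotate_scene(scene_assets, num_images=3)
print(records[0]["image"])                 # render_0000.png
print(records[0]["objects"][1]["label"])   # small_load_carrier
```

A real Replicator pipeline would also randomize lighting, poses and camera angles between frames; the point here is only that the labels travel with the assets.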

And using SORDI.ai, designers can set up scenes and generate thousands of annotated images with just one click.

Boehmer will be a guest on an Omniverse livestream on Wednesday, Sept. 20, where she’ll demonstrate how she uses Blender and Substance Painter in Omniverse for synthetic image generation pipelines.

Join In on the Creation

Anyone can build their own Omniverse extension or Connector to enhance their 3D workflows and tools. Creators and developers can download Omniverse for free, and enterprise teams can use the platform for their 3D projects.

Check out artwork from other “Omnivores” and submit projects in the gallery. See how creators are using OpenUSD to accelerate a variety of 3D workflows in the latest OpenUSD All Stars. And connect workflows to Omniverse with software from Adobe, Autodesk, Blender, Epic Games, Reallusion and more.

Get started with NVIDIA Omniverse by downloading the standard license free, or learn how Omniverse Enterprise can connect your team. Developers can get started with Omniverse resources and learn about OpenUSD. Explore the growing ecosystem of 3D tools connected to Omniverse.

Stay up to date on the platform by subscribing to the newsletter, and follow NVIDIA Omniverse on Instagram, Medium and Twitter. For more, join the Omniverse community and check out the Omniverse forums, Discord server, Twitch and YouTube channels.

Ray Shines with NVIDIA AI: Anyscale Collaboration to Help Developers Build, Tune, Train and Scale Production LLMs

Large language model development is about to reach supersonic speed thanks to a collaboration between NVIDIA and Anyscale.

At its annual Ray Summit developers conference, Anyscale — the company behind the fast-growing open-source unified compute framework for scalable computing — announced today that it is bringing NVIDIA AI to Ray open source and the Anyscale Platform. It will also be integrated into Anyscale Endpoints, a new service announced today that makes it easy for application developers to cost-effectively embed LLMs in their applications using the most popular open-source models.

These integrations can dramatically speed generative AI development and efficiency while boosting security for production AI, from proprietary LLMs to open models such as Code Llama, Falcon, Llama 2, SDXL and more.

Developers will have the flexibility to deploy open-source NVIDIA software with Ray or opt for NVIDIA AI Enterprise software running on the Anyscale Platform for a fully supported and secure production deployment.

Ray and the Anyscale Platform are widely used by developers building advanced LLMs for generative AI applications capable of powering intelligent chatbots, coding copilots and powerful search and summarization tools.

NVIDIA and Anyscale Deliver Speed, Savings and Efficiency

Generative AI applications are captivating the attention of businesses around the globe. Fine-tuning, augmenting and running LLMs requires significant investment and expertise. Together, NVIDIA and Anyscale can help reduce costs and complexity for generative AI development and deployment with a number of application integrations.

NVIDIA TensorRT-LLM, new open-source software announced last week, will support Anyscale offerings to supercharge LLM performance and efficiency and deliver cost savings. Also supported in the NVIDIA AI Enterprise software platform, TensorRT-LLM automatically scales inference to run models in parallel over multiple GPUs, which can provide up to 8x higher performance on NVIDIA H100 Tensor Core GPUs compared with prior-generation GPUs.

TensorRT-LLM includes custom GPU kernels and optimizations for a wide range of popular LLM models. It also implements the new FP8 numerical format available in the NVIDIA H100 Tensor Core GPU Transformer Engine and offers an easy-to-use and customizable Python interface.

NVIDIA Triton Inference Server software supports inference across cloud, data center, edge and embedded devices on GPUs, CPUs and other processors. Its integration can enable Ray developers to boost efficiency when deploying AI models from multiple deep learning and machine learning frameworks, including TensorRT, TensorFlow, PyTorch, ONNX, OpenVINO, Python, RAPIDS XGBoost and more.
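
Each model Triton serves is described by a small `config.pbtxt` file in the model repository. The fragment below is a hypothetical example for an ONNX image classifier; the model name and tensor shapes are made up for illustration:

```protobuf
name: "image_classifier"
platform: "onnxruntime_onnx"
max_batch_size: 8
input [
  {
    name: "input"
    data_type: TYPE_FP32
    dims: [ 3, 224, 224 ]
  }
]
output [
  {
    name: "scores"
    data_type: TYPE_FP32
    dims: [ 1000 ]
  }
]
```

Swapping the `platform` field (for example to a TensorRT or PyTorch backend) is what lets one serving layer front models from many frameworks.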

With the NVIDIA NeMo framework, Ray users will be able to easily fine-tune and customize LLMs with business data, paving the way for LLMs that understand the unique offerings of individual businesses.

NeMo is an end-to-end, cloud-native framework to build, customize and deploy generative AI models anywhere. It features training and inferencing frameworks, guardrailing toolkits, data curation tools and pretrained models, offering enterprises an easy, cost-effective and fast way to adopt generative AI.

Options for Open-Source or Fully Supported Production AI 

Ray open source and the Anyscale Platform enable developers to effortlessly move from open source to deploying production AI at scale in the cloud.

The Anyscale Platform provides fully managed, enterprise-ready unified computing that makes it easy to build, deploy and manage scalable AI and Python applications using Ray, helping customers bring AI products to market faster at significantly lower cost.

Whether developers use Ray open source or the supported Anyscale Platform, Anyscale’s core functionality helps them easily orchestrate LLM workloads. The NVIDIA AI integration can help developers build, train, tune and scale AI with even greater efficiency.

Ray and the Anyscale Platform run on accelerated computing from leading clouds, with the option to run on hybrid or multi-cloud computing. This helps developers easily scale up as they need more computing to power a successful LLM deployment.

The collaboration will also enable developers to begin building models on their workstations through NVIDIA AI Workbench and scale them easily across hybrid or multi-cloud accelerated computing once it’s time to move to production.

NVIDIA AI integrations with Anyscale are in development and expected to be available by the end of the year.

Developers can sign up to get the latest news on this integration as well as a free 90-day evaluation of NVIDIA AI Enterprise.

To learn more, attend the Ray Summit in San Francisco this week or watch the demo video below.

See this notice regarding NVIDIA’s software roadmap.

Shout at the Devil: Capcom’s ‘Devil May Cry 5’ Joins GeForce NOW

GFN Thursday is downright demonic, as Devil May Cry 5 comes to GeForce NOW.

Capcom’s action-packed third-person brawler leads 15 titles joining the GeForce NOW library this week, including Gears Tactics and The Crew Motorfest.

It’s also the last week to take on the Ultimate KovaaK’s Challenge. Get on the leaderboard today for a chance to win a 240Hz gaming monitor, a gaming Chromebook, GeForce NOW memberships or other prizes. The challenge ends on Thursday, Sept. 21.

The Devil Returns

Devil May Cry 5 on GeForce NOW
Jackpot!

Devil May Cry 5 is the next title from Capcom’s catalog to come to GeForce NOW. Members can stream all of its high-octane, stylish action at GeForce RTX quality to nearly any device, thanks to the power of GeForce NOW cloud gaming servers.

The threat of demonic power has returned to menace the world once again. Take on hordes of enemies as Nero, V or the legendary Dante with the ramped-up sword-and-gun gameplay that the series is known for. Battle epic bosses in adrenaline-fueled fights across the overrun Red Grave City — all to the beat of a truly killer soundtrack.

Take the action on the go thanks to the power of the cloud. GeForce NOW Priority members can take the fight with them across nearly any device at up to 1080p and 60 frames per second.

Kickin’ It Into High Gear

Gears Tactics on GeForce NOW
A squad of survivors is all it takes to stop the Locust threat.

Rise up and fight, members. Gears Tactics is the next PC Game Pass title to arrive in the cloud.

Gears Tactics is a fast-paced, turn-based strategy game from one of the most acclaimed video game franchises — Gears of War. Set a dozen years before the first Gears of War game, the Gears Tactics story opens as cities on the planet Sera begin falling to the monstrous threat rising from underground: the Locust Horde. With the government in disarray, a squad of survivors emerges as humanity’s last hope. Play as the defiant soldier Gabe Diaz to recruit, develop and command squads on a desperate mission to hunt down the relentless and powerful leader of the Locust army, Ukkon, the group’s monster-making mastermind.

Fight for survival and outsmart the enemy with the sharpness of 4K resolution streaming from the cloud with a GeForce NOW Ultimate membership.

Hit the Road, Jack

The Crew Motorfest on GeForce NOW
The best way to see Hawaii is by car, at 100 mph.

The Crew Motorfest also comes to GeForce NOW this week. The latest entry in Ubisoft’s racing franchise drops drivers into the open roads of Oahu, Hawaii. Get behind the wheel of 600+ iconic vehicles from the past, present and future, including sleek sports cars, rugged off-road vehicles and high-performance racing machines. Race alone or with friends through the bustling city of Honolulu, test off-roading skills on the ashy slopes of a volcano or kick back on the sunny beaches behind the wheel of a buggy.

Members can take a test drive from Sept. 14-17 with a five-hour free trial. Explore the vibrant Hawaiian open world, participate in thrilling driving activities and collect prestigious cars, with all progress carrying over to the full game purchase.

Take the pole position with a GeForce NOW Ultimate membership to stream The Crew Motorfest and more than 1,600 other titles at the highest frame rates. Upgrade today.

A New Challenge

Gunbrella on GeForce NOW
Rain, rain, go away. The umbrella is also a gun today.

With GeForce NOW, there’s always something new to play. Here’s what’s hitting the playlist this week:

  • Tavernacle! (New release on Steam, Sept. 11)
  • Gunbrella (New release on Steam, Sept. 13)
  • The Crew Motorfest (New release on Ubisoft Connect, Sept. 14)
  • Amnesia: The Bunker (Xbox, available on PC Game Pass)
  • Descenders (Xbox, available on PC Game Pass)
  • Devil May Cry 5 (Steam)
  • Gears Tactics (Steam and Xbox, available on PC Game Pass)
  • Last Call BBS (Xbox)
  • The Matchless Kungfu (Steam)
  • Mega City Police (Steam)
  • Opus Magnum (Xbox)
  • Remnant II (Epic Games Store)
  • Space Hulk: Deathwing – Enhanced Edition (Xbox)
  • Superhot (Xbox)
  • Vampyr (Xbox)

What are you planning to play this weekend? Let us know on Twitter or in the comments below.

Unlocking the Language of Genomes and Climates: Anima Anandkumar on Using Generative AI to Tackle Global Challenges

Generative AI-based models can not only learn and understand natural languages — they can learn the very language of nature itself, presenting new possibilities for scientific research.

Anima Anandkumar, Bren Professor at Caltech and senior director of AI research at NVIDIA, was recently invited to speak at the President’s Council of Advisors on Science and Technology.

At the talk, Anandkumar said that generative AI was described as “an inflection point in our lives,” with discussions swirling around how to “harness it to benefit society and humanity through scientific applications.”

On the latest episode of NVIDIA’s AI Podcast, host Noah Kravitz spoke with Anandkumar on generative AI’s potential to make splashes in the scientific community.

It can, for example, be fed DNA, RNA, viral and bacterial data to craft a model that understands the language of genomes. That model can help predict dangerous coronavirus variants to accelerate drug and vaccine research.

Generative AI can also predict extreme weather events like hurricanes or heat waves. Even with an AI boost, trying to predict natural events is challenging because of the sheer number of variables and unknowns.

“Those are the aspects we’re working on at NVIDIA and Caltech, in collaboration with many other organizations, to say, ‘How do we capture the multitude of scales present in the natural world?’” she said. “With the limited data we have, can we hope to extrapolate to finer scales? Can we hope to embed the right constraints and come up with physically valid predictions that make a big impact?”

Anandkumar added that to ensure AI models are responsibly and safely used, existing laws must be strengthened to prevent dangerous downstream applications.

She also talked about the AI boom, which is transforming the role of humans across industries, and problems yet to be solved.

“This is the research advice I give to everyone: the most important thing is the question, not the answer,” she said.

You Might Also Like

Jules Anh Tuan Nguyen Explains How AI Lets Amputee Control Prosthetic Hand, Video Games
A postdoctoral researcher at the University of Minnesota discusses his efforts to allow amputees to control their prosthetic limb — right down to the finger motions — with their minds.

Overjet’s Wardah Inam on Bringing AI to Dentistry
Overjet, a member of NVIDIA Inception, is moving fast to bring AI to dentists’ offices. Dr. Wardah Inam, CEO of the company, discusses using AI to improve patient care.

Immunai CTO and Co-Founder Luis Voloch on Using Deep Learning to Develop New Drugs
Luis Voloch talks about tackling the challenges of the immune system with a machine learning and data science mindset.

Subscribe to the AI Podcast: Now Available on Amazon Music

The AI Podcast is now available through Amazon Music.

In addition, get the AI Podcast through iTunes, Google Podcasts, Google Play, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn.

Make the AI Podcast better. Have a few minutes to spare? Fill out this listener survey.

NVIDIA Lends Support to Washington’s Efforts to Ensure AI Safety

In an event at the White House today, NVIDIA announced support for voluntary commitments that the Biden Administration developed to ensure advanced AI systems are safe, secure and trustworthy.

The news came the same day NVIDIA’s chief scientist, Bill Dally, testified before a U.S. Senate subcommittee seeking input on potential legislation covering generative AI. Separately, NVIDIA founder and CEO Jensen Huang will join other industry leaders in a closed-door meeting on AI Wednesday with the full Senate.

Seven companies including Adobe, IBM, Palantir and Salesforce joined NVIDIA in supporting the eight agreements the Biden-Harris administration released in July with support from Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI.

The commitments are designed to advance common standards and best practices to ensure the safety of generative AI systems until regulations are in place, the White House said. They include:

  • Testing the safety and capabilities of AI products before they’re deployed,
  • Safeguarding AI models against cyber and insider threats, and
  • Using AI to help meet society’s greatest challenges, from cancer to climate change.

Dally Shares NVIDIA’s Experience

In his testimony, Dally told the Senate subcommittee that government and industry should balance encouraging innovation in AI with ensuring models are deployed responsibly.

The subcommittee’s hearing, “Oversight of AI: Rules for Artificial Intelligence,” is among actions from policymakers around the world trying to identify and address potential risks of generative AI.

Earlier this year, the subcommittee heard testimonies from leaders of Anthropic, IBM and OpenAI, as well as academics such as Yoshua Bengio, a University of Montreal professor considered one of the godfathers of AI.

Dally, who leads a global team of more than 300 at NVIDIA Research, shared the witness table on Tuesday with Brad Smith, Microsoft’s president and vice chair. Dally’s testimony briefly encapsulated NVIDIA’s unique role in the evolution of AI over the last two decades.

How Accelerated Computing Sparked AI

He described how NVIDIA invented the GPU in 1999 to accelerate graphics, then adapted it for a broader role in parallel processing in 2006 with the CUDA programming platform. Over time, developers across diverse scientific and technical computing fields found this new form of accelerated computing could significantly advance their work.

Along the way, researchers discovered GPUs were also a natural fit for AI, because neural networks require massive parallel processing.

In 2012, the AlexNet model, trained on two NVIDIA GPUs, demonstrated human-like capabilities in image recognition. That result helped spark a decade of rapid advances using GPUs, leading to ChatGPT and other generative AI models used by hundreds of millions worldwide.

Today, accelerated computing and generative AI are showing the potential to transform industries, address global challenges and profoundly benefit society, said Dally, who chaired Stanford University’s computer science department before joining NVIDIA.

AI’s Potential and Limits

In written testimony, Dally provided examples of how AI is empowering professionals to do their jobs better than they might have imagined in fields as diverse as business, healthcare and climate science.

Like any technology, AI products and services have risks and are subject to existing laws and regulations that aim to mitigate those risks.

Industry also has a role to play in deploying AI responsibly. Developers set limits for AI models when they train them and define their outputs.

Dally noted that NVIDIA released NeMo Guardrails in April, open-source software that developers can use to guide generative AI applications toward producing accurate, appropriate and secure text responses. He said that NVIDIA also maintains internal risk-management guidelines for AI models.
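Guardrails of this kind are typically expressed as configuration rather than application code. A minimal, hypothetical sketch in the Colang format used by NeMo Guardrails might look like the following — the intent and message names here are illustrative, not taken from NVIDIA's examples:

```
define user ask off_topic
  "Can you help me with something unrelated to this product?"

define bot decline off_topic
  "I can only help with questions about this product."

define flow
  user ask off_topic
  bot decline off_topic
```

At runtime, the guardrails engine matches incoming user messages against the defined intents and steers the model's response along the declared flow instead of leaving the output entirely to the model.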

Eyes on the Horizon

Making sure that new and exceptionally large AI models are accurate and safe is a natural role for regulators, Dally suggested.

Subcommittee chair Sen. Richard Blumenthal (D-CT) welcomed Dally to the hearing.

He said that these “frontier” models are being developed at a gigantic scale. They exceed the capabilities of ChatGPT and other existing models that have already been well-explored by developers and users.

Dally urged the subcommittee to balance thoughtful regulation with the need to encourage innovation in an AI developer community that includes thousands of startups, researchers and enterprises worldwide. AI tools should be widely available to ensure a level playing field, he said.

During questioning, Senator Amy Klobuchar (D-MN) asked Dally why NVIDIA announced in March it’s working with Getty Images.

“At NVIDIA, we believe in respecting people’s intellectual property rights,” Dally replied. “We partnered with Getty to train large language models with a service called Picasso, so people who provided the original content got remunerated.”

In closing, Dally reaffirmed NVIDIA’s dedication to innovating generative AI and accelerated computing in ways that serve the best interests of all.

Mobility Gets Amped: IAA Show Floor Energized by Surge in EV Reveals, Generative AI

Generative AI’s transformative effect on the auto industry took center stage last week at the International Motor Show Germany, known as IAA, in Munich.

NVIDIA’s Danny Shapiro, VP of automotive marketing, explained in his IAA keynote how this driving force is accelerating innovation and streamlining processes — from advancing design, engineering and digital-twin deployment for optimizing manufacturing…to accelerating AV development with simulation…to enhancing retail experiences.

The generative AI message was also shared just ahead of the show in a fireside chat at NVIDIA headquarters with NVIDIA VP of Automotive Ali Kani and Aakash Arora, managing director and partner at Boston Consulting Group. They discussed the rapid pace of innovation and how generative AI will improve in-car experiences and transform the way vehicles are designed, manufactured and sold.

Electric Vehicles Dominate the Show Floor 

The auto industry’s move toward electrification was on full display at IAA, with a number of global automakers showcasing their current and upcoming electric mobility lineup.

Mercedes-Benz took the wraps off its Concept CLA Class, giving visitors insight into the brand’s future vision for the entry-level segment.

Designed on the upcoming Mercedes-Benz Modular Architecture (MMA) platform, the exterior of the Concept CLA Class teases an iconic design and evokes dynamic performance. Its interior provides the ultimate customer experience with exceptional comfort and convenience.

The combination of high performance, sustainability, safety and comfort paired with an outstanding digital experience will help Mercedes-Benz realize its Ambition 2039 vision to be net carbon neutral across its entire fleet of new vehicles by the end of the next decade.

As the first car to be developed on the MMA platform, the Concept CLA Class paves the way for next-gen electric-drive technology and features Mercedes-Benz’s new operating system, MB.OS, with automated driving capabilities powered by NVIDIA DRIVE. With an anticipated range of more than 466 miles, the CLA Class uses an 800V electric architecture to maximize efficiency, performance and charging speed. Configured for sporty, rear-wheel drive, its modular design will also be scalable for other vehicle segments.

Lotus conducted test drives at IAA of its Lotus Eletre Hyper-SUV, which features an immersive digital cockpit, a battery range of up to 370 miles and autonomous-driving capabilities powered by the NVIDIA DRIVE Orin system-on-a-chip. With DRIVE at the wheel, the all-electric car offers server-level computing power that can be continuously enhanced during the car’s lifetime through over-the-air updates.

Lotus Eletre Hyper-SUV. Image courtesy of Lotus.

U.S.-based Lucid Motors premiered its limited-production Lucid Air Midnight Dream Edition electric sedan at IAA. The sedan, created with the European market in mind, provides up to 496 miles of range.

The automaker also showcased other models, including its Lucid Air Pure, Air Touring and Air Grand Touring, which come with the DreamDrive Pro advanced driver-assistance system (ADAS) powered by the high-performance compute of NVIDIA DRIVE for a seamless automated driving experience.

Lucid Air Midnight Dream. Image courtesy of Lucid Motors.

China’s emerging EV makers — which have been quick to embrace the shift to electric powertrains and software-defined strategies — were also in force at IAA as they set their sights on the European market.

Auto giant BYD presented a diverse lineup of five EVs targeting the European market, along with the seven-seater DENZA D9 MPV, or multi-purpose vehicle, which features significant safety, performance and convenience options for drivers and passengers. DENZA is a joint venture brand between BYD and Mercedes-Benz.

The eco-friendly EVs demonstrate the latest in next-gen electric technology and underscore BYD’s position as a leading global car brand.

BYD booth at IAA. Image courtesy of BYD.

LeapMotor unveiled its new model, the C10 SUV, built on its LEAP 3.0 architecture. The vehicle is equipped with 30 high-resolution sensors, including lidar and 8-megapixel high-definition cameras, for accurate surround-perception capabilities. It’s powered by NVIDIA DRIVE Orin, which delivers 254 TOPS of compute to enable safe, high-speed and urban intelligent-driving capabilities.

LeapMotor C10 SUV. Image courtesy of LeapMotor.

XPENG’s inaugural presence at IAA served as the ideal opportunity to introduce its latest models to Europe, including its G9 and P7 EVs, with NVIDIA DRIVE Orin under the hood. Deliveries of the P7 recently commenced, with the vehicles now available in Norway, Sweden, Denmark and the Netherlands. The automaker’s intelligent G6 Coupe SUV, also powered by NVIDIA DRIVE Orin, will be made available to the European market next year.

XPENG G9 and P7. Image courtesy of XPENG.

Ecosystem Partners Paint IAA Show Floor Green

In addition to automakers, NVIDIA ecosystem partners at IAA showcased their latest innovations and developments in the mobility space:

  • DeepRoute.ai showed its Driver 3.0 HD Map-Free solution, built on NVIDIA DRIVE Orin and designed to offer a non-geofenced solution for mass-produced ADAS vehicles. The company plans to bring this NVIDIA-powered solution to the European market and to expand beyond it later next year.
  • DeepScenario showed how it’s using NVIDIA hardware for training and inference on its AI models.
  • dRISK, an NVIDIA DRIVE Sim ecosystem member, demonstrated its full-stack solution for training, testing and validating Level 2-Level 5 ADAS/AV/ADS software, preparing autonomous systems to handle regulatory requirements and the full complexity of the real world.
  • NODAR introduced GridDetect, its latest 3D vision product for Level 3 driving. Using off-the-shelf cameras and NVIDIA DRIVE Orin, NODAR’s latest system provides high-resolution, real-time 3D sensing at ranges of up to 1,000 meters and can detect objects as small as 10 centimeters at 150 meters. GridDetect also provides a comprehensive bird’s-eye view of objects in all conditions — including in challenging scenarios like nighttime, adverse weather and severe fog.
  • SafeAD demonstrated its perception technology for mapless driving, fleet map updates and validation processes.
NODAR GridDetect system for high-resolution, real-time 3D sensing. Image courtesy of NODAR.

A Quantum Boost: cuQuantum With PennyLane Lets Simulations Ride Supercomputers

Ten miles in from Long Island’s Atlantic coast, Shinjae Yoo is revving his engine.

The computational scientist and machine learning group lead at the U.S. Department of Energy’s Brookhaven National Laboratory is one of many researchers gearing up to run quantum computing simulations on a supercomputer for the first time, thanks to new software.

Yoo’s engine, the Perlmutter supercomputer at the National Energy Research Scientific Computing Center (NERSC), is using the latest version of PennyLane, a quantum programming framework from Toronto-based Xanadu. The open-source software, which builds on the NVIDIA cuQuantum software development kit, lets simulations run on high-performance clusters of NVIDIA GPUs.

The performance is key because researchers like Yoo need to process ocean-size datasets. He’ll run his programs across as many as 256 NVIDIA A100 Tensor Core GPUs on Perlmutter to simulate about three dozen qubits — the basic units of information that quantum computers use.

That’s about twice the number of qubits most researchers can model these days.
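The arithmetic behind that gap is simple: a dense statevector simulation stores 2^n complex amplitudes, so memory doubles with every added qubit. A quick back-of-the-envelope sketch, assuming 16-byte complex128 amplitudes (the figures are illustrative, not from the article):

```python
def statevector_bytes(n_qubits: int, bytes_per_amplitude: int = 16) -> int:
    """Memory needed to hold a dense n-qubit statevector (complex128)."""
    return (2 ** n_qubits) * bytes_per_amplitude

GIB = 2 ** 30  # bytes in a gibibyte

# Each extra qubit doubles the requirement, which is why six more qubits
# turns a single-GPU job into a cluster-scale one.
print(statevector_bytes(30) // GIB)  # 16 GiB -- fits on one large GPU
print(statevector_bytes(36) // GIB)  # 1024 GiB (1 TiB) -- needs many GPUs
```

This doubling is also why simulating "about twice the number of qubits" is a far larger feat than it sounds: going from 18 to 36 qubits multiplies the memory footprint by roughly 260,000.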

Powerful, Yet Easy to Use

The so-called multi-node version of PennyLane, used in tandem with the NVIDIA cuQuantum SDK, simplifies the complex job of accelerating massive simulations of quantum systems.
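From the user's side, moving a simulation onto cuQuantum-accelerated GPUs is largely a matter of selecting the right PennyLane device. A minimal sketch, assuming the PennyLane-Lightning-GPU plugin and a CUDA-capable GPU are installed (the circuit itself is illustrative):

```python
import pennylane as qml

# "lightning.gpu" is the cuQuantum-backed simulator device from the
# PennyLane-Lightning-GPU plugin; the plugin's documentation also
# describes an mpi=True option for distributing a statevector
# across multiple GPUs and nodes.
dev = qml.device("lightning.gpu", wires=30)

@qml.qnode(dev)
def circuit(theta):
    qml.RX(theta, wires=0)            # rotate the first qubit
    for w in range(29):
        qml.CNOT(wires=[w, w + 1])    # entangle a 30-qubit chain
    return qml.expval(qml.PauliZ(29))

print(circuit(0.5))
```

The same circuit runs unchanged on other PennyLane backends, which is what lets researchers prototype on a workstation device and then scale the identical program up to a GPU cluster.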

“This opens the door to letting even my interns run some of the largest simulations — that’s why I’m so excited,” said Yoo, whose team has six projects using PennyLane in the pipeline.

Brookhaven’s Shinjae Yoo prepares to scale up his quantum work on the Perlmutter supercomputer.

His work aims to advance high-energy physics and machine learning. Other researchers use quantum simulations to take chemistry and materials science to new levels.

Quantum computing is alive in corporate R&D centers, too.

For example, Xanadu is helping companies like Rolls-Royce develop quantum algorithms to design state-of-the-art jet engines for sustainable aviation and Volkswagen Group invent more powerful batteries for electric cars.

Four More Projects on Perlmutter

Meanwhile, at NERSC, at least four other projects using multi-node PennyLane are in the works this year, according to Katherine Klymko, who leads the quantum computing program there. They include efforts from NASA Ames and the University of Alabama.

“Researchers in my field of chemistry want to study molecular complexes too large for classical computers to handle,” she said. “Tools like PennyLane let them extend what they can currently do classically to prepare for eventually running algorithms on large-scale quantum computers.”

Blending AI, Quantum Concepts

PennyLane is the product of a novel idea. It adapts popular deep learning techniques like backpropagation and tools like PyTorch to programming quantum computers.

Xanadu designed the code to run across as many types of quantum computers as possible, so the software got traction in the quantum community soon after its introduction in a 2018 paper.

“There was engagement with our content, making cutting-edge research accessible, and people got excited,” recalled Josh Izaac, director of product at Xanadu and a quantum physicist who was an author of the paper and a developer of PennyLane.

Calls for More Qubits

A common comment on the PennyLane forum these days is, “I want more qubits,” said Lee J. O’Riordan, a senior quantum software developer at Xanadu, responsible for PennyLane’s performance.

“When we started work in 2022 with cuQuantum on a single GPU, we got 10x speedups pretty much across the board … we hope to scale by the end of the year to 1,000 nodes — that’s 4,000 GPUs — and that could mean simulating more than 40 qubits,” O’Riordan said.

Scientists are still formulating the questions they’ll address with that performance — the kind of problem they like to have.

Companies designing quantum computers will use the boost to test ideas for building better systems. Their work feeds a virtuous circle, enabling new software features in PennyLane that, in turn, enable more system performance.

Scaling Well With GPUs

O’Riordan saw early on that GPUs were the best vehicle for scaling PennyLane’s performance. Last year, he co-authored a paper on a method for simulating more than 60 qubits by splitting a quantum program into many 30-qubit sub-circuits spread across more than 100 GPUs.

Lee J. O’Riordan, PennyLane developer at Xanadu

“We wanted to extend our work to even larger workloads, so when we heard NVIDIA was adding multi-node capability to cuQuantum, we wanted to support it as soon as possible,” he said.

Within four months, multi-node PennyLane was born.

“For a big, distributed GPU project, that was a great turnaround time. Everyone working on cuQuantum helped make the integration as easy as possible,” O’Riordan said.

The team is still collecting data, but so far on “sample-based workloads, we see almost linear scaling,” he said.

Or, as NVIDIA founder and CEO Jensen Huang might say, “The more you buy, the more you save.”
