EV Technology Goes into Hyperdrive with Mercedes-Benz EQS

Mercedes-Benz is calling on its long heritage of luxury to accelerate electric vehicle technology with the new EQS sedan.

The premium automaker took the wraps off the long-awaited flagship EV during a digital event today. The focal point of the vehicle is the MBUX Hyperscreen, an intuitive, personalized AI cockpit powered by NVIDIA.

The EQS is the first Mercedes-Benz to feature the “one bow” design, resembling a high-speed bullet train to increase efficiency as well as provide a quiet, comfortable interior experience.

The cabin is further transformed by the MBUX Hyperscreen — a single, 55-inch surface extending from the cockpit to the passenger seat. It delivers both safety and convenience by displaying all necessary functions at once.

Like the MBUX system recently unveiled with the new Mercedes-Benz S-Class, this extended-screen system runs on the high-performance, energy-efficient NVIDIA DRIVE platform for instantaneous AI processing and sharp graphics.

“The EQS is high tech in a true luxury shell,” said Ola Källenius, chairman of the Mercedes-Benz Board of Management.

With NVIDIA’s high-performance, energy-efficient compute, Mercedes-Benz was able to consolidate the varied and distributed cockpit components into one AI platform — with three separate screens under one glass surface — to simplify the architecture while creating more space to add new features.

Intelligence in Many Flavors

The MBUX Hyperscreen makes it easy to focus on the road ahead, yet delivers beautiful graphics for when attention to driving isn’t necessary.

Leveraging a “zero layer” design concept, the display surfaces 90 percent of the functions drivers and passengers need, reducing the driver’s reliance on buttons or voice commands. An augmented reality head-up display provides clear, 3D, turn-by-turn navigation, keeping drivers focused.

The deep neural networks powering the system process signals such as vehicle position, cabin temperature and time of day to prioritize certain features, like entertainment or point-of-interest recommendations, while always keeping navigation at the center of the display.
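
Mercedes-Benz hasn’t published the details of these networks, but the gist of context-based prioritization can be sketched with a toy scorer: map a context vector to per-feature relevance scores and surface the top suggestions, with navigation pinned regardless. Everything below (the feature names, weights and context signals) is hypothetical, for illustration only.

```python
# Toy sketch of context-based feature prioritization (illustrative only;
# not Mercedes-Benz's actual model). A tiny linear scorer ranks which
# suggestions to surface, while navigation stays pinned to the center.
import numpy as np

FEATURES = ["entertainment", "points_of_interest", "massage_seat", "climate"]

# Hypothetical learned weights mapping context -> feature relevance.
# Context vector: [is_evening, cabin_temp_delta, distance_from_home_km]
W = np.array([
    [0.9, 0.0, 0.2],   # entertainment: more relevant in the evening
    [0.1, 0.0, 0.8],   # points of interest: more relevant far from home
    [0.3, 0.1, 0.4],   # massage seat
    [0.0, 1.0, 0.0],   # climate: relevant when cabin temp is off-target
])

def prioritize(is_evening: float, cabin_temp_delta: float, dist_home_km: float):
    ctx = np.array([is_evening, cabin_temp_delta / 5.0, dist_home_km / 100.0])
    scores = W @ ctx
    probs = np.exp(scores) / np.exp(scores).sum()   # softmax over features
    order = np.argsort(-probs)                       # highest relevance first
    return [(FEATURES[i], float(probs[i])) for i in order]

# Evening drive, cabin 4 degrees too warm, 80 km from home:
for name, p in prioritize(1.0, 4.0, 80.0):
    print(f"{name}: {p:.2f}")
```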

The EQS will be capable of Level 3 automated driving with Mercedes-Benz DRIVE PILOT. For times when the driver’s attention doesn’t need to be on the road, the MBUX Hyperscreen provides crystal-clear graphics as well as an intelligent voice assistant for the utmost convenience.

The map feature allows drivers to view their route in 3D, down to the tiniest detail. It can also factor battery capacity, weather conditions and topography into route planning, suggesting charging points along the way as needed. Front-seat passengers get a dedicated screen for entertainment and ride information that doesn’t interfere with the driver’s display, and they can share content with others in the car.
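
Mercedes-Benz hasn’t disclosed the planner itself, but the idea of energy-aware routing can be sketched simply: scale each leg’s nominal consumption by weather and topography factors, and schedule a charging stop whenever the projected state of charge would dip below a reserve. The leg data, factors and thresholds below are invented for illustration.

```python
# Hedged sketch of energy-aware route planning (illustrative; not the
# production EQS planner). Each leg's energy estimate is scaled by
# hypothetical weather and topography factors; a charging stop is
# scheduled whenever the battery would drop below a reserve threshold.
from dataclasses import dataclass

@dataclass
class Leg:
    name: str
    base_kwh: float         # nominal consumption for this leg
    headwind: float         # multiplier, e.g. 1.1 = 10% extra for wind
    climb: float            # multiplier for elevation gain
    charger_at_start: bool  # charging point available before this leg?

def plan(legs: list[Leg], battery_kwh: float, reserve_kwh: float = 10.0):
    charge = battery_kwh
    stops = []
    for leg in legs:
        need = leg.base_kwh * leg.headwind * leg.climb
        if charge - need < reserve_kwh:
            if not leg.charger_at_start:
                raise RuntimeError(f"No charger before {leg.name}; replan route")
            stops.append(leg.name)
            charge = battery_kwh           # assume a full recharge at the stop
        charge -= need
    return stops, charge

legs = [
    Leg("A -> B", 30, 1.10, 1.00, False),
    Leg("B -> C", 45, 1.00, 1.25, True),
    Leg("C -> D", 40, 1.05, 1.00, True),
]
stops, remaining = plan(legs, battery_kwh=108.0)
print("Charge before legs:", stops, "| arrival charge: %.1f kWh" % remaining)
```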

“The MBUX Hyperscreen surprises with intelligence in many flavors,” said Sajjad Khan, executive vice president at Mercedes-Benz.

And with high-performance NVIDIA compute at MBUX Hyperscreen’s core, users can seamlessly experience these flavors, toggling between features without experiencing any lag or delay.

Ahead of the Curve

Equipped with the most powerful battery in the industry, the EQS pairs an estimated 478 miles of range with motors producing up to 516 horsepower. It was designed to lead its class in every metric.

The sedan’s sleek design optimizes aerodynamics to preserve battery efficiency and reduce cabin noise, while the powertrain delivers lightning-quick acceleration: 0 to 60 mph in 4 seconds.

Taking cues from its internal combustion engine sibling, the Mercedes-Benz S-Class, the EQS boasts the largest interior of any electric sedan on the market. The vehicle can recognize the driver using either facial recognition or a fingerprint scanner, and adjust seating and climate settings to personal preferences. It also features customizable ambient lighting to match any mood.

The EQS is slated to arrive at U.S. dealerships this summer, ushering in a new generation of intelligent, electric luxury vehicles.

Knight Rider Rides a GAN: Bringing KITT to Life with AI, NVIDIA Omniverse

Fasten your seatbelts. NVIDIA Research is revving up a new deep learning engine that creates 3D object models from standard 2D images — and can bring iconic cars like the Knight Rider’s AI-powered KITT to life — in NVIDIA Omniverse.

Developed by the NVIDIA AI Research Lab in Toronto, the GANverse3D application inflates flat images into realistic 3D models that can be visualized and controlled in virtual environments. This capability could help architects, creators, game developers and designers easily add new objects to their mockups without needing expertise in 3D modeling, or a large budget to spend on renderings.

A single photo of a car, for example, could be turned into a 3D model that can drive around a virtual scene, complete with realistic headlights, tail lights and blinkers.

To generate a dataset for training, the researchers harnessed a generative adversarial network, or GAN, to synthesize images depicting the same object from multiple viewpoints — like a photographer who walks around a parked vehicle, taking shots from different angles. These multi-view images were plugged into a rendering framework for inverse graphics, the process of inferring 3D mesh models from 2D images.

Once trained on multi-view images, GANverse3D needs only a single 2D image to predict a 3D mesh model. This model can be used with a 3D neural renderer that gives developers control to customize objects and swap out backgrounds.

When imported as an extension in the NVIDIA Omniverse platform and run on NVIDIA RTX GPUs, GANverse3D can be used to turn any 2D image into a 3D model, like the beloved crime-fighting car KITT from the popular 1980s Knight Rider TV show.

Previous models for inverse graphics have relied on 3D shapes as training data.

Instead, with no aid from 3D assets, “We turned a GAN model into a very efficient data generator so we can create 3D objects from any 2D image on the web,” said Wenzheng Chen, research scientist at NVIDIA and lead author on the project.

“Because we trained on real images instead of the typical pipeline, which relies on synthetic data, the AI model generalizes better to real-world applications,” said NVIDIA researcher Jun Gao, an author on the project.

The research behind GANverse3D will be presented at two upcoming conferences: the International Conference on Learning Representations in May, and the Conference on Computer Vision and Pattern Recognition, in June.

From Flat Tire to Racing KITT 

Creators in gaming, architecture and design rely on virtual environments like the NVIDIA Omniverse simulation and collaboration platform to test out new ideas and visualize prototypes before creating their final products. With Omniverse Connectors, developers can use their preferred 3D applications in Omniverse to simulate complex virtual worlds with real-time ray tracing.

But not every creator has the time and resources to create 3D models of every object they sketch. The cost of capturing the number of multi-view images necessary to render a showroom’s worth of cars, or a street’s worth of buildings, can be prohibitive.

That’s where a trained GANverse3D application can be used to convert standard images of a car, a building or even a horse into a 3D figure that can be customized and animated in Omniverse.

To recreate KITT, the researchers simply fed the trained model an image of the car, letting GANverse3D predict a corresponding 3D textured mesh, as well as different parts of the vehicle such as wheels and headlights. They then used NVIDIA Omniverse Kit and NVIDIA PhysX tools to convert the predicted texture into high-quality materials that give KITT a more realistic look and feel, and placed it in a dynamic driving sequence.

“Omniverse allows researchers to bring exciting, cutting-edge research directly to creators and end users,” said Jean-Francois Lafleche, deep learning engineer at NVIDIA. “Offering GANverse3D as an extension in Omniverse will help artists create richer virtual worlds for game development, city planning or even training new machine learning models.”

GANs Power a Dimensional Shift

Because real-world datasets that capture the same object from different angles are rare, most AI tools that convert images from 2D to 3D are trained using synthetic 3D datasets like ShapeNet.

To obtain multi-view images from real-world data — like images of cars available publicly on the web — the NVIDIA researchers instead turned to a GAN model, manipulating its neural network layers to turn it into a data generator.

The team found that opening the first four layers of the neural network and freezing the remaining 12 caused the GAN to render images of the same object from different viewpoints.

Keeping the first four layers frozen and the other 12 layers variable caused the neural network to generate different images from the same viewpoint. By manually assigning standard viewpoints, with vehicles pictured at a specific elevation and camera distance, the researchers could rapidly generate a multi-view dataset from individual 2D images.
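
The paper builds on a StyleGAN-style generator, where each layer consumes its own style input. As a mechanical illustration of the mixing trick (a toy stand-in, not the actual network), the sketch below resamples the styles feeding the first four layers to vary viewpoint-like factors while freezing the remaining 12 to hold the object fixed.

```python
# Mechanical illustration of layer-wise style mixing (a toy stand-in,
# not the StyleGAN used in the paper). Each "layer" folds in its own
# style vector; swapping styles on the first four layers varies the
# viewpoint-like factors while the frozen 12 keep the object the same.
import numpy as np

rng = np.random.default_rng(0)
N_LAYERS, STYLE_DIM = 16, 8
weights = [rng.normal(size=(STYLE_DIM, STYLE_DIM)) for _ in range(N_LAYERS)]

def generate(styles):
    """Toy generator: folds one style vector in per layer."""
    x = np.zeros(STYLE_DIM)
    for w, s in zip(weights, styles):
        x = np.tanh(w @ (x + s))
    return x

# One "car": a fixed style vector per layer.
content = [rng.normal(size=STYLE_DIM) for _ in range(N_LAYERS)]

# Multi-view set: resample styles for the first 4 layers (viewpoint),
# keep the remaining 12 frozen (same object, new camera angles).
views = []
for _ in range(6):
    view_styles = [rng.normal(size=STYLE_DIM) for _ in range(4)] + content[4:]
    views.append(generate(view_styles))

print(f"{len(views)} renderings of one object from different viewpoints")
```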

The final model, trained on 55,000 car images generated by the GAN, outperformed an inverse graphics network trained on the popular Pascal3D dataset.

Read the full ICLR paper, authored by Wenzheng Chen, fellow NVIDIA researchers Jun Gao and Huan Ling, Sanja Fidler, director of NVIDIA’s Toronto research lab, University of Waterloo student Yuxuan Zhang, Stanford student Yinan Zhang and MIT professor Antonio Torralba. Additional collaborators on the CVPR paper include Jean-Francois Lafleche, NVIDIA researcher Kangxue Yin and Adela Barriuso.

The NVIDIA Research team consists of more than 200 scientists around the globe, focusing on areas such as AI, computer vision, self-driving cars, robotics and graphics. Learn more about the company’s latest research and industry breakthroughs in NVIDIA CEO Jensen Huang’s keynote address at this week’s GPU Technology Conference.

GTC registration is free, and open through April 23. Attendees will have access to on-demand content through May 11.

Knight Rider content courtesy of Universal Studios Licensing LLC. 

NVIDIA RTX Lights Up the Night in Stunning Demos at GTC

NVIDIA is putting complex night scenes in a good light.

A demo at GTC21 this week showcased how NVIDIA RTX Direct Illumination (RTXDI) technology is paving the way for realistic lighting in graphics. The clip shows thousands of dynamic lights as they move, turn on and off, change color, show reflections and cast shadows.

People can also experience the latest technologies in graphics with the new RTX Technology Showcase, a playable demo that allows developers to explore an attic scene and interact with elements while seeing the visual impact of real-time ray tracing.

Hero Lighting Gets a Boost with RTXDI

Running on an NVIDIA GeForce RTX 3090 GPU, the RTXDI demo shows how dynamic, animated lights can be rendered in real time.

Creating realistic night scenes in computer graphics requires many lights to be simulated simultaneously. RTXDI makes this practical, allowing developers and artists to create cinematic visuals with realistic lighting, incredible reflections and accurate shadows through real-time ray tracing.

Traditionally, creating realistic lighting required complex baking solutions and was limited to a small number of “hero” lights. RTXDI removes such barriers by combining ray tracing with a resampling algorithm called reservoir-based spatio-temporal importance resampling (ReSTIR) to create realistic dynamic lighting.

Developers and artists can now easily integrate animated and color-changing lights into their scenes, without baking or relying on just a handful of hero lights.

Based on NVIDIA research, RTXDI enables direct lighting from millions of moving light sources without requiring complex data structures to be built. From fireworks in the sky to billboards in New York’s Times Square, all of that complex lighting can now be captured in real time with RTXDI.
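
The heart of ReSTIR is streaming weighted reservoir sampling: each pixel examines a small stream of candidate lights and keeps just one, with probability proportional to its estimated contribution, rather than tracing shadow rays to every light in the scene. Here is a single-pixel sketch of that resampling step (simplified; the production algorithm also reuses reservoirs across neighboring pixels and previous frames):

```python
# Hedged sketch of the reservoir-based resampling at the heart of ReSTIR
# (one pixel, made-up light weights; the real algorithm additionally
# shares reservoirs spatially and temporally).
import random

class Reservoir:
    def __init__(self):
        self.sample = None   # the one light we keep
        self.w_sum = 0.0     # running sum of candidate weights
        self.count = 0

    def update(self, light, weight):
        """Stream one candidate; keep it with probability weight / w_sum."""
        self.w_sum += weight
        self.count += 1
        if random.random() < weight / self.w_sum:
            self.sample = light

random.seed(1)
# Thousands of lights, but only a handful of candidates per pixel.
lights = [{"id": i, "intensity": random.uniform(0.1, 10.0)} for i in range(10000)]

r = Reservoir()
for _ in range(32):                       # 32 candidates, not 10,000 shadow rays
    light = random.choice(lights)
    r.update(light, light["intensity"])   # weight ~ unshadowed contribution

print(f"Shade pixel with light {r.sample['id']} (saw {r.count} of {len(lights)})")
```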

And RTXDI works even better when combined with additional NVIDIA technologies.

Learn more and check out RTXDI, which is now available.

Hit the Light Spots in RTX Technology Showcase

The RTX Technology Showcase features discrete ray-tracing capabilities, so users can choose to turn on specific technologies and immediately view their effects within the attic scene.

Watch the RTX Technology Showcase in action.

Developers can download the demo to discover the latest and greatest in ray-tracing innovations with RTX Technology Showcase.

Check out other GTC demos that highlight the latest technologies in graphics, and a full track for game developers here. Watch a replay of the GTC keynote address by NVIDIA CEO Jensen Huang to catch up on the latest graphics announcements.

Healthcare Headliners Put AI Under the Microscope at GTC

Two revolutions are meeting in the field of life sciences: the explosion of digital data and the rise of AI computing to help healthcare professionals make sense of it all, said Daphne Koller and Kimberly Powell at this week’s GPU Technology Conference.

Powell, NVIDIA’s vice president of healthcare, presented an overview of AI innovation in medicine that highlighted advances in drug discovery, medical imaging, genomics and intelligent medical instruments.

“There’s a digital biology revolution underway, and it’s generating enormous data, far too complex for human understanding,” she said. “With algorithms and computations at the ready, we now have the third ingredient — data — to truly enter the AI healthcare era.”

And Koller, a Stanford adjunct professor and CEO of the AI drug discovery company Insitro, focused on AI solutions in her talk outlining the challenges of drug development and the ways in which predictive machine learning models can enable a better understanding of disease-related biological data.

Digital biology “allows us to measure biological systems in entirely new ways, interpret what we’re measuring using data science and machine learning, and then bring that back to engineer biology to do things that we’d never otherwise be able to do,” she said.

Watch replays of these talks — part of a packed lineup of more than 100 healthcare sessions among 1,600 on-demand sessions — by registering free for GTC through April 23. Registration isn’t required to watch a replay of the keynote address by NVIDIA CEO Jensen Huang.

Data-Driven Insights into Disease

Recent advancements in biotechnology — including CRISPR, induced pluripotent stem cells and more widespread availability of DNA sequencing — have allowed scientists to gather “mountains of data,” Koller said in her talk, “leaving us with a problem of how to interpret those data.”

“Fortunately, this is where the other revolution comes in, which is that using machine learning to interpret and identify patterns in very large amounts of data has transformed virtually every sector of our existence,” she said.

The data-intensive process of drug discovery requires researchers to understand the biological structure of a disease, and then vet potential compounds that could be used to bind with a critical protein along the disease pathway. Finding a promising therapeutic is a complex optimization problem, and despite the exponential rise in the amount of digital data available in the last decade or two, the process has been getting slower and more expensive.

Daphne Koller, CEO of Insitro

Known as Eroom’s law, this observation holds that the research and development cost of bringing a new drug to market has trended upward since the 1980s, costing pharmaceutical companies ever more time and money. Koller says that’s because of all the potential drug candidates that fail to get approved for use.

“What we aim to do at Insitro is to understand those failures, and try and see whether machine learning — combined with the right kind of data generation — can get us to make better decisions along the path and avoid a lot of those failures,” she said. “Machine learning is able to see things that people just cannot see.”

Bringing AI to vast datasets can help scientists determine how physical characteristics like height and weight, known as phenotypes, relate to genetic variants, known as genotypes. In many cases, “these associations give us a hint about the causal drivers of disease,” said Koller.

She gave the example of NASH, or nonalcoholic steatohepatitis, a common liver condition related to obesity and diabetes. To study underlying causes and potential treatments for NASH, Insitro worked with biopharmaceutical company Gilead to apply machine learning to liver biopsy and RNA sequencing data from clinical trials representing hundreds of patients.

The team created a machine learning model to analyze biopsy images and capture a quantitative representation of a patient’s disease state, and found that even with only weak supervision, the AI’s predictions aligned with the scores assigned by clinical pathologists. The models could even differentiate between images with and without NASH, a distinction that’s difficult to make with the naked eye.
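
Insitro hasn’t published this model’s code; as a hedged sketch of the general recipe, the snippet below shows weakly supervised, multiple-instance-style training in PyTorch, where many tiles from one biopsy image share a single slide-level label and a pooled score drives the loss:

```python
# Hedged sketch of weakly supervised slide scoring (a generic
# multiple-instance learning setup, not Insitro's actual model).
# Tiles from a biopsy image share one slide-level label; tile scores
# are averaged into a slide prediction before computing the loss.
import torch
import torch.nn as nn

class TileScorer(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1)
        )
        self.head = nn.Linear(16, 1)

    def forward(self, tiles):                 # tiles: (n_tiles, 3, H, W)
        h = self.features(tiles).flatten(1)   # (n_tiles, 16)
        return self.head(h).squeeze(-1)       # per-tile logits

model = TileScorer()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

# One synthetic "slide": 64 tiles, slide-level label 1.0 (e.g., NASH).
tiles = torch.randn(64, 3, 32, 32)
slide_label = torch.tensor(1.0)

tile_logits = model(tiles)
slide_logit = tile_logits.mean()              # weak supervision: one label
loss = loss_fn(slide_logit, slide_label)      # gradient reaches all 64 tiles
loss.backward()
opt.step()
print(f"slide loss: {loss.item():.3f}")
```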

Accelerating the AI Healthcare Era

It’s not enough to just have abundant data to create an effective deep learning model for medicine, however. Powell’s GTC talk focused on domain-specific computational platforms — like the NVIDIA Clara application framework for healthcare — that are tailored to the needs and quirks of medical datasets.

The NVIDIA Clara Discovery suite of AI libraries harnesses transformer models, popular in natural language processing, to parse biomedical data. Using the NVIDIA Megatron framework for training transformers helps researchers build models with billions of parameters, like MegaMolBART, an NLP generative drug discovery model in development by NVIDIA and AstraZeneca for use in reaction prediction, molecular optimization and de novo molecular generation.

Kimberly Powell, VP of healthcare at NVIDIA

University of Florida Health has also used the NVIDIA Megatron framework and NVIDIA BioMegatron pre-trained model to develop GatorTron, the largest clinical language model to date, which was trained on more than 2 million patient records with more than 50 million interactions.

“With biomedical data at scale of petabytes, and learning at the scale of billions and soon trillions of parameters, transformers are helping us do and find the unexpected,” Powell said.

Clinical decisions, too, can be supported by AI insights that parse data from health records, medical imaging instruments, lab tests, patient monitors and surgical procedures.

“No one hospital’s the same, and no healthcare practice is the same,” Powell said. “So we need an entire ecosystem approach to developing algorithms that can predict the future, see the unseen, and help healthcare providers make complex decisions.”

The NVIDIA Clara framework has more than 40 domain-specific pretrained models available in the NGC catalog, as well as NVIDIA Federated Learning, which allows different institutions to collaborate on AI model development without sharing patient data with each other, overcoming challenges of data governance and privacy.
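
The snippet below is a generic federated-averaging sketch, not the NVIDIA Clara federated learning API: each site fits a small model on data that never leaves its premises, and a central server averages only the returned weights.

```python
# Generic federated-averaging sketch (illustrative; not the NVIDIA Clara
# federated learning API). Each hospital computes an update on local
# data; only model weights travel to the server, never patient records.
import numpy as np

rng = np.random.default_rng(0)

def local_update(global_w, X, y, lr=0.1, steps=20):
    """A few steps of logistic-regression SGD on one site's private data."""
    w = global_w.copy()
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

# Three hospitals with private (here, synthetic) datasets.
sites = [(rng.normal(size=(100, 5)), rng.integers(0, 2, 100).astype(float))
         for _ in range(3)]

global_w = np.zeros(5)
for round_ in range(10):
    # Each site trains locally; the server averages the returned weights.
    updates = [local_update(global_w, X, y) for X, y in sites]
    global_w = np.mean(updates, axis=0)

print("federated model weights:", np.round(global_w, 3))
```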

And to power the next generation of intelligent medical instruments, the newly available NVIDIA Clara AGX developer kit helps hospitals develop and deploy AI across smart sensors such as endoscopes, ultrasound devices and microscopes.

“As sensor technology continues to innovate, so must the computing platforms that process them,” Powell said. “With AI, instruments can become smaller, cheaper and guide an inexperienced user through the acquisition process.”

These AI-driven devices could help reach areas of the world that lack access to many medical diagnostics today, she said. “The instruments that measure biology, see inside our bodies and perform surgeries are becoming intelligent sensors with AI and computing.”

GTC registration is open through April 23. Attendees will have access to on-demand content through May 11. For more, subscribe to NVIDIA healthcare news, and follow NVIDIA Healthcare on Twitter.

You Put a Spell on Me: GFN Thursdays Are Rewarding, 15 New Games Added This Week

This GFN Thursday, the day each week when GeForce NOW members learn what new games and updates are streaming from the cloud, we’re adding 15 games to the service, along with new content, including NVIDIA RTX and DLSS support in a number of games.

Plus, we have a GeForce NOW Reward for Spellbreak from our friends at Proletariat.

Rewards Are Rewarding

One of the benefits of being a GeForce NOW member is gaining access to exclusive rewards. These can include free games, in-game content, discounts and more.

This week, we’re offering the Noble Oasis outfit, a rare Spellbreak cosmetic that’s exclusive to GeForce NOW members.

Unleash your inner battlemage in Spellbreak, streaming on GeForce NOW.

Spellbreak Chapter 2: The Fracture is streaming on GeForce NOW. This massive update, released just last week, includes Dominion, the new 5 vs 5 team capture-point game mode. It also introduced Leagues, the new competitive ranking mode where players work their way up through Bronze, Silver and all the way to Legend. There were new map updates and gameplay changes as well, making it the game’s biggest update yet.

Founders members will have first crack at the reward, starting today. It’s another benefit to thank you for gaming with us. Priority members are next in line and can look for their opportunity to redeem starting on Friday, April 16. Free members gain access on Tuesday, April 20.

It’s first come, first served, so be sure to redeem your reward as soon as you have access!

The Spellbreak in-game reward is the latest benefit for GeForce NOW members; others have included rewards for Discord, ARK: Survival Evolved, Hyperscape, Warface, Warframe and more.

Signing up for GeForce NOW Rewards is simple. Log in to your NVIDIA GeForce NOW account, click “Update Rewards Settings” and check the box.

Updates to Your Catalog

GeForce NOW members are getting updates to a few games this week in the form of new expansions or RTX support.

Path of Exile, the popular free-to-play online action RPG, is getting an expansion in Path of Exile: Ultimatum. It contains the Ultimatum challenge league, eight new Skill and Support Gems, improvements to Vaal Skills, an overhaul of past league reward systems, dozens of new items and much more.

Meanwhile, three games are adding RTX support with real-time, ray-traced graphics and/or NVIDIA DLSS. Mortal Shell gets the full complement of RTX support, while Medieval Dynasty and Observer System Redux get DLSS support to improve image quality while maintaining framerate.

Let’s Play Today

Nigate Tale is one of 15 games joining the GeForce NOW library.

Of course, GFN Thursday has even more games in store for members. This week we welcomed Nigate Tale, day-and-date with its Steam release on Tuesday. It’s currently available for 15 percent off through April 18. Members can also look for 14 additional games joining our library this week.

Excited for the reward? Looking forward to streaming one of this week’s new releases or new content? Let us know on Twitter or in the comments below.

Accelerated Portfolios: NVIDIA Inception VC Alliance Connects Top Investors with Leading AI Startups

To better connect venture capitalists with NVIDIA and promising AI startups, we’ve introduced the NVIDIA Inception VC Alliance. This initiative, which VCs can apply to join now, aims to fast-track the growth of thousands of AI startups around the globe by serving as a critical nexus between the two communities.

AI adoption is growing across industries and startup funding has, of course, been booming. Investment in AI companies increased 52 percent last year to $52.1 billion, according to PitchBook.

A thriving AI ecosystem depends on both VCs and startups. The alliance aims to help investment firms identify and support leading AI startups early as part of their effort to realize meaningful returns down the line.

Introduced at GTC’s AI Day for VCs 

The alliance was unveiled this week by Jeff Herbst, vice president of business development and head of Inception at NVIDIA, speaking at the AI Day for VCs at GTC, NVIDIA’s annual tech conference.

“Being at the forefront of AI and data science puts NVIDIA Inception in a position to help educate and nurture both VCs and startups, which is why we’ve launched the NVIDIA Inception VC Alliance,” Herbst said. “By creating opportunities for our VC partners to connect with our Inception members, we strive to help advance the businesses of both. The reality is that startups need VCs and VCs need startups and we want to help be a part of that.”

Herbst said that founding members of the alliance include venture firms NEA, Acrew, Mayfield, Madrona Venture Group, In-Q-Tel, Pitango, Vanedge Capital and OurCrowd.

“AI startups are at the forefront of innovation and the NVIDIA Inception VC Alliance will bring them even closer access to the leading venture capital firms investing in AI,” said Greg Papadopoulos, venture partner at NEA. “Exposure to world-class capabilities from VC partners creates a fast-track for AI startups to accelerate their business in a way that benefits their stakeholders, customers and investors.”

Watch a recap of the announcement during the GTC session, “NVIDIA Inception and the Venture Capital Ecosystem – Supporting the Next Generation of AI Startups.”

Tapping Into NVIDIA’s Vast Startup Ecosystem

The NVIDIA Inception VC Alliance is part of the NVIDIA Inception program, an acceleration platform for over 7,500 startups working in AI, data science and HPC, representing every major industry and located in more than 90 countries.

Among its benefits, the alliance offers VCs exclusive access to high-profile events, visibility into top startups actively raising funds, and access to growth resources for portfolio companies.

VC alliance members can further nurture their portfolios by having their startups join NVIDIA Inception, which offers go-to-market support, infrastructure discounts and credits, AI training through NVIDIA’s Deep Learning Institute, and technology assistance.

Interested VCs are invited to join the NVIDIA Inception VC Alliance by applying here.

NVIDIA Accelerates World’s First TOP500 Academic Cloud-Native Supercomputer to Advance Research at Cambridge University

Scientific discovery powered by supercomputing has the potential to transform the world with research that benefits science, industry and society. A new open, cloud-native supercomputer at Cambridge University offers unrivaled performance that will enable researchers to pursue exploration like never before.

The Cambridge Service for Data Driven Discovery, or CSD3 for short, is a UK National Research Cloud and one of the world’s most powerful academic supercomputers. It’s hosted at the University of Cambridge and funded by UKRI via STFC DiRAC, STFC IRIS, EPSRC, MRC and UKAEA.

The site, home to the U.K.’s largest academic research cloud environment, is now being enhanced with a new 4-petaflops Dell-EMC system featuring NVIDIA A100 GPUs, NVIDIA BlueField DPUs and NVIDIA InfiniBand networking, which will deliver secure, multi-tenant, bare-metal high performance computing, AI and data analytics services for a broad cross section of the U.K. national research community. CSD3 employs a new cloud-native supercomputing platform enabled by NVIDIA and a revolutionary cloud HPC software stack, called Scientific OpenStack, developed by the University of Cambridge and StackHPC with funding from the DiRAC HPC Facility and the IRIS Facility.

The new system is projected to deliver 4 petaflops of performance at deployment, ranking CSD3 among the top 500 supercomputers in the world. In total, CSD3 uses NVIDIA GPUs and x86 CPUs to provide over 10 petaflops of performance, and it includes the U.K.’s fastest solid-state storage array, based on the Dell/Cambridge data accelerator.

The CSD3 provides open and secure access for researchers aiming to tackle some of the world’s most challenging problems across diverse fields such as astrophysics, nuclear fusion power generation development and lifesaving clinical medicine applications. It will advance scientific exploration using converged simulation, AI and data analytics workflows that members of the research community can access more easily and securely without sacrificing application performance or slowing work.

NVIDIA DPUs, HDR InfiniBand Power Next-Generation Systems

CSD3 is enabled by NVIDIA BlueField-2 DPUs connected over NVIDIA HDR 200G InfiniBand, which offload infrastructure management, such as security policies and storage frameworks, from the host while providing acceleration and isolation for workloads to maximize input/output performance.

“Providing an easy and secure way to access the immense computing power of CSD3 is crucial to ushering in a new generation of scientific exploration that serves both the scientific community and industry in the U.K.,“ said Paul Calleja, director of Research Computing Services at Cambridge University. “The extreme performance of NVIDIA InfiniBand, together with the offloading, isolation and acceleration of workloads provided by BlueField DPUs, combined with our ‘Scientific OpenStack’ has enabled Cambridge University to provide a world-class cloud-native supercomputer for driving research that will benefit all of humankind.”

Networking performance is further accelerated by the In-Network Computing engines of NVIDIA HDR InfiniBand, providing optimal bare-metal performance while natively supporting multi-node tenant isolation. CSD3 also takes advantage of the latest generation of the Dell EMC PowerEdge portfolio, with Dell EMC PowerEdge C6520 and PowerEdge XE8545 servers, both optimized for data-intensive and AI workloads.

CSD3 is expected to be operational later this year. Learn more about CSD3.

Cloud-Native Supercomputing Is Here: So, What’s a Cloud-Native Supercomputer?

Cloud-native supercomputing is the next big thing in supercomputing, and it’s here today, ready to tackle the toughest HPC and AI workloads.

The University of Cambridge is building a cloud-native supercomputer in the U.K. Two teams of researchers in the U.S. are separately developing key software elements for cloud-native supercomputing.

The Los Alamos National Laboratory, as part of its ongoing collaboration with the UCF Consortium, is helping to deliver capabilities that accelerate data algorithms. Ohio State University is updating Message Passing Interface software to enhance scientific simulations.

NVIDIA is making cloud-native supercomputers available to users worldwide in the form of its latest DGX SuperPOD. It packs key ingredients such as the NVIDIA BlueField-2 data processing unit (DPU) now in production.

So, What Is Cloud-Native Supercomputing?

Like Reese’s treats that wrap peanut butter in chocolate, cloud-native supercomputing combines the best of two worlds.

Cloud-native supercomputers blend the power of high performance computing with the security and ease of use of cloud computing services.

Put another way, cloud-native supercomputing provides an HPC cloud with a system as powerful as a TOP500 supercomputer that multiple users can share securely, without sacrificing the performance of their applications.

A BlueField DPU supports offload of security, communications and management tasks to create an efficient cloud-native supercomputer.

What Can Cloud-Native Supercomputers Do?

Cloud-native supercomputers pack two key features.

First, they let multiple users share a supercomputer while ensuring that each user’s workload stays secure and private. It’s a capability known as “multi-tenant isolation” that’s available in today’s commercial cloud computing services. But it’s typically not found in HPC systems used for technical and scientific workloads where raw performance is the top priority and security services once slowed operations.

Second, cloud-native supercomputers use DPUs to handle tasks such as storage, security for tenant isolation and systems management. This frees the CPU to focus on processing tasks, maximizing overall system performance.

The result is a supercomputer that enables native cloud services without a loss in performance. Looking forward, DPUs can handle additional offload tasks, so systems maintain peak efficiency running HPC and AI workloads.

How Do Cloud-Native Supercomputers Work?

Under the hood, today’s supercomputers couple two kinds of brains — CPUs and accelerators, typically GPUs.

Accelerators pack thousands of processing cores to speed parallel operations at the heart of many AI and HPC workloads. CPUs are built for the parts of algorithms that require fast serial processing. But over time they’ve become burdened with growing layers of communications tasks needed to manage increasingly large and complex systems.

Cloud-native supercomputers include a third brain to build faster, more efficient systems. They add DPUs that offload security, communications, storage and other jobs modern systems need to manage.

A Commuter Lane for Supercomputers

In traditional supercomputers, a computing job sometimes has to wait while the CPU handles a communications task. It’s a familiar problem that generates what’s called system noise.

In cloud-native supercomputers, computing and communications flow in parallel. It’s like opening a third lane on a highway to help all traffic flow more smoothly.

Early tests show cloud-native supercomputers can perform HPC jobs 1.4x faster than traditional ones, according to work at the MVAPICH lab at Ohio State, a specialist in HPC communications. The lab also showed cloud-native supercomputers achieve a 100 percent overlap of compute and communications functions, 99 percent higher than existing HPC systems.
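
The overlap the lab measures comes from letting communication progress while the processor keeps computing. A minimal way to express that pattern from application code is non-blocking MPI, sketched below with mpi4py (illustrative; a DPU-aware stack like MVAPICH’s additionally progresses the transfer in hardware):

```python
# Hedged sketch of compute/communication overlap with non-blocking MPI
# (mpi4py; run with `mpirun -np 2 python overlap.py`). The non-blocking
# calls let local compute proceed while the exchange is in flight.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
peer = (rank + 1) % size

send_buf = np.full(1_000_000, rank, dtype=np.float64)
recv_buf = np.empty_like(send_buf)

# Post the exchange, then compute on interior data while it progresses.
reqs = [comm.Isend(send_buf, dest=peer), comm.Irecv(recv_buf, source=peer)]
interior = np.sin(send_buf).sum()      # "compute" overlapping the transfer
MPI.Request.Waitall(reqs)              # wait only when the data is needed

boundary = recv_buf[:10].sum()         # now safe to touch received data
if rank == 0:
    print(f"overlapped compute done: interior={interior:.2f}, boundary={boundary:.2f}")
```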

Experts Speak on Cloud-Native Supercomputing

That’s why around the world, cloud-native supercomputing is coming online.

“We’re building the first academic cloud-native supercomputer in Europe to offer bare-metal performance with cloud-native InfiniBand services,” said Paul Calleja, director of computing at the University of Cambridge.

“This system, which would rank among the top 100 in the November 2020 TOP500 list, will enable our researchers to optimize their applications using the latest advances in supercomputing architecture,” he added.

HPC specialists are paving the way for further advances in cloud-native supercomputers.

“The UCF consortium of industry and academic leaders is creating the production-grade communication frameworks and open standards needed to enable the future for cloud-native supercomputing,” said Steve Poole, speaking in his role as director of the Unified Communication Framework, whose members include representatives from Arm, IBM, NVIDIA, U.S. national labs and U.S. universities.

“Our tests show cloud-native supercomputers have the architectural efficiencies to lift supercomputers to the next level of HPC performance while enabling new security features,” said Dhabaleswar K. (DK) Panda, a professor of computer science and engineering at Ohio State and lead of its Network-Based Computing Laboratory.

Learn More About Cloud-Native Supercomputers

To learn more, check out our technical overview on cloud-native supercomputing. You can also find more information online about the new system at the University of Cambridge and NVIDIA’s new cloud-native supercomputer.

And to get the big picture on the latest advances in HPC, AI and more, watch the GTC keynote.

 

From Scientific Analysis to Artistic Renderings, NVIDIA Omniverse Accelerates HPC Visualization with New ParaView Connector

Whether helping the world understand our most immediate threats, like COVID-19, or seeing the future of landing humans on Mars, researchers are increasingly leaning on scientific visualization to analyze, understand and extract scientific insights.

With large-scale simulations generating tens or even hundreds of terabytes of data, and with team members dispersed around the globe, researchers need tools that can both enhance these visualizations and help them work simultaneously across different high performance computing systems.

NVIDIA Omniverse is a real-time collaboration platform that lets users share 2D and 3D simulation data in Universal Scene Description (USD) format from their preferred content creation and visualization applications. Global teams can use Omniverse to view, interact with and update the same dataset with a live connection, making collaboration truly interactive.

Omniverse ParaView Connector

The platform has expanded to address the scientific visualization community and now includes a connector to ParaView, one of the world’s most popular scientific visualization applications. Researchers use ParaView on their local workstations or on HPC systems to analyze large datasets for a variety of domains, including astrophysics, climate and weather, fluid dynamics and structural analysis.

With the availability of the Omniverse ParaView Connector, announced at GTC21, researchers can boost their productivity and speed their discoveries. Large datasets no longer need to be downloaded and exchanged, and colleagues can get instantaneous feedback, since Omniverse users work in the same cloud-hosted workspace.

The NVIDIA Omniverse pipeline

Users can upload their USD-format data to the Omniverse Nucleus DB from various application connectors, including the ParaView Connector. Clients then connect to Omniverse Kit and take advantage of:

  • Photorealistic visuals – Users can leverage a variety of core NVIDIA technologies such as real-time ray tracing, photorealistic materials, depth of field, and advanced lighting and shading through the Omniverse platform’s components such as Omniverse RTX Renderer. This enables researchers to better visualize and understand the results of their simulations leading to deeper insights.
  • Access to high-end visualization tools – Omniverse users can open and interact with USD files through a variety of popular applications like SideFX Houdini, Autodesk Maya and NVIDIA IndeX. See documentation on how to work with various applications in Omniverse to maximize analysis.
  • Interactivity at scale – Analyzing part of a dataset at a time through batched renderings is time-consuming. And traditional applications are too slow to render features like ray tracing, soft shadows and depth of field in real time, which are required for a fast and uninterrupted analysis. Now, users can intuitively interact with entire datasets in their original resolution at high frame rates for better and faster discoveries.

NVIDIA IndeX provides interactive visualization for large-scale volumetric data, allowing users to zoom in on the smallest details for any timestep in real time. With IndeX soon coming to Omniverse, users will be able to take advantage of both technologies for better and faster scientific analysis. This GTC session will go over what researchers can unlock when IndeX connects to Omniverse.

Visualization of Mars Lander using NVIDIA IndeX in NVIDIA Omniverse. Simulation data courtesy of NASA.
  • Real-time collaboration – Omniverse simplifies workflows by eliminating the need to download data on different systems. It also increases productivity by allowing researchers on different systems to visualize, analyze and modify the same data at the same time.
  • Publish cinematic visuals – Outreach is an essential part of scientific publications. With high-end rendering tools on the Omniverse platform, researchers and artists can interact in real time to transform their work into cinematic visuals that are easy for wide audiences to understand.
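
To make the shared-workspace idea concrete, the sketch below authors a trivial USD file with Pixar’s pxr Python module, the kind of asset a connector publishes and a Nucleus server shares with collaborators. It’s a generic USD example, not the ParaView Connector’s internals.

```python
# Minimal USD authoring sketch (Pixar's pxr module; illustrative of the
# interchange format Omniverse consumes, not the ParaView Connector
# itself). A stage written this way could be placed on a Nucleus server
# for collaborators to open with a live connection.
from pxr import Usd, UsdGeom

stage = Usd.Stage.CreateNew("simulation_frame.usda")
UsdGeom.SetStageUpAxis(stage, UsdGeom.Tokens.y)

world = UsdGeom.Xform.Define(stage, "/World")

# A single triangle standing in for an exported simulation surface.
mesh = UsdGeom.Mesh.Define(stage, "/World/SimSurface")
mesh.CreatePointsAttr([(0, 0, 0), (1, 0, 0), (0, 1, 0)])
mesh.CreateFaceVertexCountsAttr([3])
mesh.CreateFaceVertexIndicesAttr([0, 1, 2])

stage.SetDefaultPrim(world.GetPrim())
stage.GetRootLayer().Save()
print("wrote simulation_frame.usda")
```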

“Traditionally, scientists generate visualizations that are useful for data analysis, but are not always aesthetic and straightforward to understand by a broader audience,” said Brad Carvey, an Emmy and Addy Award-winning visualization research engineer at Sandia National Labs. “To generate a range of visualizations, I use ParaView, Houdini FX, Substance Painter, Photoshop and other applications. Omniverse allows me to use all of these tools, interactively, to create what I call ‘impactful visualizations.’”

Learn More from Omniverse Experts

Attend the GTC sessions covering Omniverse and the ParaView Connector to dive deeper into their features and benefits.

Get Started Today

The Omniverse ParaView Connector is coming soon to Omniverse. Download and get started with Omniverse open beta here.

Secure AI Data Centers at Scale: Next-Gen DGX SuperPOD Opens Era of Cloud-Native Supercomputing

As businesses extend the power of AI and data science to every developer, IT needs to deliver seamless, scalable access to supercomputing with cloud-like simplicity and security.

At GTC21, we introduced the latest NVIDIA DGX SuperPOD, which gives business, IT and their users a platform for securing and scaling AI across the enterprise, with the necessary software to manage it as well as a white-glove services experience to help operationalize it.

Solving AI Challenges of Every Size, at Massive Scale

Since its introduction, DGX SuperPOD has enabled enterprises to scale their development on infrastructure that can tackle problems of a size and complexity that were previously unsolvable in a reasonable amount of time. It’s AI infrastructure built and managed the way NVIDIA does its own.

As AI gets infused into almost every aspect of modern business, the need to deliver almost limitless access to computational resources powering development has been scaling exponentially. This escalation in demand is exemplified by business-critical applications like natural language processing, recommender systems and clinical research.

Organizations often tap into the power of DGX SuperPOD in two ways. Some use it to solve huge, monolithic problems such as conversational AI, where the computational power of an entire DGX SuperPOD is brought to bear to accelerate the training of complex natural language processing models.

Others use DGX SuperPOD to service an entire company, providing multiple teams access to the system to support fluctuating needs across a wide variety of projects. In this mode, enterprise IT is often acting as a service provider, managing this AI infrastructure-as-a-service, with multiple users (perhaps even adversarial ones) who need and expect complete isolation of each other’s work and data.

DGX SuperPOD with BlueField DPU

Increasingly, businesses need to bring the world of high-performance AI supercomputing into an operational mode where many developers can be assured their work is secure and isolated, just as it is in the cloud, and where IT can manage the environment much like a private cloud, delivering right-sized resources to jobs in a secure, multi-tenant environment.

This is called cloud-native supercomputing and it’s enabled by NVIDIA BlueField-2 DPUs, which bring accelerated, software-defined data center networking, storage, security and management services to AI infrastructure.

With a data processing unit optimized for enterprise deployment and 200 Gbps network connectivity, enterprises gain state-of-the-art, accelerated, fully programmable networking that implements zero trust security to protect against breaches, and isolate users and data, with bare-metal performance.

Every DGX SuperPOD now has this capability with the integration of two NVIDIA BlueField-2 DPUs in each DGX A100 node within it. IT administrators can use the offload, accelerate and isolate capabilities of NVIDIA BlueField DPUs to implement secure multi-tenancy for shared AI infrastructure without impacting the AI performance of the DGX SuperPOD.

Infrastructure Management with Base Command Manager

Every week, NVIDIA manages thousands of AI workloads executed on our internal DGX SATURNV infrastructure, which includes over 2,000 DGX systems. To date, we’ve run over 1.2 million jobs on it supporting over 2,500 developers across more than 200 teams. We’ve also been developing state-of-the-art infrastructure management software that ensures every NVIDIA developer is fully productive as they perform their research and develop our autonomous systems technology, robotics, simulations and more.

The software supports all this work, simplifies and streamlines management, and lets our IT team monitor health, utilization, performance and more. We’re adding this same software, called NVIDIA Base Command Manager, to DGX SuperPOD so businesses can run their environments the way we do. We’ll continuously improve Base Command Manager, delivering the latest innovations to customers automatically.

White-Glove Services

Deploying AI infrastructure is more than just installing servers and storage in data center racks. When a business decides to scale AI, it needs a hand-in-glove experience that guides it from design to deployment to operationalization, without leaving its IT team to figure out how to run the system once the “keys” are handed over.

With DGX SuperPOD White Glove Services, customers enjoy a full lifecycle services experience that’s backed by proven expertise from install to operations. Customers benefit from pre-delivery performance certified on NVIDIA’s own acceptance cluster, which validates the deployed system is running at specification before it’s handed off.

White Glove Services also include a dedicated multidisciplinary NVIDIA team that covers everything from installation to infrastructure management to workflow to addressing performance-impacting bottlenecks and optimizations. The services are designed to give IT leaders peace of mind and confidence as they entrust their business to DGX SuperPOD.

DGX SuperPOD at GTC21

To learn more about DGX SuperPOD and how you can consolidate AI infrastructure and centralize development across your enterprise, check out the talks presented by Charlie Boyle, vice president and general manager of DGX Systems, who covers our DGX SuperPOD news and more in two separate sessions at GTC.

Register for GTC, which runs through April 16, for free.
