NVIDIA Accelerates World’s First TOP500 Academic Cloud-Native Supercomputer to Advance Research at Cambridge University

Scientific discovery powered by supercomputing has the potential to transform the world with research that benefits science, industry and society. A new open, cloud-native supercomputer at Cambridge University offers unrivaled performance that will enable researchers to pursue exploration like never before.

The Cambridge Service for Data Driven Discovery, or CSD3 for short, is a UK National Research Cloud and one of the world’s most powerful academic supercomputers. It’s hosted at the University of Cambridge and funded by UKRI via STFC DiRAC, STFC IRIS, EPSRC, MRC and UKAEA.

The site, home to the U.K.’s largest academic research cloud environment, is now being enhanced by a new 4-petaflops Dell-EMC system with NVIDIA A100 GPUs, NVIDIA BlueField DPUs and NVIDIA InfiniBand networking that will deliver secure, multi-tenant, bare-metal high performance computing, AI and data analytics services for a broad cross section of the U.K. national research community. The CSD3 employs a new cloud-native supercomputing platform enabled by NVIDIA and a revolutionary cloud HPC software stack, called Scientific OpenStack, developed by the University of Cambridge and StackHPC with funding from the DiRAC HPC Facility and the IRIS Facility.

The CSD3 system is projected to deliver 4 petaflops of performance at deployment, ranking it among the top 500 supercomputers in the world. The system uses NVIDIA GPUs and x86 CPUs to provide over 10 petaflops of total performance, and it includes the U.K.’s fastest solid-state storage array, based on the Dell/Cambridge data accelerator.

The CSD3 provides open and secure access for researchers aiming to tackle some of the world’s most challenging problems across diverse fields such as astrophysics, nuclear fusion power generation development and lifesaving clinical medicine applications. It will advance scientific exploration using converged simulation, AI and data analytics workflows that members of the research community can access more easily and securely without sacrificing application performance or slowing work.

NVIDIA DPUs, HDR InfiniBand Power Next-Generation Systems

CSD3 uses NVIDIA HDR 200Gb/s InfiniBand-connected BlueField-2 DPUs to offload infrastructure management tasks, such as security policies and storage frameworks, from the host, while providing acceleration and isolation of workloads to maximize input/output performance.

“Providing an easy and secure way to access the immense computing power of CSD3 is crucial to ushering in a new generation of scientific exploration that serves both the scientific community and industry in the U.K.,” said Paul Calleja, director of Research Computing Services at Cambridge University. “The extreme performance of NVIDIA InfiniBand, together with the offloading, isolation and acceleration of workloads provided by BlueField DPUs, combined with our ‘Scientific OpenStack’ has enabled Cambridge University to provide a world-class cloud-native supercomputer for driving research that will benefit all of humankind.”

Networking performance is further accelerated by NVIDIA HDR InfiniBand’s In-Network Computing engines, providing optimal bare-metal performance, while natively supporting multi-node tenant isolation. CSD3 also takes advantage of the latest generation of the Dell EMC PowerEdge portfolio, with Dell EMC PowerEdge C6520 and PowerEdge XE8545 servers, both optimized for data-intensive and AI workloads.

CSD3 is expected to be operational later this year. Learn more about CSD3.

Cloud-Native Supercomputing Is Here: So, What’s a Cloud-Native Supercomputer?

Cloud-native supercomputing is the next big thing in supercomputing, and it’s here today, ready to tackle the toughest HPC and AI workloads.

The University of Cambridge is building a cloud-native supercomputer in the UK. Two teams of researchers in the U.S. are separately developing key software elements for cloud-native supercomputing.

The Los Alamos National Laboratory, as part of its ongoing collaboration with the UCF Consortium, is helping to deliver capabilities that accelerate data algorithms. Ohio State University is updating Message Passing Interface software to enhance scientific simulations.

NVIDIA is making cloud-native supercomputers available to users worldwide in the form of its latest DGX SuperPOD. It packs key ingredients such as the NVIDIA BlueField-2 data processing unit (DPU) now in production.

So, What Is Cloud-Native Supercomputing?

Like Reese’s treats that wrap peanut butter in chocolate, cloud-native supercomputing combines the best of two worlds.

Cloud-native supercomputers blend the power of high performance computing with the security and ease of use of cloud computing services.

Put another way, cloud-native supercomputing provides an HPC cloud with a system as powerful as a TOP500 supercomputer that multiple users can share securely, without sacrificing the performance of their applications.

A BlueField DPU supports offload of security, communications and management tasks to create an efficient cloud-native supercomputer.

What Can Cloud-Native Supercomputers Do?

Cloud-native supercomputers pack two key features.

First, they let multiple users share a supercomputer while ensuring that each user’s workload stays secure and private. It’s a capability known as “multi-tenant isolation” that’s available in today’s commercial cloud computing services. But it’s typically not found in HPC systems used for technical and scientific workloads, where raw performance is the top priority and security services have traditionally slowed operations.

Second, cloud-native supercomputers use DPUs to handle tasks such as storage, security for tenant isolation and systems management. This frees the CPU to focus on processing tasks, maximizing overall system performance.

The result is a supercomputer that enables native cloud services without a loss in performance. Looking forward, DPUs can handle additional offload tasks, so systems maintain peak efficiency running HPC and AI workloads.

How Do Cloud-Native Supercomputers Work?

Under the hood, today’s supercomputers couple two kinds of brains — CPUs and accelerators, typically GPUs.

Accelerators pack thousands of processing cores to speed parallel operations at the heart of many AI and HPC workloads. CPUs are built for the parts of algorithms that require fast serial processing. But over time they’ve become burdened with growing layers of communications tasks needed to manage increasingly large and complex systems.

Cloud-native supercomputers include a third brain to build faster, more efficient systems. They add DPUs that offload security, communications, storage and other jobs modern systems need to manage.

A Commuter Lane for Supercomputers

In traditional supercomputers, a computing job sometimes has to wait while the CPU handles a communications task. It’s a familiar problem that generates what’s called system noise.

In cloud-native supercomputers, computing and communications flow in parallel. It’s like opening a third lane on a highway to help all traffic flow more smoothly.

Early tests show cloud-native supercomputers can perform HPC jobs 1.4x faster than traditional ones, according to work at the MVAPICH lab at Ohio State, a specialist in HPC communications. The lab also showed cloud-native supercomputers achieve a 100 percent overlap of compute and communications functions, 99 percent higher than existing HPC systems.
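To make the overlap idea concrete, here is a minimal sketch in Python using mpi4py (not the MVAPICH benchmarks themselves): it posts nonblocking sends and receives, does local work while the network, and ideally a DPU, progresses the transfer, and only then waits for completion. The ring-exchange pattern, buffer sizes and the stand-in compute step are all illustrative assumptions.

```python
# Minimal compute/communication overlap sketch with mpi4py.
# Hypothetical ring exchange; buffer sizes and the "compute" step are placeholders.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

send_buf = np.full(1_000_000, rank, dtype=np.float64)
recv_buf = np.empty_like(send_buf)

# Post nonblocking communication with ring neighbors.
right = (rank + 1) % size
left = (rank - 1) % size
requests = [
    comm.Isend(send_buf, dest=right, tag=0),
    comm.Irecv(recv_buf, source=left, tag=0),
]

# Do useful local work while the transfer is in flight.
local_result = np.sin(send_buf).sum()

# Only now block until the exchange has finished.
MPI.Request.Waitall(requests)

print(f"rank {rank}: local={local_result:.3f}, first value from rank {left}: {recv_buf[0]}")
```

Run under a launcher such as `mpirun -n 4 python overlap.py`; the more of the transfer that is progressed off the CPU, the closer the compute step comes to hiding the communication entirely.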

Experts Speak on Cloud-Native Supercomputing

That’s why around the world, cloud-native supercomputing is coming online.

“We’re building the first academic cloud-native supercomputer in Europe to offer bare-metal performance with cloud-native InfiniBand services,” said Paul Calleja, director of computing at the University of Cambridge.

“This system, which would rank among the top 100 in the November 2020 TOP500 list, will enable our researchers to optimize their applications using the latest advances in supercomputing architecture,” he added.

HPC specialists are paving the way for further advances in cloud-native supercomputers.

“The UCF consortium of industry and academic leaders is creating the production-grade communication frameworks and open standards needed to enable the future for cloud-native supercomputing,” said Steve Poole, speaking in his role as director of the Unified Communication Framework, whose members include representatives from Arm, IBM, NVIDIA, U.S. national labs and U.S. universities.

“Our tests show cloud-native supercomputers have the architectural efficiencies to lift supercomputers to the next level of HPC performance while enabling new security features,” said Dhabaleswar K. (DK) Panda, a professor of computer science and engineering at Ohio State and lead of its Network-Based Computing Laboratory.

Learn More About Cloud-Native Supercomputers

To learn more, check out our technical overview on cloud-native supercomputing. You can also find more information online about the new system at the University of Cambridge and NVIDIA’s new cloud-native supercomputer.

And to get the big picture on the latest advances in HPC, AI and more, watch the GTC keynote.

 

From Scientific Analysis to Artistic Renderings, NVIDIA Omniverse Accelerates HPC Visualization with New ParaView Connector

Whether helping the world understand our most immediate threats, like COVID-19, or seeing the future of landing humans on Mars, researchers are increasingly leaning on scientific visualization to analyze, understand and extract scientific insights.

With large-scale simulations generating tens or even hundreds of terabytes of data, and with team members dispersed around the globe, researchers need tools that can both enhance these visualizations and help them work simultaneously across different high performance computing systems.

NVIDIA Omniverse is a real-time collaboration platform that lets users share 2D and 3D simulation data in universal scene description (USD) format from their preferred content creation and visualization applications. Global teams can use Omniverse to view, interact with and update the same dataset with a live connection, making collaboration truly interactive.
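As a rough illustration of what “USD format data” looks like in practice, the sketch below uses the open-source USD Python bindings (pxr) to author a stage containing a single triangle mesh. This is not the Omniverse or ParaView Connector API; the file path and prim names are made up.

```python
# Minimal sketch: author a USD stage with one triangle mesh using the
# open-source pxr bindings. File path and prim names are illustrative only.
from pxr import Usd, UsdGeom

stage = Usd.Stage.CreateNew("simulation_frame.usda")
UsdGeom.Xform.Define(stage, "/World")

mesh = UsdGeom.Mesh.Define(stage, "/World/Surface")
mesh.CreatePointsAttr([(0, 0, 0), (1, 0, 0), (0, 1, 0)])  # three vertices
mesh.CreateFaceVertexCountsAttr([3])                       # one triangle
mesh.CreateFaceVertexIndicesAttr([0, 1, 2])

stage.GetRootLayer().Save()
print("Wrote simulation_frame.usda")
```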

Omniverse ParaView Connector

The platform has expanded to address the scientific visualization community and now includes a connector to ParaView, one of the world’s most popular scientific visualization applications. Researchers use ParaView on their local workstations or on HPC systems to analyze large datasets for a variety of domains, including astrophysics, climate and weather, fluid dynamics and structural analysis.

With the availability of the Omniverse ParaView Connector, announced at GTC21, researchers can boost their productivity and speed their discoveries. Large datasets no longer need to be downloaded and exchanged, and colleagues can get instantaneous feedback, since Omniverse users can work in the same workspace in the cloud.

The NVIDIA Omniverse pipeline

Users can upload their USD format data to the Omniverse Nucleus DB from various application connectors, including the ParaView Connector. The clients then connect to the Omniverse Kit and take advantage of:

  • Photorealistic visuals – Users can leverage a variety of core NVIDIA technologies such as real-time ray tracing, photorealistic materials, depth of field, and advanced lighting and shading through the Omniverse platform’s components such as Omniverse RTX Renderer. This enables researchers to better visualize and understand the results of their simulations, leading to deeper insights.
  • Access to high-end visualization tools – Omniverse users can open and interact with USD files through a variety of popular applications like SideFX Houdini, Autodesk Maya and NVIDIA IndeX. See documentation on how to work with various applications in Omniverse to maximize analysis.
  • Interactivity at scale – Analyzing part of a dataset at a time through batched renderings is time-consuming. And traditional applications are too slow to render features like ray tracing, soft shadows and depth of field in real time, which are required for a fast and uninterrupted analysis. Now, users can intuitively interact with entire datasets in their original resolution at high frame rates for better and faster discoveries.

NVIDIA IndeX provides interactive visualization for large-scale volumetric data, allowing users to zoom in on the smallest details for any timestep in real time. With IndeX soon coming to Omniverse, users will be able to take advantage of both technologies for better and faster scientific analysis. This GTC session will go over what researchers can unlock when IndeX connects to Omniverse.

Visualization of Mars Lander using NVIDIA IndeX in NVIDIA Omniverse. Simulation data courtesy of NASA.
  • Real-time collaboration – Omniverse simplifies workflows by eliminating the need to download data on different systems. It also increases productivity by allowing researchers on different systems to visualize, analyze and modify the same data at the same time.
  • Publish cinematic visuals – Outreach is an essential part of scientific publications. With high-end rendering tools on the Omniverse platform, researchers and artists can interact in real time to transform their work into cinematic visuals that are easy for wide audiences to understand.

“Traditionally, scientists generate visualizations that are useful for data analysis, but are not always aesthetic and straightforward to understand by a broader audience,” said Brad Carvey, an Emmy and Addy Award-winning visualization research engineer at Sandia National Labs. “To generate a range of visualizations, I use ParaView, Houdini FX, Substance Painter, Photoshop and other applications. Omniverse allows me to use all of these tools, interactively, to create what I call ‘impactful visualizations.’”

Learn More from Omniverse Experts

Attend the following GTC sessions to dive deeper into the features and benefits of Omniverse and the ParaView connector:

Get Started Today

The Omniverse ParaView Connector is coming soon to Omniverse. Download and get started with Omniverse open beta here.

Secure AI Data Centers at Scale: Next-Gen DGX SuperPOD Opens Era of Cloud-Native Supercomputing

As businesses extend the power of AI and data science to every developer, IT needs to deliver seamless, scalable access to supercomputing with cloud-like simplicity and security.

At GTC21, we introduced the latest NVIDIA DGX SuperPOD, which gives business, IT and their users a platform for securing and scaling AI across the enterprise, with the necessary software to manage it as well as a white-glove services experience to help operationalize it.

Solving AI Challenges of Every Size, at Massive Scale

Since its introduction, DGX SuperPOD has enabled enterprises to scale their development on infrastructure that can tackle problems of a size and complexity that were previously unsolvable in a reasonable amount of time. It’s AI infrastructure built and managed the way NVIDIA does its own.

As AI gets infused into almost every aspect of modern business, the need to deliver almost limitless access to computational resources powering development has been scaling exponentially. This escalation in demand is exemplified by business-critical applications like natural language processing, recommender systems and clinical research.

Organizations often tap into the power of DGX SuperPOD in two ways. Some use it to solve huge, monolithic problems such as conversational AI, where the computational power of an entire DGX SuperPOD is brought to bear to accelerate the training of complex natural language processing models.

Others use DGX SuperPOD to service an entire company, providing multiple teams access to the system to support fluctuating needs across a wide variety of projects. In this mode, enterprise IT is often acting as a service provider, managing this AI infrastructure-as-a-service, with multiple users (perhaps even adversarial ones) who need and expect complete isolation of their work and data from one another.

DGX SuperPOD with BlueField DPU

Increasingly, businesses need to bring the world of high-performance AI supercomputing into an operational mode where many developers can be assured their work is secure and isolated like it is in the cloud. And where IT can manage the environment much like a private cloud, with the ability to deliver resources to jobs, right-sized to the task, in a secure, multi-tenant environment.

This is called cloud-native supercomputing and it’s enabled by NVIDIA BlueField-2 DPUs, which bring accelerated, software-defined data center networking, storage, security and management services to AI infrastructure.

With a data processing unit optimized for enterprise deployment and 200 Gbps network connectivity, enterprises gain state-of-the-art, accelerated, fully programmable networking that implements zero trust security to protect against breaches, and isolate users and data, with bare-metal performance.

Every DGX SuperPOD now has this capability with the integration of two NVIDIA BlueField-2 DPUs in each DGX A100 node within it. IT administrators can use the offload, accelerate and isolate capabilities of NVIDIA BlueField DPUs to implement secure multi-tenancy for shared AI infrastructure without impacting the AI performance of the DGX SuperPOD.

Infrastructure Management with Base Command Manager

Every week, NVIDIA manages thousands of AI workloads executed on our internal DGX SATURNV infrastructure, which includes over 2,000 DGX systems. To date, we’ve run over 1.2 million jobs on it supporting over 2,500 developers across more than 200 teams. We’ve also been developing state-of-the-art infrastructure management software that ensures every NVIDIA developer is fully productive as they perform their research and develop our autonomous systems technology, robotics, simulations and more.

The software supports all this work, simplifies and streamlines management, and lets our IT team monitor health, utilization, performance and more. We’re adding this same software, called NVIDIA Base Command Manager, to DGX SuperPOD so businesses can run their environments the way we do. We’ll continuously improve Base Command Manager, delivering the latest innovations to customers automatically.

White-Glove Services

Deploying AI infrastructure is more than just installing servers and storage in data center racks. When a business decides to scale AI, they need a hand-in-glove experience that guides them from design to deployment to operationalization, without burdening their IT team to figure out how to run it, once the “keys” are handed over.

With DGX SuperPOD White Glove Services, customers enjoy a full lifecycle services experience that’s backed by proven expertise from install to operations. Customers benefit from pre-delivery performance certified on NVIDIA’s own acceptance cluster, which validates the deployed system is running at specification before it’s handed off.

White Glove Services also include a dedicated multidisciplinary NVIDIA team that covers everything from installation to infrastructure management to workflow to addressing performance-impacting bottlenecks and optimizations. The services are designed to give IT leaders peace of mind and confidence as they entrust their business to DGX SuperPOD.

DGX SuperPOD at GTC21

To learn more about DGX SuperPOD and how you can consolidate AI infrastructure and centralize development across your enterprise, check out the sessions presented by Charlie Boyle, vice president and general manager of DGX Systems, who will cover our DGX SuperPOD news and more in two separate sessions at GTC:

Register for GTC, which runs through April 16, for free.

Learn more:

XAI Explained at GTC: Wells Fargo Examines Explainable AI for Modeling Lending Risk

Applying for a home mortgage can resemble a part-time job. But whether consumers are seeking out a home loan, car loan or credit card, there’s an incredible amount of work going on behind the scenes in a bank’s decision — especially if it has to say no.

To comply with an alphabet soup of financial regulations, banks and mortgage lenders have to keep pace with explaining the reasons for rejections to both applicants and regulators.

Busy in this domain, Wells Fargo will present some of its latest development work behind this complex decision-making, using AI models accelerated by GPUs, at NVIDIA GTC21 this week.

To inform their decisions, lenders have historically applied linear and non-linear regression models for financial forecasting and logistic and survivability models for default risk. These simple, decades-old methods are easy to explain to customers.

But machine learning and deep learning models are reinventing risk forecasting and in the process requiring explainable AI, or XAI, to allow for customer and regulatory disclosures.

Machine learning and deep learning techniques are more accurate but also more complex, which means banks need to spend extra effort to be able to explain decisions to customers and regulators.

These more powerful models allow banks to do a better job understanding the riskiness of loans, and may allow them to say yes to applicants that would have been rejected by a simpler model.

At the same time, these powerful models require more processing, so financial services firms like Wells Fargo are moving to GPU-accelerated models to improve processing, accuracy and explainability, and to provide faster results to consumers and regulators.

What Is Explainable AI?

Explainable AI is a set of tools and techniques that help understand the math inside an AI model.

XAI maps out the data inputs with the data outputs of models in a way that people can understand.

“You have all the linear sub-models, and you can see which factor is the most significant — you can see it very clearly,” said Agus Sudjianto, executive vice president and head of Corporate Model Risk at Wells Fargo, explaining his team’s recent work on Linear Iterative Feature Embedding (LIFE) in a research paper.

Wells Fargo XAI Development

The LIFE algorithm was developed to combine high prediction accuracy, ease of interpretation and efficient computation.

LIFE outperforms directly trained single-layer networks, according to Wells Fargo, as well as many other benchmark models in experiments.

The research paper, titled Linear Iterative Feature Embedding: An Ensemble Framework for Interpretable Model, was authored by Sudjianto, Jinwen Qiu, Miaoqi Li and Jie Chen.

Default or No Default 

Using LIFE, the bank can generate codes tied to model interpretability, offering clear explanations of which variables weighed heaviest in the decision. For example, codes might be generated for a high debt-to-income ratio or a FICO score that fell below a set minimum for a particular loan product.

There can be anywhere from 40 to 80 different variables taken into consideration for explaining rejections.

“We assess whether the customer is able to repay the loan. And then if we decline the loan, we can give a reason from a recent code as to why it was declined,” said Sudjianto.
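LIFE itself isn’t shown here, but the general idea of turning a model’s feature contributions into reason codes can be sketched with an ordinary logistic regression in scikit-learn. Everything below, feature names, data and thresholds, is hypothetical and is not Wells Fargo’s method.

```python
# Hedged sketch of "reason codes" from a simple, interpretable model.
# This is NOT the LIFE algorithm; features and data are made up.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["debt_to_income", "fico_score", "loan_to_value"]
X = np.array([[0.20, 780, 0.60],
              [0.55, 610, 0.95],
              [0.35, 700, 0.80],
              [0.60, 580, 0.90]])
y = np.array([0, 1, 0, 1])  # 1 = default

model = LogisticRegression(max_iter=1000).fit(X, y)

def reason_codes(x, top_k=2):
    """Rank features by how strongly they push this applicant toward 'default'."""
    contributions = model.coef_[0] * (x - X.mean(axis=0))
    order = np.argsort(contributions)[::-1]  # largest positive push first
    return [(features[i], round(float(contributions[i]), 3)) for i in order[:top_k]]

applicant = np.array([0.58, 600, 0.92])
print("P(default) =", model.predict_proba([applicant])[0, 1])
print("Top reason codes:", reason_codes(applicant))
```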

Future Work at Wells Fargo

Wells Fargo is also working on Deep ReLU networks to further its efforts in model explainability. Two of the team’s developers will be discussing research from their paper, Unwrapping The Black Box of Deep ReLU Networks: Interpretability, Diagnostics, and Simplification, at GTC.

Learn more about the LIFE model work by attending the GTC talk by Jie Chen, managing director for Corporate Model Risk at Wells Fargo. Learn about model work on Deep ReLU Networks by attending the talk by Aijun Zhang, a quantitative analytics specialist at Wells Fargo, and Zebin Yang, a Ph.D. student at Hong Kong University. 

Registration for GTC is free.

Image courtesy of joão vincient lewis on Unsplash

NVIDIA Advances Extended Reality, Unlocks New Possibilities for Companies Across Industries

NVIDIA technology has been behind some of the world’s most stunning virtual reality experiences.

Each new generation of GPUs has raised the bar for VR environments, producing interactive experiences with photorealistic details to bring new levels of productivity, collaboration and fun.

And with each GTC, we’ve introduced new technologies and software development kits that help developers create extended reality (XR) content and experiences that are more immersive and delightful than ever.

From tetherless streaming with NVIDIA CloudXR to collaborating in a virtual world with NVIDIA Omniverse, our latest technologies are powering the next generation of XR.

This year at GTC, NVIDIA announced a new release for CloudXR that adds support for iOS. We also had announcements with leading cloud service providers to deliver high-quality XR streaming from the cloud. And we released a new version of Variable Rate Supersampling to improve visual performance.

Bringing High Performance and VR Mobility Together

NVIDIA CloudXR is an advanced technology that gives XR users the best of both worlds: the performance of NVIDIA GPUs with the mobility of untethered all-in-one head-mounted displays.

CloudXR is designed to stream all kinds of XR content from any server to any device. Users can easily access powerful, high-quality immersive experiences from anywhere in the world, without being physically connected to a workstation.

From product designers reviewing 3D models to first responders running training simulations, anyone can benefit from CloudXR using Windows and Android devices. We will soon be releasing CloudXR 2.1, which adds support for Apple iOS AR devices, including iPads and iPhones.

Taking XR Streaming to the Cloud

With 5G networks rolling out, streaming XR over 5G from the cloud has the potential to significantly enhance workflows across industries. But the big challenge with delivering XR from the cloud is latency — for people to have a great VR experience, they have to maintain 20ms motion-to-photon latency.

To deliver the best cloud streaming experience, we’ve fine-tuned NVIDIA CloudXR. Over the past six months, we’ve taken great strides to bring CloudXR streaming to cloud service providers, from Amazon Web Services to Tencent.

This year at GTC, we’re continuing this march forward with additional news:

Also at GTC, Google will present a session that showcases CloudXR running on a Google Cloud instance.

To support CloudXR everywhere, we’re adding more client devices to our family.

We’ve worked with Qualcomm Technologies to deliver boundless XR, and with Ericsson on its 5G radio and packet core infrastructure to optimize CloudXR. Hear about the translation of this work to the manufacturing environment at BT’s session in GTC’s XR track.

And we’ve collaborated with Magic Leap on a CloudXR integration, which they will present at GTC. Magic Leap and CloudXR provide a great step forward for spatial computing and an advanced solution that brings many benefits to enterprise customers.

Redefining the XR Experience 

The quality of visuals in a VR experience is critical to provide users with the best visual performance. That’s why NVIDIA developed Variable Rate Supersampling (VRSS), which allows rendering resources to be focused in a foveated region where they’ll have the greatest impact on image quality.

The first VRSS version supported fixed foveated rendering in the center of the screen. The latest version, VRSS 2, integrates dynamic gaze tracking, moving the foveated region where the user is looking.

These advances in XR technology are also paving the way for a solution that allows users to learn, work, collaborate or play with others in a highly realistic immersive environment. The CloudXR iOS integration will soon be available in NVIDIA Omniverse, a collaboration and simulation platform that streamlines 3D production pipelines.

Teams around the world can enter Omniverse and simultaneously collaborate across leading content creation applications in a shared virtual space. With the upcoming CloudXR 2.1 release, Omniverse users can stream specific AR solutions using their iOS tablets and phones.

Expanding XR Workflows at GTC

Learn more about these advances in XR technology at GTC. Register for free and explore over 40 speaker sessions that cover a variety of XR topics, from NVIDIA Omniverse to AI integrations.

Check out the latest XR demos, and get access to an exclusive Connect with Experts session.

And watch a replay of the GTC keynote address by NVIDIA CEO Jensen Huang to catch up on the latest announcements.

Sign up to get news and updates on NVIDIA XR technologies.

Feature image credit: Autodesk VRED.

GTC Showcases New Era of Design and Collaboration

Breakthroughs in 3D model visualization, such as real-time raytraced rendering and immersive virtual reality, are making architecture and design workflows faster, better and safer.  

At GTC this week, NVIDIA announced the newest advances for the AEC industry with the latest NVIDIA Ampere architecture-based enterprise desktop RTX GPUs, along with an expanded range of mobile laptop GPUs.  

AEC professionals will also want to learn more about NVIDIA Omniverse Enterprise, an open platform for 3D collaboration and physically accurate simulation. 

New RTX GPUs Bring More Power, Performance for AEC 

The NVIDIA RTX A5000 and A4000 GPUs are designed to enhance workflows for architectural design visualization. 

Based on NVIDIA Ampere architecture, the RTX A5000 and A4000 integrate second-generation RT Cores to further boost ray tracing, and third-generation Tensor Cores to accelerate AI-powered workflows such as rendering denoising, deep learning super sampling and generative design.

Several architecture firms, including HNTB, have experienced how the RTX A5000 enhances design workflows.  

“The performance we get from the NVIDIA RTX A5000, even when enabling NVIDIA RTX Global Illumination, is amazing,” said Austin Reed, director of creative media studio at HNTB. “Having NVIDIA RTX professional GPUs at our designers’ desks at HNTB will enable us to fully leverage RTX technology in our everyday workflows.”

NVIDIA’s new range of mobile laptop GPU models — including the NVIDIA RTX A5000, A4000, A3000 and A2000, and the NVIDIA T1200, T600 and T500 — allows AEC professionals to select the perfect GPU for their workloads and budgets.

With this array of choices, millions of AEC professionals can do their best work from anywhere, even compute-intensive work such as immersive VR for construction rehearsals or point cloud visualization of massive 3D models.

NVIDIA Omniverse Enterprise: A Shared Space for 3D Collaboration  

Architecture firms can now accelerate graphics and simulation workflows with NVIDIA Omniverse Enterprise, the world’s first technology platform that enables global 3D design teams to simultaneously collaborate in a shared virtual space. 

The platform enables organizations to unite their assets and design software tools, so AEC professionals can collaborate on a single project file in real time. 

Powered by NVIDIA RTX technology, Omniverse delivers high-performance and physically accurate simulation for complex 3D scenes like cityscapes, along with real-time ray- and path-traced rendering. Architects and designers can instantly share physically accurate models across teams and devices, accelerating design workflows and reducing the number of review cycles.

Artists Create Futuristic Renderings with NVIDIA RTX  

Overlapping with GTC, the “Building Utopia” design challenge allowed archviz specialists around the world to discover how NVIDIA RTX real-time rendering is transforming architectural design visualization. 

Our thanks to all the participants who showcased their creativity and submitted short animations they generated using Chaos Vantage running on NVIDIA RTX GPUs. NVIDIA, Lenovo, Chaos Group, KitBash3D and CG Architect are thrilled to announce the winners. 

Congratulations to the winner, Yi Xiang, who receives a Lenovo ThinkPad P15 with an NVIDIA Quadro RTX 5000 GPU. In second place, Cheng Lei will get an NVIDIA Quadro RTX 8000, and in third place, Dariele Polinar will receive an NVIDIA Quadro RTX 6000.

Image courtesy of Yi Xiang.

Discover More AEC Content at GTC 

Learn more about the newest innovations and all the AEC-focused content at GTC by registering for free.

Check out the latest GTC demos that showcase amazing technology. Join sessions on NVIDIA Omniverse presented by leading architecture firms like CannonDesign, KPF and Woods Bagot. Learn how companies like The Grid Factory and The Gettys Group are using RTX-powered immersive experiences to accelerate design workflows.

And be sure to watch a replay of the GTC keynote address by NVIDIA founder and CEO Jensen Huang.

 

Featured image courtesy of KPF – Beijing Century City – 北京世纪城市.

NVIDIA, BMW Blend Reality, Virtual Worlds to Demonstrate Factory of the Future

The factories of the future will have a soul — a “digital twin” that blends man and machine in stunning new ways.

In a demo blending reality and virtual reality, robotics and AI to manage one of BMW’s automotive factories, NVIDIA CEO Jensen Huang on Monday rolled out a stunning vision of the future of manufacturing.

“We are working with BMW to create a future factory,” Huang announced during his keynote address at NVIDIA’s GPU Technology Conference before giving his audience a look.

The demo highlights the general availability of NVIDIA Omniverse Enterprise, the first technology platform enabling global 3D design teams to work together simultaneously across multiple software suites in a shared virtual space.

The AI factory demo brings a full suite of NVIDIA technologies on Omniverse, including the NVIDIA Isaac platform for robotics, the NVIDIA EGX edge computing platform and the NVIDIA Aerial software development kit, which brings GPU-accelerated, software-defined 5G wireless radio access networks to the factory floor.

‘The World’s Largest Custom-Manufacturing Company’

Inside the digital twin of BMW’s assembly system, powered by Omniverse, is an entire factory in simulation.

Each of BMW’s factory lines can produce up to 10 different cars, and BMW prides itself on giving customers plenty of choices.

There are over 100 options for each car, and more than 40 BMW models. In all, there are 2,100 possible ways to configure a new BMW.

“BMW may very well be the world’s largest custom-manufacturing company,” Huang said.

These vehicles are produced in 31 factories located around the world, explained Milan Nedeljković, member of the Board of Management of BMW AG.

Moving the Parts That Go into the Machines That Move Your Parts

In an instant, Huang and Nedeljković summoned a digital twin of one of BMW’s factories — and the screen was filled with gleaming cars being assembled by banks of perfectly synchronized robots — all simulated.

To design and reconfigure its factories, BMW’s global teams can collaborate in real time using different software packages, like Revit, Catia or point clouds, to design and plan the factory in 3D, and all the changes are visible, in real time, on Omniverse.

“The capability to operate in a perfect simulation revolutionizes BMW’s planning processes,” Nedeljković said.

Some of that work has to be hands-on. BMW regularly reconfigures its factories to accommodate new vehicle launches. Now, thanks to Omniverse, that doesn’t mean workers have to travel.

Nedeljković showed two BMW planning experts located in different parts of the world testing a new line design in Omniverse.

One of them “wormholes” — or travels virtually — into an assembly simulation with a motion capture suit and records task movements.

The other adjusts the line design, in real time.

“They work together to optimize the line as well as worker ergonomics and safety,” Nedeljković said.

The next step: recreating these kinds of interactions, at scale, in simulations, Nedeljković said.

To simulate workflows in Omniverse, digital humans are trained with data from real associates; they’re then used to test new workflows in simulation to plan for worker ergonomics and efficiency.

“That’s exactly why NVIDIA has Digital Human for simulation,” Huang said. “Digital Humans are trained with data from real associates.”

These digital humans can be used in simulations to test new workflows for worker ergonomics and efficiency.

BMW’s 57,000 factory workers share workspace with robots designed to make their jobs easier.

Omniverse, Nedeljković said, will help robots adapt to BMW’s reconfigured factories rapidly.

“With NVIDIA Isaac robotics platform, BMW is deploying a fleet of intelligent robots for logistics to improve the material flow in our production,” Nedeljković said.

That agility is necessary since BMW produces 2.5 million vehicles per year, and 99 percent of them are custom.

Omniverse can tap into NVIDIA Isaac for synthetic data generation and domain randomization, Huang said. That’s key to bootstrapping machine learning.

“Isaac Sim generates millions of relevant synthetic images, and varies the environment to teach robots,” Huang said.

Domain randomization can generate an infinite permutation of photorealistic objects, textures, orientations, and lighting conditions, Huang said.

“Simulation offers perfect ground truth, whether for detection, segmentation or depth perception,” he added.
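The snippet below is only a toy illustration of the domain-randomization idea, sampling a new pose, texture and lighting setup for each synthetic frame. It does not use the Isaac Sim API, and every parameter range and the render_frame() stub are assumptions.

```python
# Toy domain-randomization loop: vary object pose, texture and lighting
# for each synthetic training sample. Not the Isaac Sim API; all ranges
# and the render_frame() stub are hypothetical.
import random

TEXTURES = ["brushed_metal", "matte_plastic", "cardboard", "painted_steel"]

def sample_scene():
    return {
        "object_position_m": [random.uniform(-1.0, 1.0) for _ in range(3)],
        "object_yaw_deg": random.uniform(0.0, 360.0),
        "texture": random.choice(TEXTURES),
        "light_intensity_lux": random.uniform(200.0, 2000.0),
        "light_color_temp_k": random.uniform(3000.0, 6500.0),
    }

def render_frame(scene):
    # Stand-in for a renderer that would also emit perfect ground truth
    # (bounding boxes, segmentation masks, depth) for the randomized scene.
    return {"scene": scene, "labels": "ground-truth annotations would go here"}

dataset = [render_frame(sample_scene()) for _ in range(5)]
for sample in dataset:
    print(sample["scene"]["texture"], round(sample["scene"]["light_intensity_lux"]))
```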

Huang and Nedeljković showed a BMW employee monitoring operations in the factory. The operator is able to assign missions to different robots, and see a photorealistic digital twin of their progress in Omniverse — all updated by sensors throughout the factory.

With NVIDIA Fleet Command software, workers can securely orchestrate robots, and other devices, in the factory, Huang explained.

They can monitor complex manufacturing cells in real-time, update software over the air, and launch robots in the factory on missions.

Humans can even lend robots a “helping hand.” When an alert is sent to Mission Control, one of BMW’s human associates can teleoperate the robot — looking through its camera to guide it through a 5G connection.

Then, with a push of a button, the operator returns the robot to autonomous control.

Continuous Improvement, Continually Improving

Omniverse will help BMW reduce planning time and improve flexibility and precision, producing 30% more efficient planning processes.

“NVIDIA Omniverse and NVIDIA AI give us the chance to simulate the 31 factories in our production network,” Nedeljković said.

All the elements of the complete factory model — including the associates, the robots, the buildings, the assembly parts — can be simulated to support a wide range of AI-enabled use cases such as virtual factory planning, autonomous robots, predictive maintenance, big data analytics, he explained.

“These new innovations will reduce the planning times, improve flexibility and precision, and at the end produce 30 percent more efficient planning processes,” Nedeljković said.

The result: a beautifully crafted new car, an amazing machine that’s the product of an amazing machine — a factory able to capture and replicate every motion in the real world to a digital one, and back.

New Energy Vehicles Power Up with NVIDIA DRIVE

The electric vehicle revolution is about to reach the next level.

Leading startups and EV brands have all announced plans to deliver intelligent vehicles to the mass market beginning in 2022. And these new, clean-energy fleets will achieve AI capabilities for greater safety and efficiency with the high-performance compute of NVIDIA DRIVE.

The car industry has become a technology industry — future cars will be completely programmable with software-driven business models. Companies will offer services and subscriptions over the air for the life of the car.

These new energy vehicles, or NEVs, will kickstart this transition with centralized, software-defined compute that enables continuously improving, cutting-edge AI capabilities.

NEV Newcomers

For some companies, 2022 marks the initial launch of their visionary concepts into production reality.

Canoo unveiled its first vehicle — an all-electric van — in 2019. Now, the startup is on track to deliver an entire line of EVs, including a delivery truck, pickup and sports sedan, to customers starting in 2022.

Canoo’s flagship personal vehicle will leverage NVIDIA DRIVE AGX Xavier for smart driver assistance features. And since the DRIVE AGX platform is open and scalable, Canoo can continue to develop increasingly advanced capabilities through the life of its vehicles.

Also on the horizon is the much anticipated Faraday Future FF91. This premium EV is designed to be an intelligent third living space, with a luxurious interior packed with convenience features powered by NVIDIA DRIVE.

Also charging onto the EV scene is VinFast, a startup planning to launch a fleet of smart vehicles beginning in 2022. These vehicles will provide industry-leading safety and enhanced autonomy, leveraging the AI compute of NVIDIA DRIVE Xavier and, for subsequent generations, NVIDIA DRIVE Orin.

“NVIDIA is a vital partner for our work in autonomous driving,” said Hung Bui, chief executive of VinAI. “NVIDIA DRIVE delivers the core compute for our vehicles, delivering advanced sensing and other expanding capabilities.”

A Leading Legacy

NIO has announced a supercomputer to power its automated and autonomous driving features, with NVIDIA DRIVE Orin at its core.

The computer, known as Adam, will achieve over 1,000 trillion operations per second of performance with the redundancy and diversity necessary for safe autonomous driving. It also enables personalization in the vehicle, learning from individual driving habits and preferences while continuously improving from fleet data.

The Orin-powered supercomputer will debut in the flagship ET7 sedan, scheduled for production in 2022, and will be in every NIO model to follow.

Breakout EV maker Li Auto will also develop its next generation of electric vehicles using NVIDIA DRIVE AGX Orin. These new vehicles are being developed in collaboration with tier 1 supplier Desay SV and feature advanced autonomous driving features, as well as extended battery range for truly intelligent mobility.

This high-performance platform will enable Li Auto to deploy an independent, advanced autonomous driving system with its upcoming fleet.

Xpeng is already putting its advanced driving technology on the road. In March, the automaker completed a six-day cross-country autonomous drive with a fleet of intelligent P7 sedans. The vehicles operated without human intervention using the XPilot 3.0 autonomous driving system, powered by NVIDIA DRIVE AGX Xavier.

Finally, one of the world’s largest automakers, SAIC, is evolving to meet the industry’s biggest technological transformations with two new EV brands packed with advanced AI features.

R-Auto is a family of next-generation vehicles featuring the R-Tech advanced intelligent assistant, powered by NVIDIA DRIVE AGX Orin. R-Tech uses the unprecedented level of compute performance of Orin to run perception, sensor fusion and prediction for automated driving features in real time.

The ultra-premium IM brand is the product of a partnership with e-tail giant Alibaba. The long-range electric vehicles will feature AI capabilities powered by the high-performance, energy-efficient NVIDIA DRIVE Orin compute platform.

The first two vehicles in the lineup — a flagship sedan and SUV — will have autonomous parking and other automated driving features, as well as a 93kWh battery that comes standard. SAIC will begin taking orders for the sedan at the Shanghai Auto Show later this month, with the SUV following in 2022.

EVs are driving the next decade of transportation. And with NVIDIA DRIVE at the core, these vehicles have the intelligence and performance to go the distance.

NVIDIA CEO Introduces Software, Silicon, Supercomputers ‘for the Da Vincis of Our Time’

Buckle up. NVIDIA CEO Jensen Huang just laid out a singular vision filled with autonomous machines, super-intelligent AIs and sprawling virtual worlds – from silicon to supercomputers to AI software – in a single presentation.

“NVIDIA is a computing platform company, helping to advance the work for the Da Vincis of our time – in language understanding, drug discovery, or quantum computing,” Huang said in a talk delivered from behind his kitchen counter to NVIDIA’s GPU Technology Conference. “NVIDIA is the instrument for your life’s work.”

During a presentation punctuated with product announcements, partnerships, and demos that danced up and down the modern technology stack, Huang spoke about how NVIDIA is investing heavily in CPUs, DPUs, and GPUs and weaving them into new data center scale computing solutions for researchers and enterprises.

He talked about NVIDIA as a software company, offering a host of software built on NVIDIA AI as well as NVIDIA Omniverse for simulation, collaboration, and training autonomous machines.

Finally, Huang spoke about how NVIDIA is moving automotive computing forward with a new SoC, NVIDIA Atlan, and new simulation capabilities.

CPUs, DPUs and GPUs

Huang announced NVIDIA’s first data center CPU, Grace, named after Grace Hopper, a U.S. Navy rear admiral and computer programming pioneer.

Grace is a highly specialized processor targeting the largest data-intensive HPC and AI applications, such as the training of next-generation natural language processing models that have more than one trillion parameters.

When tightly coupled with NVIDIA GPUs, a Grace-based system will deliver 10x faster performance than today’s state-of-the-art NVIDIA DGX-based systems, which run on x86 CPUs.

While the vast majority of data centers are expected to be served by existing CPUs, Grace will serve a niche segment of computing. “Grace highlights the beauty of Arm,” Huang said.

Huang also announced that the Swiss National Supercomputing Center will build a supercomputer, dubbed Alps, that will be powered by Grace and NVIDIA’s next-generation GPU. The U.S. Department of Energy’s Los Alamos National Laboratory will also bring a Grace-powered supercomputer online in 2023, NVIDIA announced.

Accelerating Data Centers with BlueField-3

Further accelerating the infrastructure upon which hyperscale data centers, workstations, and supercomputers are built, Huang announced the NVIDIA BlueField-3 DPU.

The next-generation data processing unit will deliver the most powerful software-defined networking, storage and cybersecurity acceleration capabilities.

Where BlueField-2 offloaded the equivalent of 30 CPU cores, it would take 300 CPU cores to secure, offload and accelerate network traffic at 400 Gbps the way BlueField-3 does — a 10x leap in performance, Huang explained.

‘Three Chips’

Grace and BlueField are key parts of a data center roadmap consisting of 3 chips: CPU, GPU, and DPU, Huang said. Each chip architecture has a two-year rhythm with likely a kicker in between. One year will focus on x86 platforms, the next on Arm platforms.

“Every year will see new exciting products from us,” Huang said. “Three chips, yearly leaps, one architecture.”

Expanding Arm into the Cloud 

Arm, Huang said, is the most popular CPU in the world. “For good reason – it’s super energy-efficient and its open licensing model inspires a world of innovators,” he said.

For other markets like cloud, enterprise and edge data centers, supercomputing, and PC, Arm is just starting. Huang announced key Arm partnerships — Amazon Web Services in cloud computing, Ampere Computing in scientific and cloud computing, Marvell in hyper-converged edge servers, and MediaTek to create a Chrome OS and Linux PC SDK and reference system.

DGX – A Computer for AI

Weaving together NVIDIA silicon and software, Huang announced upgrades to NVIDIA’s DGX Station “AI data center in-a-box” for workgroups, and the NVIDIA DGX SuperPod, NVIDIA’s AI-data-center-as-a-product for intensive AI research and development.

The new DGX Station 320G harnesses 320 gigabytes of super-fast HBM2e memory connected to four NVIDIA A100 GPUs over 8 terabytes per second of memory bandwidth. Yet it plugs into a normal wall outlet and consumes just 1,500 watts of power, Huang said.

The DGX SuperPOD gets the new 80GB NVIDIA A100, bringing the SuperPOD to 90 terabytes of HBM2e memory. It’s been upgraded with NVIDIA BlueField-2, and NVIDIA is now offering it with the NVIDIA Base Command DGX management and orchestration tool.

NVIDIA EGX for Enterprise 

Further democratizing AI, Huang introduced a new class of NVIDIA-certified systems, high-volume enterprise servers from top manufacturers. They’re now certified to run the NVIDIA AI Enterprise software suite, exclusively certified for VMware vSphere 7, the world’s most widely used compute virtualization platform.

Expanding the NVIDIA-certified servers ecosystem is a new wave of systems featuring the NVIDIA A30 GPU for mainstream AI and data analytics and the NVIDIA A10 GPU for AI-enabled graphics, virtual workstations and mixed compute and graphics workloads, announced today.

AI-on-5G

Huang also discussed NVIDIA’s AI-on-5G computing platform – bringing together 5G and AI into a new type of computing platform designed for the edge that pairs the NVIDIA Aerial software development kit with the NVIDIA BlueField-2 A100, combining GPUs and CPUs into “the most advanced PCIE card ever created.”

Partners Fujitsu, Google Cloud, Mavenir, Radisys and Wind River are all developing solutions for NVIDIA’s AI-on-5G platform.

NVIDIA AI and NVIDIA Omniverse

Virtual, real-time 3D worlds inhabited by people, AIs and robots are no longer science fiction.

NVIDIA Omniverse is cloud-native, scalable to multiple GPUs, physically accurate, takes advantage of RTX real-time path tracing and DLSS, simulates materials with NVIDIA MDL, simulates physics with NVIDIA PhysX, and fully integrates NVIDIA AI, Huang explained.

“Omniverse was made to create shared virtual 3D worlds,” Huang said. “Ones not unlike the science fiction metaverse described by Neal Stephenson in his early 1990s novel ‘Snow Crash.’”

Huang announced that starting this summer, Omniverse will be available for enterprise licensing. Since its release in open beta partners such as Foster and Partners in architecture, ILM in entertainment, Activision in gaming, and advertising powerhouse WPP have put Omniverse to work.

The Factory of the Future

To show what’s possible with Omniverse, Huang, along with Milan Nedeljković, member of the Board of Management of BMW AG, showed how a photorealistic, real-time digital model — a “digital twin” of one of BMW’s highly automated factories — can accelerate modern manufacturing.

“These new innovations will reduce the planning times, improve flexibility and precision and at the end produce 30 percent more efficient planning,” Nedeljković said.

A Host of AI Software

Huang announced NVIDIA Megatron — a framework for training Transformers, which have led to breakthroughs in natural-language processing. Transformers generate document summaries, complete phrases in email, grade quizzes, generate live sports commentary, even code.

He detailed new models for Clara Discovery — NVIDIA’s acceleration libraries for computational drug discovery, and a partnership with Schrodinger — the leading physics-based and machine learning computational platform for drug discovery and material science.

To accelerate research into quantum computing — which relies on quantum bits, or qubits, that can be 0, 1, or both — Huang introduced cuQuantum to accelerate quantum circuit simulators so researchers can design better quantum computers.
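For readers new to the “0, 1, or both” idea, here is a tiny NumPy statevector toy: a Hadamard gate puts one qubit into an equal superposition, which is exactly the kind of state a circuit simulator tracks. It is only an illustration, not cuQuantum itself.

```python
# Toy single-qubit statevector simulation: a Hadamard gate creates an
# equal superposition of |0> and |1>. Illustrative only; not cuQuantum.
import numpy as np

ket0 = np.array([1.0, 0.0], dtype=complex)                 # the |0> state
hadamard = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

state = hadamard @ ket0                                    # apply the gate
probabilities = np.abs(state) ** 2                         # Born rule

print("amplitudes:", state)          # [0.707..., 0.707...]
print("P(0), P(1):", probabilities)  # [0.5, 0.5] -- "0, 1, or both"
```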

To secure modern data centers, Huang announced NVIDIA Morpheus – a data center security platform for real-time all-packet inspection built on NVIDIA AI, NVIDIA BlueField, Net-Q network telemetry software, and EGX.

To accelerate conversational AI, Huang announced the availability of NVIDIA Jarvis – a state-of-the-art deep learning AI for speech recognition, language understanding, translations, and expressive speech.

To accelerate recommender systems — the engine for search, ads, online shopping, music, books, movies, user-generated content, and news — Huang announced NVIDIA Merlin is now available on NGC, NVIDIA’s catalog of deep learning framework containers.

And to help customers turn their expertise into AI, Huang introduced NVIDIA TAO to fine-tune and adapt NVIDIA pre-trained models with data from customers and partners while protecting data privacy.

“There is infinite diversity of application domains, environments, and specializations,” Huang said. “No one has all the data – sometimes it’s rare, sometimes it’s a trade secret.”

The final piece is the inference server, NVIDIA Triton, to glean insights from the continuous streams of data coming into customer’s EGX servers or cloud instances, Huang said.

“Any AI model that runs on cuDNN, so basically every AI model,” Huang said. “From any framework – TensorFlow, Pytorch, ONNX, OpenVINO, TensorRT, or custom C++/python backends.”
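To give a rough sense of how applications pull insights from a Triton server, here is a hedged sketch using the tritonclient Python package to send a single HTTP inference request. The server address, model name and tensor names (“INPUT0”/“OUTPUT0”) are assumptions that would differ in any real deployment.

```python
# Hedged sketch of a Triton Inference Server HTTP client call.
# URL, model name and tensor names are assumptions, not a real deployment.
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

batch = np.random.rand(1, 16).astype(np.float32)            # dummy input data
infer_input = httpclient.InferInput("INPUT0", list(batch.shape), "FP32")
infer_input.set_data_from_numpy(batch)

response = client.infer(
    model_name="example_model",                             # hypothetical model
    inputs=[infer_input],
    outputs=[httpclient.InferRequestedOutput("OUTPUT0")],
)

print(response.as_numpy("OUTPUT0"))
```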

Advancing Automotive with NVIDIA DRIVE

Autonomous vehicles are “one of the most intense machine learning and robotics challenges – one of the hardest but also with the greatest impact,” Huang said.

NVIDIA is building modular, end-to-end solutions for the $10 trillion transportation industry so partners can leverage the parts they need.

Huang said NVIDIA DRIVE Orin, NVIDIA’s AV computing system-on-a-chip, which goes into production in 2022, was designed to be the car’s central computer.

Volvo Cars has been using the high-performance, energy-efficient compute of NVIDIA DRIVE since 2016 and developing AI-assisted driving features for new models on NVIDIA DRIVE Xavier with software developed in-house and by Zenseact, Volvo Cars’ autonomous driving software development company.

And Volvo Cars announced during the GTC keynote today that it will use NVIDIA DRIVE Orin to power the autonomous driving computer in its next-generation cars.

The decision deepens the companies’ collaboration, extending it to even more software-defined model lineups, beginning with the next-generation XC90, set to debut next year.

Meanwhile, NVIDIA DRIVE Atlan, NVIDIA’s next-generation automotive system-on-a-chip, and a true data center on wheels, “will be yet another giant leap,” Huang announced.

Atlan will deliver more than 1,000 trillion operations per second, or TOPS, and targets 2025 models.

“Atlan will be a technical marvel – fusing all of NVIDIA’s technologies in AI, auto, robotics, safety, and BlueField secure data centers,” Huang said.

Huang also announced the NVIDIA 8th generation Hyperion car platform – including reference sensors, AV and central computers, 3D ground-truth data recorders, networking, and all of the essential software.

Huang also announced that DRIVE Sim will be available for the community this summer.

Just as Omniverse can build a digital twin of the factories that produce cars, DRIVE Sim can be used to create a digital twin of autonomous vehicles to be used throughout AV development.

“The DRIVE digital twin in Omniverse is a virtual space that every engineer and every car in the fleet is connected to,” Huang said.

The ‘Instrument for Your Life’s Work’

Huang wrapped up with four points.

NVIDIA is now a 3-chip company – offering GPUs, CPUs, and DPUs.

NVIDIA is a software platform company and is dedicating enormous investment in NVIDIA AI and NVIDIA Omniverse.

NVIDIA is an AI company with Megatron, Jarvis, Merlin, Maxine, Isaac, Metropolis, Clara, and DRIVE, and pre-trained models you can customize with TAO.

NVIDIA is expanding AI with DGX for researchers, HGX for cloud, EGX for enterprise and 5G edge, and AGX for robotics.

“Mostly,” Huang said. “NVIDIA is the instrument for your life’s work.”
