Siemens Energy Taps NVIDIA to Develop Industrial Digital Twin of Power Plant in Omniverse

Siemens Energy, a leading supplier of power plant technology in the trillion-dollar worldwide energy market, is relying on the NVIDIA Omniverse platform to create digital twins to support predictive maintenance of power plants.

In doing so, Siemens Energy joins a wave of companies across various industries that are using digital twins to enhance their operations. Among them, BMW Group, which has 31 factories around the world, is building multiple industrial digital twins of its operations; and Ericsson is adopting Omniverse to build digital twins of urban areas to help determine how to construct 5G networks.

Indeed, the worldwide market for digital twin platforms is forecast to reach $86 billion by 2028, according to Grand View Research.

“NVIDIA’s open platforms along with physics-infused neural networks bring great value to Siemens Energy,” said Stefan Lichtenberger, technical portfolio manager at Siemens Energy.

Siemens Energy builds and services combined cycle power plants, which include large gas turbines and steam turbines. Heat recovery steam generators (HRSGs) use the exhaust heat from the gas turbine to create steam that drives the steam turbine, raising the plant’s overall thermodynamic efficiency to more than 60 percent, according to Siemens Energy.

At some sections of an HRSG, a steam and water mixture can cause corrosion that might impact the lifetime of the HRSG’s parts. Downtime for maintenance and repairs leads to lost revenue opportunities for utility companies.

Siemens Energy estimates that a 10 percent reduction in the industry’s average planned downtime of 5.5 days for HRSGs — required, among other things, to check pipes for wall-thickness loss due to corrosion — would save the industry $1.7 billion a year.

Simulations for Industrial Applications

Siemens Energy is enlisting NVIDIA technology to develop a new workflow to reduce the frequency of planned shutdowns while maintaining safety. Real-time data — water inlet temperature, pressure, pH, gas turbine power and temperature — is preprocessed to compute pressure, temperature and velocity of both water and steam. The pressure, temperature and velocity are fed into a physics-ML model created with the NVIDIA Modulus framework to simulate precisely how steam and water flow through the pipes in real time.
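
To make the approach concrete, here is a minimal sketch of the physics-informed idea in plain PyTorch. This is illustrative only, not the Modulus API: a small network maps the five operating inputs to the three flow quantities, and a stand-in physics residual is added to the data loss.

```python
import torch
import torch.nn as nn

class FlowNet(nn.Module):
    """Maps operating conditions to flow quantities (pressure, temperature, velocity)."""
    def __init__(self, n_in=5, n_out=3, width=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_in, width), nn.Tanh(),
            nn.Linear(width, width), nn.Tanh(),
            nn.Linear(width, n_out),
        )

    def forward(self, x):
        return self.net(x)

model = FlowNet()
# Placeholder batch: inlet temperature, pressure, pH, gas turbine power, gas turbine temperature.
x = torch.randn(64, 5, requires_grad=True)
y = model(x)

# Data term: match sensor measurements (random placeholder targets here).
data_loss = ((y - torch.randn_like(y)) ** 2).mean()

# Physics term: a real PINN penalizes the residual of the governing PDEs
# (e.g., mass and energy conservation) computed with autograd; this gradient
# penalty is only a stand-in for that residual.
grads = torch.autograd.grad(y.sum(), x, create_graph=True)[0]
physics_loss = (grads ** 2).mean()

loss = data_loss + 0.1 * physics_loss
loss.backward()  # both terms shape the network toward physically plausible fits
```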

The flow conditions in the pipes are then visualized with NVIDIA Omniverse, a virtual world simulation and collaboration platform for 3D workflows. Omniverse scales across multi-GPUs to help Siemens Energy understand and predict the aggregated effects of corrosion in real time.

Accelerating Digital Twin Development

Using NVIDIA software frameworks, running on NVIDIA A100 Tensor Core GPUs, Siemens Energy is simulating the corrosive effects of heat, water and other conditions on metal over time to fine-tune maintenance needs. Predicting maintenance more accurately with machine learning models can help reduce the frequency of maintenance checks without running the risk of failure. The scaled Modulus physics-informed neural network (PINN) model ran on Amazon Elastic Kubernetes Service (EKS) backed by P4d EC2 instances with A100 GPUs.

Building a computational fluid dynamics model to estimate corrosion within an HRSG’s pipes takes as long as eight weeks per unit, and this process is required for a portfolio of more than 600 units. A faster workflow using NVIDIA technologies can enable Siemens Energy to accelerate corrosion estimation from weeks to hours.

NVIDIA Omniverse provides a highly scalable platform that lets Siemens Energy replicate and deploy digital twins worldwide, accessing potentially thousands of NVIDIA GPUs as needed.

“NVIDIA’s work as the pioneer in accelerated computing, AI software platforms and simulation offers the scale and flexibility needed for industrial digital twins at Siemens Energy,” said Lichtenberger.

Learn more about Omniverse for virtual simulations and digital twins.


Gordon Bell Finalists Fight COVID, Advance Science With NVIDIA Technologies

Two simulations of a billion atoms, two fresh insights into how the SARS-CoV-2 virus works, and a new AI model to speed drug discovery.

Those are results from finalists for the Gordon Bell awards, considered the Nobel Prize of high performance computing. They used AI, accelerated computing or both to advance science with NVIDIA’s technologies.

A finalist for the special prize for COVID-19 research used AI to link multiple simulations, showing at a new level of clarity how the virus replicates inside a host.

The research — led by Arvind Ramanathan, a computational biologist at the Argonne National Laboratory — provides a way to improve the resolution of traditional tools used to explore protein structures. That could provide fresh insights into ways to arrest the spread of a virus.

The team, drawn from a dozen organizations in the U.S. and the U.K., designed a workflow that ran across systems including Perlmutter, an NVIDIA A100-powered system built by Hewlett Packard Enterprise, and Argonne’s NVIDIA DGX A100 systems.

“The capability to perform multisite data analysis and simulations for integrative biology will be invaluable for making use of large experimental data that are difficult to transfer,” the paper said.

As part of its work, the team developed a technique to speed molecular dynamics research using the popular NAMD program on GPUs. They also leveraged NVIDIA NVLink to speed data “far beyond what is currently possible with a conventional HPC network interconnect, or … PCIe transfers.”

A Billion Atoms in High Fidelity

Ivan Oleynik, a professor of physics at the University of South Florida, led a team named a finalist for the standard Gordon Bell award for its work producing the first highly accurate simulation of a billion atoms. It broke a record set by last year’s Gordon Bell winner by a factor of 23.

“It’s a joy to uncover phenomena never seen before, it’s a really big achievement we’re proud of,” said Oleynik.

The simulation of carbon atoms under extreme temperature and pressure could open doors to new energy sources and help describe the makeup of distant planets. It’s especially stunning because the simulation has quantum-level accuracy, faithfully reflecting the forces among the atoms.

“It’s accuracy we could only achieve by applying machine learning techniques on a powerful GPU supercomputer — AI is creating a revolution in how science is done,” said Oleynik.

The team exercised 4,608 IBM Power AC922 servers and 27,900 NVIDIA GPUs on Summit, the U.S. Department of Energy’s IBM-built system and one of the world’s most powerful supercomputers. The runs demonstrated that the code could scale with almost 100 percent efficiency to simulations of 20 billion atoms or more.

That code is available to any researcher who wants to push the boundaries of materials science.

Inside a Deadly Droplet

In another billion-atom simulation, a second finalist for the COVID-19 prize showed the Delta variant in an airborne droplet (below). It reveals biological forces that spread COVID and other diseases, providing a first atomic-level look at aerosols.

The work has “far reaching … implications for viral binding in the deep lung, and for the study of other airborne pathogens,” according to the paper from a team led by last year’s winner of the special prize, researcher Rommie Amaro from the University of California San Diego.

The team led by Amaro simulated the Delta SARS-CoV-2 virus in a respiratory droplet with more than a billion atoms.

“We demonstrate how AI coupled to HPC at multiple levels can result in significantly improved effective performance, enabling new ways to understand and interrogate complex biological systems,” Amaro said.

Researchers used NVIDIA GPUs on Summit, the Longhorn supercomputer built by Dell Technologies for the Texas Advanced Computing Center and commercial systems in Oracle Cloud Infrastructure (OCI).

“HPC and cloud resources can be used to significantly drive down time-to-solution for major scientific efforts as well as connect researchers and greatly enable complex collaborative interactions,” the team concluded.

The Language of Drug Discovery

Finalists for the COVID prize at Oak Ridge National Laboratory (ORNL) applied natural language processing (NLP) to the problem of screening chemical compounds for new drugs.

They used a dataset containing 9.6 billion molecules — the largest dataset applied to this task to date — to train, in two hours, a BERT NLP model that can speed the discovery of new drugs. The previous best effort took four days to train a model using a dataset of 1.1 billion molecules.

The work exercised more than 24,000 NVIDIA GPUs on the Summit supercomputer to deliver a whopping 603 petaflops. Now that the training is done, the model can run on a single GPU to help researchers find chemical compounds that could inhibit COVID and other diseases.
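
As a rough sketch of what single-GPU screening can look like, the snippet below tokenizes candidate compounds written as SMILES strings and runs them through a BERT-style encoder with the Hugging Face Transformers API. The checkpoint name is a placeholder, not the ORNL team’s released model.

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

device = "cuda" if torch.cuda.is_available() else "cpu"
name = "path/to/molecular-bert"  # placeholder for a BERT checkpoint trained on SMILES

tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForMaskedLM.from_pretrained(name).to(device).eval()

smiles = ["CC(=O)Oc1ccccc1C(=O)O", "CCO"]  # candidate compounds as SMILES strings
batch = tok(smiles, padding=True, return_tensors="pt").to(device)

with torch.no_grad():
    out = model(**batch)

# A real pipeline would rank candidates with a fine-tuned scoring head or by
# embedding similarity to known inhibitors; here we only inspect the output.
print(out.logits.shape)  # (batch, sequence_length, vocab_size)
```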

“We have collaborators here who want to apply the model to cancer signaling pathways,” said Jens Glaser, a computational scientist at ORNL.

“We’re just scratching the surface of training data sizes — we hope to use a trillion molecules soon,” said Andrew Blanchard, a research scientist who led the team.

Relying on a Full-Stack Solution

NVIDIA software libraries for AI and accelerated computing helped the team complete its work in what one observer called a surprisingly short time.

“We didn’t need to fully optimize our work for the GPU’s tensor cores because you don’t need specialized code, you can just use the standard stack,” said Glaser.

He summed up what many finalists felt: “Having a chance to be part of meaningful research with potential impact on people’s lives is something that’s very satisfying for a scientist.”

Tune in to our special address at SC21 either live on Monday, Nov. 15 at 3 pm PST or later on demand. NVIDIA’s Marc Hamilton will provide an overview of our latest news, innovations and technologies, followed by a live Q&A panel with NVIDIA experts.


Universities Expand Research Horizons with NVIDIA Systems, Networks

Just as the Dallas/Fort Worth airport became a hub for travelers crisscrossing America, the north Texas region will be a gateway to AI if folks at Southern Methodist University have their way.

SMU is installing an NVIDIA DGX SuperPOD, an accelerated supercomputer it expects will power projects in machine learning for its sprawling metro community with more than 12,000 students and 2,400 faculty and staff.

It’s one of three universities in the south-central U.S. announcing plans to use NVIDIA technologies to shift research into high gear.

Texas A&M and Mississippi State University are adopting NVIDIA Quantum-2, our 400 Gbit/second InfiniBand networking platform, as the backbone for their latest high-performance computers. In addition, a supercomputer in the U.K. has upgraded its InfiniBand network.

Texas Lassos a SuperPOD

“We’re the second university in America to get a DGX SuperPOD and that will put this community ahead in AI capabilities to fuel our degree programs and corporate partnerships,” said Michael Hites, chief information officer of SMU, referring to a system installed earlier this year at the University of Florida.

A September report called the Dallas area “hobbled” by a lack of major AI research. Ironically, the story hit the local newspaper just as SMU was buttoning up its plans for its DGX SuperPOD.

Previewing its initiative, an SMU report in March said AI is “at the heart of digital transformation … and no sector of society will remain untouched” by the technology. “The potential for dramatic improvements in K-12 education and workforce development is enormous and will contribute to the sustained economic growth of the region,” it added.

SMU Ignite, a $1.5 billion fundraiser kicked off in September, will fuel the AI initiative, helping propel Southern Methodist into the top ranks of university research nationally. The university is hiring a chief innovation officer to help guide the effort.

Crafting a Computational Crucible

It’s all about the people, says Jason Warner, who manages the IT teams that support SMU’s researchers. So, he hired a core group of data science specialists to staff a new center at SMU’s Ford Hall for Research and Innovation, a hub Warner calls SMU’s “computational crucible.”

Eric Godat leads that team. He earned his Ph.D. in particle physics at SMU modeling nuclear structure using data from the Large Hadron Collider.

Now he’s helping fire up SMU’s students about opportunities on the DGX SuperPOD. As a first step, he asked two SMU students to build a miniature model of a DGX SuperPOD using NVIDIA Jetson modules.

“We wanted to give people — especially those in nontechnical fields who haven’t done AI — a sense of what’s coming,” Godat said.

SMU undergrad Connor Ozenne helped build a miniature DGX SuperPOD that was featured in SMU’s annual report. It uses 16 Jetson modules in a cluster students will benchmark as if it were a TOP500 system.

The full-sized supercomputer, made up of 20 NVIDIA DGX A100 systems on an NVIDIA Quantum InfiniBand network, could be up and running as early as January thanks to its Lego-like, modular architecture. It will deliver a whopping 100 petaflops of computing power, enough to give it a respectable slot on the TOP500 list of the world’s fastest supercomputers.
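
That headline figure is simple arithmetic, assuming NVIDIA’s commonly quoted rating of roughly 5 petaflops of AI performance per DGX A100:

```python
# Back-of-the-envelope check (assumes ~5 petaflops of AI performance per DGX A100).
dgx_systems = 20
ai_petaflops_per_dgx = 5
print(dgx_systems * ai_petaflops_per_dgx, "petaflops")  # 100
```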

Aggies Tap NVIDIA Quantum-2 InfiniBand for ACES

About 200 miles south, the high performance computing center at Texas A&M will be among the first to plug into the NVIDIA Quantum-2 InfiniBand platform. Its ACES supercomputer, built by Dell Technologies, will use the 400G InfiniBand network to connect researchers to a mix of five accelerators from four vendors.

NVIDIA Quantum-2 ensures “that a single job on ACES can scale up using all the computing cores and accelerators. Besides the obvious 2x jump in throughput from NVIDIA Quantum-1 InfiniBand at 200G, it will provide improved total cost of ownership, beefed up in-network computing features and increased scaling,” said Honggao Liu, ACES’s principal investigator and project director.

Texas A&M already gives researchers access to accelerated computing in four systems that include more than 600 NVIDIA A100 Tensor Core and prior-generation GPUs. Two of the four systems use an earlier version of NVIDIA’s InfiniBand technology.

MSU Rides a 400G Train

Mississippi State University will also tap the NVIDIA Quantum-2 InfiniBand platform. It’s the network of choice for a new system that supplements Orion, the largest of four clusters MSU manages, all using earlier versions of InfiniBand.

Both Orion and the new system are funded by the U.S. National Oceanic and Atmospheric Administration (NOAA) and built by Dell. They conduct work for NOAA’s missions as well as research for MSU.

Orion was listed as the fourth largest academic supercomputer in America when it debuted on the TOP500 list in June 2019.

“We’re using InfiniBand in four generations of supercomputers here at MSU so we know it’s both powerful and mature to run our big jobs reliably,” said Trey Breckenridge, director of high performance computing at MSU.

“We’re adding a new system with NVIDIA Quantum-2 to stay at the leading edge in HPC,” he added.

Quantum Nets Cover the UK

Across the pond in the U.K., the Data Intensive supercomputer at the University of Leicester, known as the DIaL system, has upgraded to NVIDIA Quantum, the 200G version of InfiniBand.

“DIaL is specifically designed to tackle the complex, data-intensive questions which must be answered to evolve our understanding of the universe around us,” said Mark Wilkinson, professor of theoretical astrophysics at the University of Leicester and director of its HPC center.

“The intense requirements of these specialist workloads rely on the unparalleled bandwidth and latency that only InfiniBand can provide to make the research possible,” he said.

DIaL is one of four supercomputers in the U.K.’s DiRAC facility using InfiniBand, including the Tursa system at the University of Edinburgh.

InfiniBand Shines in Evaluation

In a technical evaluation, researchers found that Tursa, with NVIDIA GPU accelerators on a Quantum network, delivered 5x the performance of their CPU-only Tesseract system, which uses an alternative interconnect.

Application benchmarks show 16 nodes of Tursa have twice the performance of 512 nodes of Tesseract. Tursa delivers 10 teraflops per node using 90 percent of the network’s bandwidth, with a significant improvement in performance per kilowatt over Tesseract.

It’s another example of why most of the world’s TOP500 systems are using NVIDIA technologies.

For more, watch our special address at SC21 either live on Monday, Nov. 15 at 3 pm PST or later on demand. NVIDIA’s Marc Hamilton will provide an overview of our latest news, innovations and technologies, followed by a live Q&A panel with NVIDIA experts.


NVIDIA GTC Sees Spike in Developers From Africa

The appetite for AI and data science is increasing, and nowhere is that more evident than in emerging markets.

Registrations for this week’s GTC from African nations tripled compared with the spring edition of the event.

Indeed, Nigeria had the third most registered attendees among countries in the EMEA region, ahead of France, Italy and Spain. Five other African nations were among the region’s top 15 for registrants: Egypt (No. 6), Tunisia (No. 7), Ghana (No. 9), South Africa (No. 11) and Kenya (No. 12).

The numbers demonstrate the growing interest among Africa-based developers in accessing content, information and expertise centered on AI, data science, robotics and high performance computing. Developers are using these technologies as a platform to create innovative applications that address local challenges, such as healthcare and climate change.

Global Conference, Localized Content

Among the speakers at GTC were several members of NVIDIA Emerging Chapters, a new program that enables local communities in emerging economies to build and scale their AI, data science and graphics projects. Such highly localized content empowers developers from these areas and raises awareness of their unique challenges and needs.

For example, tinyML Kenya, a community of machine learning researchers and practitioners, spoke on machine learning as a force for good in emerging markets, with impacts on healthcare, education, conservation and climate change. Zindi, Africa’s first data science competition platform, participated in a session about bridging the AI education gap among developers, IT professionals and students on the continent.

Multiple African organizations and universities also spoke at GTC about how developers in the region and emerging markets are using AI to build innovations that address local challenges. Among them were Kenya’s Adanian Labs, Cadi Ayyad University of Morocco, Data Science Africa, Python Ghana, and Nairobi Women in Machine Learning & Data Science.

Several Africa-based members of NVIDIA Inception, a free program designed to empower cutting-edge startups, spoke about the AI revolution underway on the continent and in other emerging areas. Cyst.ai, minoHealth, Fastagger and ARMA were among the 70+ Inception startups that presented at the conference.

AI was not the only innovation topic for local developers. Top African gaming and animation companies Usiku Games, Leti Arts, NETINFO 3D and HeroSmashers TV also joined the party to discuss how the continent’s burgeoning gaming industry continues to thrive, and the tools game developers need to succeed in a part of the world where access to compute resources is often limited.

Engaging Developers Everywhere

While AI developers and startup founders come from all over the world, developers in emerging areas face unique circumstances and opportunities. This means global representation and localized access become even more important to bolster developer ecosystems in emerging markets.

Through NVIDIA Emerging Chapters, grassroots organizations and communities can provide developers access to the NVIDIA Developer Program and course credits for the NVIDIA Deep Learning Institute, helping bridge new paths to AI development in the region.

Learn more about AI in emerging markets today.

Watch NVIDIA CEO Jensen Huang’s GTC keynote address.


NVIDIA to Build Earth-2 Supercomputer to See Our Future

The earth is warming. The past seven years are on track to be the seven warmest on record. The emissions of greenhouse gases from human activities are responsible for approximately 1.1°C of average warming since the period 1850-1900.

What we’re experiencing is very different from the global average. We experience extreme weather — historic droughts, unprecedented heatwaves, intense hurricanes, violent storms and catastrophic floods. Climate disasters are the new norm.

We need to confront climate change now. Yet, we won’t feel the impact of our efforts for decades. It’s hard to mobilize action for something so far in the future. But we must know our future today — see it and feel it — so we can act with urgency.

To make our future a reality today, simulation is the answer.

To develop the best strategies for mitigation and adaptation, we need climate models that can predict the climate in different regions of the globe over decades.

Unlike predicting the weather, which primarily models atmospheric physics, climate models are multidecade simulations that model the physics, chemistry and biology of the atmosphere, waters, ice, land and human activities.

Climate simulations are configured today at 10- to 100-kilometer resolutions.

But greater resolution is needed to model changes in the global water cycle — water movement from the ocean, sea ice, land surface and groundwater through the atmosphere and clouds. Changes in this system lead to intensifying storms and droughts.

Meter-scale resolution is needed to simulate clouds that reflect sunlight back to space. Scientists estimate that these resolutions will demand millions to billions of times more computing power than what’s currently available. It would take decades to achieve that through the ordinary course of computing advances, which accelerate 10x every five years.
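
That “decades” estimate follows directly from the pace just cited. If compute grows 10x every five years, the wait for a given speedup is easy to work out:

```python
import math

# Years to reach a target speedup at 10x every five years, the pace cited above.
for target in (1e6, 1e9):
    years = 5 * math.log10(target)
    print(f"{target:.0e}x speedup: about {years:.0f} years")
# 1e+06x speedup: about 30 years
# 1e+09x speedup: about 45 years
```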

For the first time, we have the technology to do ultra-high-resolution climate modeling, to jump to lightspeed and predict changes in regional extreme weather decades out.

We can achieve million-x speedups by combining three technologies: GPU-accelerated computing; deep learning and breakthroughs in physics-informed neural networks; and AI supercomputers, along with vast quantities of observed and model data to learn from.

And with super-resolution techniques, we may have within our grasp the billion-x leap needed to do ultra-high-resolution climate modeling. Countries, cities and towns can get early warnings to adapt and make infrastructures more resilient. And with more accurate predictions, people and nations will act with more urgency.

So, we will dedicate ourselves and our significant resources, directing NVIDIA’s scale and expertise in computational sciences, to join with the world’s climate science community.

NVIDIA this week revealed plans to build the world’s most powerful AI supercomputer dedicated to predicting climate change. Named Earth-2, or E-2, the system would create a digital twin of Earth in Omniverse.

The system would be the climate change counterpart to Cambridge-1, the world’s most powerful AI supercomputer for healthcare research. We unveiled Cambridge-1 earlier this year in the U.K. and it’s being used by a number of leading healthcare companies.

All the technologies we’ve invented up to this moment are needed to make Earth-2 possible. I can’t imagine a greater or more important use.


NVIDIA Omniverse Enterprise Delivers the Future of 3D Design and Real-Time Collaboration

For millions of professionals around the world, 3D workflows are essential.

Everything they build, from cars to products to buildings, must first be designed or simulated in a virtual world. At the same time, more organizations are tackling complex designs while adjusting to a hybrid work environment.

As a result, design teams need a solution that helps them improve remote collaboration while managing 3D production pipelines. And NVIDIA Omniverse is the answer.

NVIDIA Omniverse Enterprise, now available, helps professionals across industries transform complex 3D design workflows. The groundbreaking platform lets global teams working across multiple software suites collaborate in real time in a shared virtual space.

Designed for the Present, Built for the Future

With Omniverse Enterprise, professionals gain new capabilities to boost traditional visualization workflows. It’s a newly launched subscription that brings fully supported software to 3D organizations of any scale.

The foundation of Omniverse is Pixar’s Universal Scene Description, an open-source file format that enables users to enhance their design process with real-time interoperability across applications. Additionally, the platform is built on NVIDIA RTX technology, so creators can render faster, do multiple iterations at no opportunity cost, and quickly achieve their final designs with stunning, photorealistic detail.
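
To give a flavor of how USD-based interoperability works, here is a minimal sketch that authors a simple scene with Pixar’s open-source USD Python bindings (the pxr module). It is illustrative only, not Omniverse-specific code.

```python
from pxr import Usd, UsdGeom

stage = Usd.Stage.CreateNew("hello_world.usda")          # create a new USD layer on disk
UsdGeom.Xform.Define(stage, "/World")                    # a transform prim at the root
sphere = UsdGeom.Sphere.Define(stage, "/World/Sphere")   # a sphere prim beneath it
sphere.GetRadiusAttr().Set(2.0)                          # author an attribute value
stage.GetRootLayer().Save()                              # any USD-aware app can now open it
```

Because every connected application reads and writes the same layered scene description, edits made in one tool can show up live in the others.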

Ericsson, a leading telecommunications company, is using Omniverse Enterprise to create a digital twin of a 5G radio network to simulate and visualize signal propagation and performance. Within Omniverse, Ericsson has built a true-to-reality city-scale simulation environment, bringing in scenes, models and datasets from Esri CityEngine.

A New Experience for 3D Design

Omniverse Enterprise is available worldwide through global computer makers BOXX Technologies, Dell Technologies, HP, Lenovo and Supermicro. Many companies have already experienced the advanced capabilities of the platform.

Epigraph, a leading provider for companies such as Black & Decker, Yamaha and Wayfair, creates physically accurate 3D assets and product experiences for e-commerce. BOXX Technologies helped Epigraph achieve faster rendering with Omniverse Enterprise and NVIDIA RTX A6000 graphics. The advanced RTX Renderer in Omniverse enabled Epigraph to render images at final-frame quality faster, while significantly reducing the amount of computational resources needed.

Media.Monks is exploring ways to enhance and extend their workflows in a virtual world with Omniverse Enterprise, together with HP. The combination of remote computing and collocated workstations enables the Media.Monks design, creative and solutions teams to accelerate their clients’ digital transformation toward a more decentralized future. In collaboration with NVIDIA and HP, Media.Monks is exploring new approaches and the convergence of collaboration, real-time graphics, and live broadcast for a new era of brand virtualization.

Dell Technologies is presenting at GTC to show how Omniverse is advancing the hybrid workforce with Dell Precision workstations, Dell EMC PowerEdge servers and Dell Technologies Validated Designs. The interactive panel discussion will dive into why users need Omniverse today, and how Dell is helping more professionals adopt this solution, from the desktop to the data center.

And Lenovo is showcasing how advanced technologies like Omniverse are making remote collaboration seamless. Whether it’s connecting to a powerful mobile workstation on the go, a physical workstation back in the office, or a virtual workstation in the data center, Lenovo, TGX and NVIDIA are providing remote workers with the same experience they get at the office.

These systems manufacturers have also enabled other Omniverse Enterprise customers such as Kohn Pedersen Fox, Woods Bagot and WPP to improve their efficiency and productivity with real-time collaboration.

Experience Virtual Worlds With NVIDIA Omniverse

NVIDIA Omniverse Enterprise is now generally available by subscription from BOXX Technologies, Dell Technologies, HP, Lenovo and Supermicro.

The platform is optimized and certified to run on NVIDIA RTX professional mobile workstations and NVIDIA-Certified Systems, including desktops and servers on the NVIDIA EGX platform.

With Omniverse Enterprise, creative and design teams can connect their Autodesk 3ds Max, Maya and Revit, Epic Games’ Unreal Engine, McNeel & Associates Rhino, Grasshopper and Trimble SketchUp workflows through live-edit collaboration. Learn more about NVIDIA Omniverse Enterprise and our 30-day evaluation program. For individual artists, there’s also a free beta version of the platform available for download.

Watch NVIDIA founder and CEO Jensen Huang’s GTC keynote address.


Catch Some Rays This GFN Thursday With ‘Jurassic World Evolution 2’ and ‘Bright Memory: Infinite’ Game Launches

This week’s GFN Thursday packs a prehistoric punch with the release of Jurassic World Evolution 2. It also gets infinitely brighter with the release of Bright Memory: Infinite.

Both games feature NVIDIA RTX technologies and are part of the six titles joining the GeForce NOW library this week.

GeForce NOW RTX 3080 members will get the peak cloud gaming experience in these titles and more. In addition to RTX ON, they’ll stream both games at up to 1440p and 120 frames per second on PC and Mac; and up to 4K on SHIELD.

Preorders for six-month GeForce NOW RTX 3080 memberships are currently available in North America and Europe for $99.99. Sign up today to be among the first to experience next-generation gaming.

The Latest Tech, Streaming From the Cloud

GeForce RTX GPUs give PC gamers the best visual quality and highest frame rates. They also power NVIDIA RTX technologies. And with GeForce RTX 3080-class GPUs making their way to the cloud in the GeForce NOW SuperPOD, the most advanced platform for ray tracing and AI is now available across nearly any low-powered device.

The next generation of cloud gaming is powered by the GeForce NOW SuperPOD, built on second-generation RTX with the NVIDIA Ampere architecture.

Real-time ray tracing creates the most realistic and immersive graphics in supported games, rendering environments in cinematic quality. NVIDIA DLSS gives games a speed boost with uncompromised image quality, thanks to advanced AI.

With GeForce NOW’s Priority and RTX 3080 memberships, gamers can take advantage of these features in numerous top games, including new releases like Jurassic World Evolution 2 and Bright Memory: Infinite.

The added performance from the latest generation of NVIDIA GPUs also means GeForce NOW RTX 3080 members have exclusive access to stream at up to 1440p at 120 FPS on PC, 1600p at 120 FPS on most MacBooks, 1440p at 120 FPS on most iMacs, 4K HDR at 60 FPS on NVIDIA SHIELD TV and up to 120 FPS on select Android devices.

Welcome to …

Immerse yourself in a world evolved: experience a compelling, original story and the chaos of “what-if” scenarios from the iconic Jurassic World and Jurassic Park films, and discover over 75 awe-inspiring dinosaurs, including brand-new flying and marine reptiles. Play with support for NVIDIA DLSS this week on GeForce NOW.

GeForce NOW gives your low-end rig the power to play Jurassic World Evolution 2 with even higher graphics settings thanks to NVIDIA DLSS, streaming from the cloud.

Blinded by the (Ray-Traced) Light

FYQD-studio, a one-man development team that released Bright Memory in 2020, is back with a full-length sequel, Bright Memory: Infinite, streaming from the cloud with RTX ON.

Bright Memory: Infinite combines the FPS and action genres with dazzling visuals, amazing set pieces and exciting action. Mix and match available skills and abilities to unleash magnificent combos on enemies. Cut through the opposing forces with your sword, or lock and load with ranged weaponry, customized with a variety of ammunition. The choice is yours.

Priority and GeForce NOW RTX 3080 members can experience every moment of the action the way FYQD-studio intended, gorgeously rendered with ray-traced reflections, ray-traced shadows, ray-traced caustics and dazzling RTX Global Illumination. And GeForce NOW RTX 3080 members can play at up to 1440p and 120 FPS on PC and Mac.

Never Run Out of Gaming

GFN Thursday always means more games.

Members can find these six new games streaming on the cloud this week:

  • Bright Memory: Infinite (new game launch on Steam)
  • Epic Chef (new game launch on Steam)
  • Jurassic World Evolution 2 (new game launch on Steam and Epic Games Store)
  • MapleStory (Steam)
  • Severed Steel (Steam)
  • Tale of Immortal (Steam)

We make every effort to launch games on GeForce NOW as close to their release as possible, but, in some instances, games may not be available immediately.

What are you planning to play this weekend? Let us know on Twitter or in the comments below.


How Researchers Use NVIDIA AI to Help Mitigate Misinformation

Researchers tackling the challenge of visual misinformation — think the TikTok video of Tom Cruise supposedly golfing in Italy during the pandemic — must continuously advance their tools to identify AI-generated images.

NVIDIA is furthering this effort by collaborating with researchers to support the development and testing of detector algorithms on our state-of-the-art image-generation models.

By crafting a dataset of highly realistic images with StyleGAN3 — our latest media generation algorithm — NVIDIA provided crucial information to researchers testing how well their detector algorithms work on AI-generated images created by previously unseen techniques. These detectors help experts identify and analyze synthetic images to combat visual misinformation.

At this week’s NVIDIA GTC, this work was shared in a session titled “Alias-Free Generative Adversarial Networks,” which provided an overview of StyleGAN3. To watch on demand, register free for GTC.

“This has been a unique situation in that people doing image generation detection have worked closely with the people at NVIDIA doing image generation,” said Edward Delp, a professor at Purdue University and principal investigator of one of the research teams. “This collaboration with NVIDIA has allowed us to build even better and more robust detectors. The ‘early access’ approach used by NVIDIA is an excellent way to further forensics research.”

Advancing Media Forensics With StyleGAN3 Images

When researchers know the underlying code or neural network of an image-generation technique, developing a detector that can identify images created by that AI model is a comparatively straightforward task.

It’s more challenging — and useful — to build a detector that can spot images generated by brand-new AI models.

StyleGAN3, a model developed by NVIDIA Research that will be presented at the NeurIPS 2021 AI conference in December, advances the state of the art in generative adversarial networks used to synthesize images. The breakthrough brings graphics principles in signal processing and image processing to GANs to avoid aliasing: a kind of image corruption often visible when images are rotated, scaled or translated.

NVIDIA researchers developed StyleGAN3 using a publicly released dataset of 70,000 images. Another 27,000 unreleased images from that collection, alongside AI-generated images from StyleGAN3, were shared with forensic research collaborators as a test dataset.

The collaboration with researchers enabled the community to assess how a diversity of different detector approaches performs in identifying images synthesized by StyleGAN3 — before the generator’s code was publicly released.

These detectors work in many different ways: Some may look for telltale correlations among groups of pixels produced by the neural network, while others might look for inconsistencies or asymmetries that give away synthetic images. Yet others attempt to reverse engineer the synthesis approach to estimate if a particular neural network could have created the image.

One of these detectors, GAN-Scanner, reaches up to 95 percent accuracy in identifying synthetic images generated with StyleGAN3, despite never having seen an image created by that model during training. Another detector, created by Politecnico di Milano, achieves an area under the curve of .999 (where a perfect classifier would achieve an AUC of 1.0).
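
For readers unfamiliar with the metric, AUC summarizes how well a detector’s scores rank synthetic images above real ones across all decision thresholds. A hypothetical evaluation with made-up scores might look like this in scikit-learn:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# 0 = real photo, 1 = AI-generated; scores are the detector's confidence
# that each image is synthetic (made-up values for illustration).
labels = np.array([0, 0, 0, 1, 1, 1])
scores = np.array([0.05, 0.20, 0.35, 0.70, 0.88, 0.97])

print(roc_auc_score(labels, scores))  # 1.0: this toy detector ranks perfectly
```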

Our work with researchers on StyleGAN3 showcases and supports the important, cutting-edge research done by media forensics groups. We hope it inspires others in the image-synthesis research community to participate in forensics research as well.

Source code for NVIDIA StyleGAN3 is available on GitHub, as well as results and links for the detector collaboration discussed here. The paper behind the research can be found on arXiv.

The GAN detector collaboration is part of Semantic Forensics (SemaFor), a program focused on forensic analysis of media organized by DARPA, the U.S. federal agency for technology research and development.

To learn more about the latest in AI research, watch NVIDIA CEO Jensen Huang’s keynote presentation at GTC.


Inside the DPU: Talk Describes an Engine Powering Data Center Networks

The tech world this week gets its first look under the hood of the NVIDIA BlueField data processing unit. The chip established the DPU category last year, and it’s already being embraced by cloud services, supercomputers and many OEMs and software partners.

Idan Burstein, a principal architect leading our Israel-based BlueField design team, will describe the DPU’s architecture at Hot Chips, an annual conference that draws many of the world’s top microprocessor designers.

The talk will unveil a silicon engine for accelerating modern data centers. It’s an array of hardware accelerators and general-purpose Arm cores that speed networking, security and storage jobs.

Those jobs include virtualizing data center hardware while securing and smoothing the flow of network traffic. It’s work that involves accelerating in hardware a growing alphabet soup of tasks fundamental to running a data center, such as:

  • IPsec, TLS, AES-GCM, RegEx and Public Key Acceleration for security
  • NVMe-oF, RAID and GPUDirect Storage for storage
  • RDMA, RoCE, SR-IOV, VXLAN, VirtIO and GPUDirect RDMA for networking, and
  • Offloads for video streaming and time-sensitive communications

These workloads are growing faster than Moore’s law and already consume a third of server CPU cycles. DPUs pack purpose-built hardware to run these jobs more efficiently, making more CPU cores available for data center applications.

DPUs deliver virtualization and advanced security without compromising bare-metal performance. Their uses span the gamut from cloud computing and media streaming to storage, edge processing and high performance computing.

NVIDIA CEO Jensen Huang describes DPUs as “one of the three major pillars of computing going forward … The CPU is for general-purpose computing, the GPU is for accelerated computing and the DPU, which moves data around the data center, does data processing.”

A Full Plug-and-Play Stack

The good news for users is they don’t have to master the silicon details that may fascinate processor architects at Hot Chips. They can simply plug their existing software into familiar high-level software interfaces to harness the DPU’s power.

Those APIs are bundled into the DPU’s software stack called NVIDIA DOCA. It includes drivers, libraries, tools, documentation, example applications and a runtime environment for provisioning, deploying and orchestrating services on thousands of DPUs across the data center.

We’ve already received requests for early access to DOCA from hundreds of organizations, including several of the world’s industry leaders.

DOCA provides a software platform for rapid development of networking, storage and security applications on the DPU.

DPUs Deliver for Data Centers, Clouds

The architecture described at Hot Chips is moving into several of the world’s largest clouds as well as a TOP500 supercomputer and integrated with next-generation firewalls. It will soon be available in systems from several top OEMs supported with software from more than a dozen other partners.

Today, multiple cloud service providers around the world are using or preparing to deploy BlueField DPUs to provision compute instances securely.

BlueField Powers Supercomputers, Firewalls

The University of Cambridge tapped the DPU’s efficiencies to debut in June the U.K.’s fastest academic system, a supercomputer that hit No. 3 on the Green500 list of the world’s most energy-efficient systems.

It’s the world’s first cloud-native supercomputer, letting researchers share virtual resources with privacy and security while not compromising performance.

With the VM-Series Next-Generation Firewall from Palo Alto Networks, every data center can now access the DPU’s security capabilities. The VM-Series NGFW can be accelerated with BlueField-2 to inspect network flows that were previously impossible or impractical to track.

The DPU will soon be available in systems from ASUS, Atos, Dell Technologies, Fujitsu, GIGABYTE, H3C, Inspur, Quanta/QCT and Supermicro, several of which announced plans at Computex in May.

More than a dozen software partners will support the NVIDIA BlueField DPUs, including:

  • VMware, with Project Monterey, which introduces DPUs to the more than 300,000 organizations that rely on VMware for its speed, resilience and security.
  • Red Hat, with an upcoming developer’s kit for Red Hat Enterprise Linux and Red Hat OpenShift, used by 95 percent of the Fortune 500.
  • Canonical, in Ubuntu Linux, the most popular operating system among public clouds.
  • Check Point Software Technologies, in products used by more than 100,000 organizations worldwide to prevent cyberattacks.

Other partners include Cloudflare, DDN, Excelero, F5, Fortinet, Guardicore, Juniper Networks, NetApp, Vast Data and WekaIO.

The support is broad because the opportunity is big.

“Every single networking chip in the world will be a smart networking chip … And that’s what the DPU is. It’s a data center on a chip,” said Collette Kress, NVIDIA’s CFO, in a May earnings call, predicting every server will someday sport a DPU.

DPU-Powered Networks on the Horizon

Market watchers at Dell’Oro Group forecast that shipments of smart networking ports will grow from 4.4 million in 2020 to 7.4 million by 2025.

Gearing up for that growth, NVIDIA announced at GTC its roadmap for the next two generations of DPUs.

The BlueField-3, sampling next year, will drive networks up to 400 Gbit/second and pack the muscle of 300 x86 cores. The BlueField-4 will deliver an order of magnitude more performance with the addition of NVIDIA AI computing technologies.

What’s clear from the market momentum and this week’s Hot Chips talk is that, just as it has in AI, NVIDIA is now setting the pace in accelerated networking.


Make History This GFN Thursday: ‘HUMANKIND’ Arrives on GeForce NOW

This GFN Thursday brings in the highly anticipated magnum opus from SEGA and Amplitude Studios, HUMANKIND, as well as exciting rewards to redeem for members playing Eternal Return.

There are also updates on the newest Fortnite Season 7 game mode, “Impostors,” streaming on GeForce NOW.

Plus, there are nine games in total coming to the cloud this week.

The Future is in Your Hands

It’s time to make history. The exciting new turn-based historical strategy game HUMANKIND released this week and is streaming on GeForce NOW.

In HUMANKIND, you’ll be rewriting the entire narrative of human history and combining cultures to create a civilization as unique as you are. Combine up to 60 historical cultures as you lead your people from the Ancient to the Modern Age. From humble origins as a Neolithic tribe, transition to the Ancient Era as the Babylonians, become the Classical era Mayans, the Medieval Umayyads, the Early Modern-era British, and so on. Create a custom leader from these backgrounds to pave the way to the future.

Players will encounter historical events and make impactful moral decisions to develop the world as they see fit. Explore the natural wonders, discover scientific breakthroughs and make remarkable creations to leave your mark on the world. Master tactical turn-based battles and command your assembled armies to victory against strangers and friends in multiplayer matches of up to eight players. For every discovery, every battle and every deed, players gain fame — and the player with the most fame wins the game.

As an awesome extra, unlock unique characters based on popular content creators, like GeForce NOW streamer BurkeBlack, by watching their HUMANKIND streams for unique drops.

Create a civilization that’s as unique as you are and become the most famous leader in history.

Gamers have been eagerly anticipating the release of HUMANKIND, and members will be able to experience this awesome new PC game when streaming on low-powered PCs, Macs, Chromebooks, SHIELD TVs or Android and iOS mobile devices with the power of GeForce NOW.

“GeForce NOW can invite even more players to experience the HUMANKIND journey,” said Romain de Waubert, studio head and chief creative officer at Amplitude Studios. “The service quickly and easily brings gamers into HUMANKIND with beautiful PC graphics on nearly any device.”

Tell your story your way. Play HUMANKIND this week on GeForce NOW and determine where you’ll take history.

Reap the Rewards

Playing games on GeForce NOW is great, and so is getting rewarded for playing.

Members can enjoy awesome skin and emote rewards in Eternal Return.

The GeForce NOW rewards program is always looking to give members access to awesome rewards. This week brings a custom skin and custom emote for Eternal Return.

Getting rewarded for streaming games on the cloud is easy. Members should make sure to check the box for Rewards in the GeForce NOW account portal and opt in to receive newsletters for future updates and upcoming reward spoils.

Impostors Infiltrate Fortnite

Chapter 2 Season 7 of Fortnite also delivered a thrilling new game mode. Members can play Fortnite “Impostors,” which was released on August 17.

Play in matches of four to 10 players, Agents versus Impostors, on a brand-new map: The Bridge. Agents win by completing minigame assignments to fill their progress bar, or by revealing all Impostors hiding among the team through calling discussions and voting out suspicious behavior.

While keeping their identity a secret, up to two Impostors will seek to eliminate enough Agents to overtake The Bridge. They can hide their status by completing assignments, which will benefit the progress of the Agent team, and have sneaky sabotage abilities to create chaos.

Whether playing as an Agent or as an Impostor, this game is set to be a great time. Stream it today on GeForce NOW.

It’s Game Time

Ride the world’s most powerful motorbikes in RiMS Racing this week on GeForce NOW.

As always, GFN Thursday means new games coming to the cloud every week. Members can look forward to being able to stream these nine titles joining the GeForce NOW library:

With all of these new games, it’s always a good time to play. Speaking of time, we’ve got a question about your favorite games:

past, present, or future: what’s your favorite time period to play in?

🌩 NVIDIA GeForce NOW (@NVIDIAGFN), August 18, 2021

Let us know on Twitter or in the comments below.
