Catch Some Rays This GFN Thursday With ‘Jurassic World Evolution 2’ and ‘Bright Memory: Infinite’ Game Launches

This week’s GFN Thursday packs a prehistoric punch with the release of Jurassic World Evolution 2. It also gets infinitely brighter with the release of Bright Memory: Infinite.

Both games feature NVIDIA RTX technologies and are part of the six titles joining the GeForce NOW library this week.

GeForce NOW RTX 3080 members will get the peak cloud gaming experience in these titles and more. In addition to RTX ON, they’ll stream both games at up to 1440p and 120 frames per second on PC and Mac, and at up to 4K on SHIELD.

Preorders for six-month GeForce NOW RTX 3080 memberships are currently available in North America and Europe for $99.99. Sign up today to be among the first to experience next-generation gaming.

The Latest Tech, Streaming From the Cloud

GeForce RTX GPUs give PC gamers the best visual quality and highest frame rates. They also power NVIDIA RTX technologies. And with GeForce RTX 3080-class GPUs making their way to the cloud in the GeForce NOW SuperPOD, the most advanced platform for ray tracing and AI is now available across nearly any low-powered device.

GeForce NOW SuperPOD
The next generation of cloud gaming is powered by the GeForce NOW SuperPOD, built on the second-gen RTX, NVIDIA Ampere architecture.

Real-time ray tracing creates the most realistic and immersive graphics in supported games, rendering environments in cinematic quality. NVIDIA DLSS gives games a speed boost with uncompromised image quality, thanks to advanced AI.

With GeForce NOW’s Priority and RTX 3080 memberships, gamers can take advantage of these features in numerous top games, including new releases like Jurassic World Evolution 2 and Bright Memory: Infinite.

The added performance from the latest generation of NVIDIA GPUs also means GeForce NOW RTX 3080 members have exclusive access to stream at up to 1440p at 120 FPS on PC, 1600p at 120 FPS on most MacBooks, 1440p at 120 FPS on most iMacs, 4K HDR at 60 FPS on NVIDIA SHIELD TV and up to 120 FPS on select Android devices.

Welcome to …

Immerse yourself in a world evolved in a compelling, original story, experience the chaos of “what-if” scenarios from the iconic Jurassic World and Jurassic Park films and discover over 75 awe-inspiring dinosaurs, including brand-new flying and marine reptiles. Play with support for NVIDIA DLSS this week on GeForce NOW.

GeForce NOW gives your low-end rig the power to play Jurassic World Evolution 2 with even higher graphics settings thanks to NVIDIA DLSS, streaming from the cloud.

Blinded by the (Ray-Traced) Light

FYQD-studio, a one-man development team that released Bright Memory in 2020, is back with a full-length sequel, Bright Memory: Infinite, streaming from the cloud with RTX ON.

Bright Memory: Infinite combines the FPS and action genres with dazzling visuals, amazing set pieces and exciting action. Mix and match available skills and abilities to unleash magnificent combos on enemies. Cut through the opposing forces with your sword, or lock and load with ranged weaponry, customized with a variety of ammunition. The choice is yours.

Priority and GeForce NOW RTX 3080 members can experience every moment of the action the way FYQD-studio intended, gorgeously rendered with ray-traced reflections, ray-traced shadows, ray-traced caustics and dazzling RTX Global Illumination. And GeForce NOW RTX 3080 members can play at up to 1440p and 120 FPS on PC and Mac.

Never Run Out of Gaming

GFN Thursday always means more games.

Members can find these six new games streaming on the cloud this week:

  • Bright Memory: Infinite (new game launch on Steam)
  • Epic Chef (new game launch on Steam)
  • Jurassic World Evolution 2 (new game launch on Steam and Epic Games Store)
  • MapleStory (Steam)
  • Severed Steel (Steam)
  • Tale of Immortal (Steam)

We make every effort to launch games on GeForce NOW as close to their release as possible, but, in some instances, games may not be available immediately.

What are you planning to play this weekend? Let us know on Twitter or in the comments below.


How Researchers Use NVIDIA AI to Help Mitigate Misinformation

Researchers tackling the challenge of visual misinformation — think the TikTok video of Tom Cruise supposedly golfing in Italy during the pandemic — must continuously advance their tools to identify AI-generated images.

NVIDIA is furthering this effort by collaborating with researchers to support the development and testing of detector algorithms on our state-of-the-art image-generation models.

By crafting a dataset of highly realistic images with StyleGAN3 — our latest, state-of-the-art media generation algorithm — NVIDIA provided crucial information to researchers evaluating how well their detector algorithms work on AI-generated images created by previously unseen techniques. These detectors help experts identify and analyze synthetic images to combat visual misinformation.

At this week’s NVIDIA GTC, this work was shared in a session titled “Alias-Free Generative Adversarial Networks,” which provided an overview of StyleGAN3. To watch on demand, register free for GTC.

“This has been a unique situation in that people doing image generation detection have worked closely with the people at NVIDIA doing image generation,” said Edward Delp, a professor at Purdue University and principal investigator of one of the research teams. “This collaboration with NVIDIA has allowed us to build even better and more robust detectors. The ‘early access’ approach used by NVIDIA is an excellent way to further forensics research.”

Advancing Media Forensics With StyleGAN3 Images

When researchers know the underlying code or neural network of an image-generation technique, developing a detector that can identify images created by that AI model is a comparatively straightforward task.

It’s more challenging — and useful — to build a detector that can spot images generated by brand-new AI models.

StyleGAN3, a model developed by NVIDIA Research that will be presented at the NeurIPS 2021 AI conference in December, advances the state of the art in generative adversarial networks used to synthesize images. The breakthrough brings principles from signal processing and image processing to GANs to avoid aliasing: a kind of image corruption often visible when images are rotated, scaled or translated.

NVIDIA researchers developed StyleGAN3 using a publicly released dataset of 70,000 images. Another 27,000 unreleased images from that collection, alongside AI-generated images from StyleGAN3, were shared with forensic research collaborators as a test dataset.

The collaboration with researchers enabled the community to assess how a diversity of detector approaches performs in identifying images synthesized by StyleGAN3 — before the generator’s code was publicly released.

These detectors work in many different ways: Some may look for telltale correlations among groups of pixels produced by the neural network, while others might look for inconsistencies or asymmetries that give away synthetic images. Yet others attempt to reverse engineer the synthesis approach to estimate if a particular neural network could have created the image.

One of these detectors, GAN-Scanner, reaches up to 95 percent accuracy in identifying synthetic images generated with StyleGAN3, despite never having seen an image created by that model during training. Another detector, created by Politecnico di Milano, achieves an area under the curve (AUC) of 0.999, where a perfect classifier would achieve an AUC of 1.0.
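For readers curious how such scores are computed, below is a minimal sketch of evaluating a hypothetical detector with scikit-learn. The labels and detector outputs are made-up stand-ins; this illustrates the metrics, not the researchers’ actual code.

```python
# Minimal sketch: scoring a hypothetical synthetic-image detector.
# y_true holds ground truth (1 = GAN-generated, 0 = real photo);
# y_score holds the detector's confidence that each image is synthetic.
import numpy as np
from sklearn.metrics import roc_auc_score

y_true = np.array([0, 0, 0, 1, 1, 1])
y_score = np.array([0.05, 0.20, 0.45, 0.80, 0.93, 0.99])

auc = roc_auc_score(y_true, y_score)           # 1.0 = perfect classifier
accuracy = np.mean((y_score > 0.5) == y_true)  # simple thresholded accuracy
print(f"AUC: {auc:.3f}, accuracy: {accuracy:.1%}")
```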

Our work with researchers on StyleGAN3 showcases and supports the important, cutting-edge research done by media forensics groups. We hope it inspires others in the image-synthesis research community to participate in forensics research as well.

Source code for NVIDIA StyleGAN3 is available on GitHub, as well as results and links for the detector collaboration discussed here. The paper behind the research can be found on arXiv.

The GAN detector collaboration is part of Semantic Forensics (SemaFor), a program focused on forensic analysis of media organized by DARPA, the U.S. federal agency for technology research and development.

To learn more about the latest in AI research, watch NVIDIA CEO Jensen Huang’s keynote presentation at GTC below.


Inside the DPU: Talk Describes an Engine Powering Data Center Networks

The tech world this week gets its first look under the hood of the NVIDIA BlueField data processing unit. The chip, which created the DPU category when it debuted last year, is already being embraced by cloud services, supercomputers and many OEM and software partners.

Idan Burstein, a principal architect leading our Israel-based BlueField design team, will describe the DPU’s architecture at Hot Chips, an annual conference that draws many of the world’s top microprocessor designers.

The talk will unveil a silicon engine for accelerating modern data centers. It’s an array of hardware accelerators and general-purpose Arm cores that speed networking, security and storage jobs.

Those jobs include virtualizing data center hardware while securing and smoothing the flow of network traffic. It’s work that involves accelerating in hardware a growing alphabet soup of tasks fundamental to running a data center, such as:

  • IPsec, TLS, AES-GCM, RegEx and Public Key Acceleration for security
  • NVMe-oF, RAID and GPUDirect Storage for storage
  • RDMA, RoCE, SR-IOV, VXLAN, VirtIO and GPUDirect RDMA for networking, and
  • Offloads for video streaming and time-sensitive communications

These workloads are growing faster than Moore’s law and already consume a third of server CPU cycles. DPUs pack purpose-built hardware to run these jobs more efficiently, making more CPU cores available for data center applications.
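To give a flavor of one task on that list, here’s a minimal, purely illustrative sketch of AES-GCM authenticated encryption running in software on the CPU, using Python’s cryptography package; a BlueField DPU handles this same class of per-packet work in dedicated hardware so server cores don’t have to.

```python
# Illustrative only: the kind of per-packet crypto work a DPU offloads,
# performed here in software on the CPU.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # one-time session key
aesgcm = AESGCM(key)

nonce = os.urandom(12)                      # must be unique per packet
payload = b"example packet payload"
header = b"example header"                  # authenticated, not encrypted

ciphertext = aesgcm.encrypt(nonce, payload, header)
assert aesgcm.decrypt(nonce, ciphertext, header) == payload
```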

DPUs deliver virtualization and advanced security without compromising bare-metal performance. Their uses span the gamut from cloud computing and media streaming to storage, edge processing and high performance computing.

NVIDIA CEO Jensen Huang describes DPUs as “one of the three major pillars of computing going forward … The CPU is for general-purpose computing, the GPU is for accelerated computing and the DPU, which moves data around the data center, does data processing.”

A Full Plug-and-Play Stack

The good news for users is they don’t have to master the silicon details that may fascinate processor architects at Hot Chips. They can simply plug their existing software into familiar high-level software interfaces to harness the DPU’s power.

Those APIs are bundled into the DPU’s software stack called NVIDIA DOCA. It includes drivers, libraries, tools, documentation, example applications and a runtime environment for provisioning, deploying and orchestrating services on thousands of DPUs across the data center.

We’ve already received requests for early access to DOCA from hundreds of organizations, including several of the world’s industry leaders.

DOCA DPU software stack
DOCA provides a software platform for rapid development of networking, storage and security applications on the DPU.

DPUs Deliver for Data Centers, Clouds

The architecture described at Hot Chips is moving into several of the world’s largest clouds, as well as a TOP500 supercomputer, and is being integrated with next-generation firewalls. It will soon be available in systems from several top OEMs, supported with software from more than a dozen other partners.

Today, multiple cloud service providers around the world are using or preparing to deploy BlueField DPUs to provision compute instances securely.

BlueField Powers Supercomputers, Firewalls

The University of Cambridge tapped into the DPU’s efficiencies to debut in June the fastest academic system in the U.K., a supercomputer that hit No. 3 on the Green500 list of the world’s most energy-efficient systems.

It’s the world’s first cloud-native supercomputer, letting researchers share virtual resources with privacy and security while not compromising performance.

With the VM-Series Next-Generation Firewall from Palo Alto Networks, every data center can now access the DPU’s security capabilities. The VM-Series NGFW can be accelerated with BlueField-2 to inspect network flows that were previously impossible or impractical to track.

The DPU will soon be available in systems from ASUS, Atos, Dell Technologies, Fujitsu, GIGABYTE, H3C, Inspur, Quanta/QCT and Supermicro, several of which announced plans at Computex in May.

More than a dozen software partners will support the NVIDIA BlueField DPUs, including:

  • VMware, with Project Monterey, which introduces DPUs to the more than 300,000 organizations that rely on VMware for its speed, resilience and security.
  • Red Hat, with an upcoming developer’s kit for Red Hat Enterprise Linux and Red Hat OpenShift, used by 95 percent of the Fortune 500.
  • Canonical, in Ubuntu Linux, the most popular operating system among public clouds.
  • Check Point Software Technologies, in products used by more than 100,000 organizations worldwide to prevent cyberattacks.

Other partners include Cloudflare, DDN, Excelero, F5, Fortinet, Guardicore, Juniper Networks, NetApp, Vast Data and WekaIO.

The support is broad because the opportunity is big.

“Every single networking chip in the world will be a smart networking chip … And that’s what the DPU is. It’s a data center on a chip,” said Colette Kress, NVIDIA’s CFO, in a May earnings call, predicting every server will someday sport a DPU.

DPU-Powered Networks on the Horizon

Market watchers at Dell’Oro Group forecast that the number of smart networking ports shipped will grow from 4.4 million in 2020 to 7.4 million by 2025, an increase of nearly 70 percent.

Gearing up for that growth, NVIDIA announced at GTC its roadmap for the next two generations of DPUs.

The BlueField-3, sampling next year, will drive networks up to 400 Gbit/second and pack the muscle of 300 x86 cores. The BlueField-4 will deliver an order of magnitude more performance with the addition of NVIDIA AI computing technologies.

What’s clear from the market momentum and this week’s Hot Chips talk is that, just as it has in AI, NVIDIA is now setting the pace in accelerated networking.


Make History This GFN Thursday: ‘HUMANKIND’ Arrives on GeForce NOW

This GFN Thursday brings in the highly anticipated magnum opus from SEGA and Amplitude Studios, HUMANKIND, as well as exciting rewards to redeem for members playing Eternal Return.

There are also updates on the newest Fortnite Season 7 game mode, “Impostors,” streaming on GeForce NOW.

Plus, there are nine games in total coming to the cloud this week.

The Future is in Your Hands

It’s time to make history. The exciting new turn-based historical strategy game HUMANKIND released this week and is streaming on GeForce NOW.

In HUMANKIND, you’ll be rewriting the entire narrative of human history and combining cultures to create a civilization as unique as you are. Combine up to 60 historical cultures as you lead your people from the Ancient to the Modern Age. From humble origins as a Neolithic tribe, transition to the Ancient Era as the Babylonians, become the Classical era Mayans, the Medieval Umayyads, the Early Modern-era British, and so on. Create a custom leader from these backgrounds to pave the way to the future.

Players will encounter historical events and make impactful moral decisions to develop the world as they see fit. Explore the natural wonders, discover scientific breakthroughs and make remarkable creations to leave your mark on the world. Master tactical turn-based battles and command your assembled armies to victory against strangers and friends in multiplayer matches of up to eight players. For every discovery, every battle and every deed, players gain fame — and the player with the most fame wins the game.

As an awesome extra, unlock unique characters based on popular content creators, like GeForce NOW streamer BurkeBlack, by watching their HUMANKIND streams for exclusive drops.

HUMANKIND on GeForce NOW
Create a civilization that’s as unique as you are and become the most famous leader in history.

Gamers have been eagerly anticipating the release of HUMANKIND, and members will be able to experience this awesome new PC game when streaming on low-powered PCs, Macs, Chromebooks, SHIELD TVs or Android and iOS mobile devices with the power of GeForce NOW.

“GeForce NOW can invite even more players to experience the HUMANKIND journey,” said Romain de Waubert, studio head and chief creative officer at Amplitude Studios. “The service quickly and easily brings gamers into HUMANKIND with beautiful PC graphics on nearly any device.”

Tell your story your way. Play HUMANKIND this week on GeForce NOW and determine where you’ll take history.

Reap the Rewards

Playing games on GeForce NOW is great, and so is getting rewarded for playing.

Eternal Return on GeForce NOW
Members can enjoy awesome skin and emote rewards in Eternal Return.

The GeForce NOW rewards program is always looking to give members access to awesome rewards. This week brings a custom skin and custom emote for Eternal Return.

Getting rewarded for streaming games on the cloud is easy. Members should make sure to check the box for Rewards in the GeForce NOW account portal and opt in to receive newsletters for future updates and upcoming reward spoils.

Impostors Infiltrate Fortnite

Chapter 2 Season 7 of Fortnite also delivered a thrilling new game mode. Members can play Fortnite “Impostors,” which was released on August 17.

Play in matches of four to 10 players, Agents versus Impostors, on a brand-new map, The Bridge. Agents win by completing minigame assignments to fill their progress bar, or by revealing all Impostors hiding among the team, calling discussions and voting out suspects.

While keeping their identity a secret, up to two Impostors will seek to eliminate enough Agents to overtake The Bridge. They can hide their status by completing assignments, which will benefit the progress of the Agent team, and have sneaky sabotage abilities to create chaos.

Whether playing as an Agent or as an Impostor, this game is set to be a great time. Stream it today on GeForce NOW.

It’s Game Time

RiMS Racing on GeForce NOW
Ride the world’s most powerful motorbikes in RiMS Racing this week on GeForce NOW.

As always, GFN Thursday means new games coming to the cloud every week. Members can look forward to streaming these nine titles joining the GeForce NOW library:

With all of these new games, it’s always a good time to play. Speaking of time, we’ve got a question about your favorite games:

“past, present, or future

what’s your favorite time period to play in?”

🌩 NVIDIA GeForce NOW (@NVIDIAGFN), August 18, 2021

Let us know on Twitter or in the comments below.


Big Computer on Campus: Universities Graduate to AI Super Systems

This back-to-school season, many universities are powering on brand new AI supercomputers. Researchers and students working in fields from basic science to liberal arts can’t wait to log on.

“They would like to use it right now,” said James Wilgenbusch, director of research computing at the University of Minnesota, speaking of Agate, an accelerated supercomputer Hewlett Packard Enterprise is building.

It’s one of at least eight new academic systems lighting up around the world — four in America’s heartland and two in the U.K.

Before the semester’s end, Agate will deliver seven petaflops of oomph. It will crunch through “research from socio-economic trends to celestial objects — it really will serve the full gamut,” he said of the system, to be housed at the Minnesota Supercomputing Institute (MSI), which will link 265 NVIDIA A100 Tensor Core GPUs on an NVIDIA HDR 200Gb/s InfiniBand network.

Agate will serve about 4,500 users, working under a thousand principal investigators who since January have already run a whopping 138,612 GPU-accelerated jobs on MSI’s existing systems.

Agate supercomputer at MSI
Getting fired up: The Agate supercomputer in Chippewa Falls undergoes burn-in testing. (Picture courtesy HPE)

“We’re seeing annual user growth, the greatest amount of it in life sciences and liberal arts — fields like geology, history, poli-sci, marketing — anywhere people have vast quantities of unstructured data and they’re attempting to make sense of it,” he said.

AI Supercomputer Helps Fight COVID

Demonstrating the power of accelerated computing, the Minnesota Department of Health reserved a portion of MSI’s system in its fight against COVID-19. It’s sequencing genomes for contact tracing and to track variants of the coronavirus.

“Collaborations like this make the role of universities in innovation and life saving more obvious to the public,” said Wilgenbusch, pointing to articles in a Minneapolis newspaper.

Virtual GPUs Power Indiana Classrooms

Some 600 miles southeast, Indiana University (IU) is standing up two AI supercomputers packing a total of 616 A100 GPUs.

Big Red 200, built by Hewlett Packard Enterprise, will serve the nine IU campuses. Jetstream-2, built by Dell Technologies, will power work at several partner institutions from Cornell to the University of Hawaii.

Tapping the A100’s ability to offer fractions of a processor, Jetstream-2 will host classes with hundreds of students, each using a slice of a GPU’s performance to learn popular AI skills like image classification. One IU researcher presented a paper last November benchmarking the virtual GPU capability.

“Now whole classrooms can be trained in one go, so more people get access,” said Winona Snapp-Childs, chief operating officer of IU’s Pervasive Technology Institute and leader of an AI-for-everyone initiative.

A Vision of Ubiquitous AI

More than 2,500 students use IU’s current GPU-accelerated systems. They ran more than 40 percent of the work for the university’s record $1 billion of research contracts and grants spread across 178 departments last year.

“Funding agencies realize the importance of machine learning in academic fields across the spectrum,” said Snapp-Childs.

“AI and accelerated computing help push the boundaries of science, and I can imagine they will come to handle half of our research over the next 5 to 10 years as these techniques become ubiquitous and imperative for research,” she added.

The work spans a spectrum that can set your head spinning. Researchers are tapping AI for everything from tracking down COVID misinformation on social networks to studying the genome of rice to improve harvests.

Delta Pioneers Accessible Supercomputing

Next door, the National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign is expanding use of accelerated computing with Delta, an AI supercomputer packing more than 800 A100 GPUs.

“We will help emerging research areas such as computational archaeology and digital agriculture take advantage of new computing methods and hardware while making advanced systems more usable and accessible to a broad community of researchers,” said William Gropp, a principal investigator and NCSA director who oversees Delta.

The system is one way the National Science Foundation is spreading GPU-based computing as a common tool for accelerating research. The work includes an initiative to make Delta and future systems more accessible to people with disabilities.

Florida Spreads the AI Sunshine

A thousand miles south, the University of Florida’s HiPerGator AI system provides another shining example of accelerated computing.

In a recent article in the Gainesville Sun, provost Joe Glover said the system will spread AI skills much like Henry Ford’s first assembly line made cars affordable for Americans. The university aims to add 100 AI-focused faculty to make machine learning ubiquitous across its curriculum with a stated goal of creating 30,000 AI-enabled graduates by 2030.

HiPerGator AI linked a whopping 1,120 A100 GPUs on an HDR 200Gb/s InfiniBand network to take the No. 22 spot in the latest TOP500 list of the world’s fastest supercomputers. It was built in just a few weeks thanks to its use of the NVIDIA DGX SuperPOD reference architecture, a recipe for stacking NVIDIA DGX systems in Lego-like style.

Studying Abroad: AI Supercomputing’s Far Reach

These five AI supercomputers represent just a few peaks in a rising range that crisscrosses the U.S. and Europe.

  • At Lawrence Berkeley National Laboratory, researchers just turned on Perlmutter, the world’s fifth-fastest system, packing 6,144 A100 GPUs.
  • The University of Cambridge debuted CSD3, a cloud-native supercomputer built on Dell EMC PowerEdge, which is now the fastest academic system in the U.K. and hit No. 3 on the Green500 list of the world’s most energy-efficient systems.
  • The University of Edinburgh is building a system with 448 A100 GPUs, the latest in the four-system network run by the DiRAC research group in the U.K.
  • And Linköping University is now home to Sweden’s largest supercomputer, BerzeLiUs, which will serve a national AI initiative and be shared with researchers at Singapore’s Nanyang Technological University.

They are among high-performance systems sprinkled around the world, advancing science with machine learning and accelerated computing.

Photo at top: From left, Winona Snapp-Childs and Sheri Sanders, Director of the National Center for Genome Analysis Support, give students Christine Campbell and Lyric Cooper a tour of the Jetstream data center at Indiana University.


What Is a Machine Learning Model?

When you shop for a car, the first question is what model — a Honda Civic for low-cost commuting, a Chevy Corvette for looking good and moving fast, or maybe a Ford F-150 to tote heavy loads.

For the journey to AI, the most transformational technology of our time, the engine you need is a machine learning model.

What Is a Machine Learning Model?

A machine learning model is an expression of an algorithm that combs through mountains of data to find patterns or make predictions. Fueled by data, machine learning (ML) models are the mathematical engines of artificial intelligence.

For example, an ML model for computer vision might be able to identify cars and pedestrians in a real-time video. One for natural language processing might translate words and sentences.

Under the hood, a model is a mathematical representation of objects and their relationships to each other. The objects can be anything from “likes” on a social networking post to molecules in a lab experiment.
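To make that concrete, here’s a toy sketch of the idea: a linear model fit with scikit-learn on invented numbers standing in for real data.

```python
# Minimal sketch: a model is a mathematical function fit to data.
# Hypothetical example: predicting sales from advertising spend.
import numpy as np
from sklearn.linear_model import LinearRegression

X = np.array([[1.0], [2.0], [3.0], [4.0]])   # feature: ad spend ($k)
y = np.array([2.1, 3.9, 6.2, 8.1])           # target: sales ($k)

model = LinearRegression().fit(X, y)          # learns a weight and a bias
print(model.coef_[0], model.intercept_)       # the learned relationship
print(model.predict(np.array([[5.0]])))       # prediction for unseen input
```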

ML Models for Every Purpose

With no constraints on the objects that can become features in an ML model, there’s no limit to the uses for AI. The combinations are infinite.

Data scientists have created whole families of machine learning models for different uses, and more are in the works.

A Brief Taxonomy of ML Models

  • Linear regression/classification: patterns in numeric data, such as financial spreadsheets
  • Graphical models: fraud detection or sentiment awareness
  • Decision trees/random forests: predicting outcomes
  • Deep learning neural networks: computer vision, natural language processing and more

For instance, linear models use algebra to predict relationships between variables in financial projections. Graphical models express probabilities as diagrams, such as the likelihood that a consumer will buy a product. Borrowing the metaphor of branches, some ML models take the form of decision trees or groups of them called random forests.
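As a quick illustration of one row of the taxonomy above, the sketch below fits a random forest to predict outcomes; the dataset is generated on the fly, so it stands in for any real use case.

```python
# Minimal sketch: a random forest (an ensemble of decision trees)
# predicting outcomes on synthetic data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

forest = RandomForestClassifier(n_estimators=100, random_state=0)
forest.fit(X_train, y_train)
print(f"Held-out accuracy: {forest.score(X_test, y_test):.2f}")
```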

In the Big Bang of AI in 2012, researchers found deep learning to be one of the most successful techniques for finding patterns and making predictions. It uses a kind of machine learning model called a neural network because it was inspired by the patterns and functions of brain cells.

An ML Model for the Masses

Deep learning took its name from the structure of its machine learning models. They stack layer upon layer of features and their relationships, forming a mathematical hero sandwich.

Thanks to their uncanny accuracy in finding patterns, two kinds of deep learning models, described in a separate explainer, are appearing everywhere.

Convolutional neural networks (CNNs), often used in computer vision, act like eyes in autonomous vehicles and can help spot diseases in medical imaging. Recurrent neural networks (RNNs) and transformers, tuned to analyze spoken and written language, are the engines of Amazon’s Alexa, Google’s Assistant and Apple’s Siri.

Diagram showing how a deep neural network sees.
Deep learning neural networks got their name from their multilayered structure.
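For a sense of what those stacked layers look like in code, here’s a toy CNN written in PyTorch; it’s an illustrative stand-in, far smaller than any production vision model.

```python
# Minimal sketch: a tiny convolutional neural network for images.
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # learn local patterns
            nn.ReLU(),
            nn.MaxPool2d(2),                             # downsample
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                     # global pooling
        )
        self.classifier = nn.Linear(32, num_classes)     # top layer of the "sandwich"

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

logits = TinyCNN()(torch.randn(1, 3, 32, 32))  # one fake 32x32 RGB image
print(logits.shape)                            # torch.Size([1, 10])
```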

Pssssst, Pick a Pretrained Model

Choosing the right family of models — like a CNN, RNN or transformer — is a great beginning. But that’s just the start.

If you want to ride the Baja 500, you can modify a stock dune buggy with heavy-duty shocks and rugged tires, or you can shop for a vehicle built for that race.

In machine learning, that’s what’s called a pretrained model. It’s tuned on large sets of training data that are similar to data in your use case. Data relationships — called weights and biases — are optimized for the intended application.

It takes an enormous dataset, a lot of AI expertise and significant compute muscle to train a model. Savvy buyers shop for pretrained models to save time and money.
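The general pattern looks like the sketch below, shown with a torchvision model purely to illustrate the idea; it is not the NGC catalog or TAO workflow itself, and it assumes torchvision 0.13 or newer.

```python
# Minimal sketch of reusing a pretrained model: keep the learned
# features, retrain only a new head for your task. Illustrative only.
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False                 # freeze learned features

model.fc = nn.Linear(model.fc.in_features, 5)   # new head for a 5-class task
# During fine-tuning, only model.fc's weights are updated.
```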

Who Ya Gonna Call?

When you’re shopping for a pretrained model, find a dealer you can trust.

NVIDIA puts its name behind an online library called the NGC catalog that’s filled with vetted, pretrained models. They span the spectrum of AI jobs, from computer vision to conversational AI and more.

Users know what they’re getting because models in the catalog come with résumés. They’re like the credentials of a prospective hire.

Model résumés show you the domain the model was trained for, the dataset that trained it and how it’s expected to perform. They provide transparency and confidence that you’re picking the right model for your use case.

More Resources for ML Models

What’s more, NGC models are ready for transfer learning. That’s the one final tune-up that torques models for the exact road conditions over which they’ll ride — your application’s data.

NVIDIA even provides the wrench to tune your NGC model. It’s called TAO and you can sign up for early access to it today.

To learn more, check out:


NVIDIA Brings Metaverse Momentum, Research Breakthroughs and New Pro GPU to SIGGRAPH 

Award-winning research, stunning demos, a sweeping vision for how NVIDIA Omniverse will accelerate the work of millions more professionals, and a new pro RTX GPU were the highlights at this week’s SIGGRAPH pro graphics conference.

Kicking off the week, NVIDIA’s SIGGRAPH special address, featuring Richard Kerris, vice president of Omniverse, and Sanja Fidler, senior director of AI research, with an intro by Pixar co-founder Alvy Ray Smith, gathered more than 1.6 million views in just 48 hours.

A documentary launched Wednesday, “Connecting in the Metaverse: The Making of the GTC Keynote,” a behind-the-scenes view of how a small team of artists blurred the lines between real and rendered in NVIDIA’s GTC21 keynote, achieved more than 360,000 views within its first 24 hours.

In all, NVIDIA brought together professionals from every corner of the industry, hosting over 12 sessions and launching 22 demos this week.

Among the highlights:

It was a week packed with innovations, many captured in a new sizzle reel crammed with new technologies.

Sessions from the NVIDIA Deep Learning Institute brought the latest ideas to veteran developers and students alike.

And the inaugural gathering of the NVIDIA Omniverse User Group brought more than 400 graphics professionals from all over the world together to learn what’s coming next for Omniverse, to celebrate the work of the community and to announce the winners of the second #CreatewithMarbles: Marvelous Machine contest.

“Your work fuels what we do,” Rev Lebaredian, vice president of Omniverse engineering and simulation at NVIDIA, told the scores of Omniverse users gathered for the event.

NVIDIA has been part of the SIGGRAPH community since 1993, with close to 150 papers accepted and NVIDIA employees leading more than 200 technical talks.

And SIGGRAPH has been the venue for some of NVIDIA’s biggest announcements — from OptiX in 2010 to the launch of NVIDIA RTX real-time ray tracing in 2018.

NVIDIA RTX A2000 Makes RTX More Accessible to More Pros

Since then, thanks to its powerful real-time ray tracing and AI acceleration capabilities, NVIDIA RTX technology has transformed design and visualization workflows for the most complex tasks.

Introduced Tuesday, the new NVIDIA RTX A2000 — our most compact, power-efficient GPU — makes it easier to access RTX from anywhere. The A2000’s unique packaging lets many new form factors, from the backs of displays to edge devices, incorporate RTX technology.

The RTX A2000 is designed for everyday workflows, so more professionals can develop photorealistic renderings, build physically accurate simulations and use AI-accelerated tools.

The GPU has 6GB of memory capacity with error correction code, or ECC, to maintain data integrity for uncompromised computing accuracy and reliability.

With remote work part of the new normal, simultaneous collaboration with colleagues on projects across the globe is critical.

NVIDIA RTX technology powers Omniverse, our collaboration and simulation platform that enables teams to iterate together on a single 3D design in real time while working across different software applications.

The A2000 will serve as a portal into this world for millions of designers.

Building the Metaverse

NVIDIA also announced a major expansion of NVIDIA Omniverse — the world’s first simulation and collaboration platform — through new integrations with Blender and Adobe that will open it to millions more users.

Omniverse makes it possible for designers, artists and reviewers to work together in real-time across leading software applications in a shared virtual world from anywhere.

Blender, the world’s leading open-source 3D animation tool, will now have Universal Scene Description, or USD, support, enabling artists to access Omniverse production pipelines.

Adobe is collaborating with NVIDIA on a Substance 3D plugin that will bring Substance Material support to Omniverse, unlocking new material editing capabilities for Omniverse and Substance 3D users.
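Underpinning both integrations is USD. For a taste of what the format looks like to a developer, here’s a small sketch using Pixar’s pxr Python bindings (assuming the usd-core package is installed); it writes a simple scene file that any USD-aware tool, Omniverse included, can open.

```python
# Minimal sketch: authoring a Universal Scene Description (USD) file.
from pxr import Usd, UsdGeom

stage = Usd.Stage.CreateNew("scene.usda")         # text-based USD stage
world = UsdGeom.Xform.Define(stage, "/World")     # a transform at the root
stage.SetDefaultPrim(world.GetPrim())

cube = UsdGeom.Cube.Define(stage, "/World/Cube")  # a simple geometry prim
cube.GetSizeAttr().Set(2.0)                       # author an attribute

stage.GetRootLayer().Save()                       # other tools can now open it
```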

So far, professionals at over 500 companies, including BMW, Volvo, SHoP Architects, South Park and Lockheed Martin, are evaluating the platform. Since the launch of its open beta in December, Omniverse has been downloaded by over 50,000 individual creators.

NVIDIA Research Showcases Digital Avatars at SIGGRAPH

More innovations are coming.

Highlighting their ongoing contributions to cutting-edge computer graphics, NVIDIA researchers put four AI models to work to serve up a stunning digital avatar demo for SIGGRAPH 2021’s Real-Time Live showcase.

Broadcasting live from our Silicon Valley headquarters, the NVIDIA Research team presented a collection of AI models that can create lifelike virtual characters for projects such as bandwidth-efficient video conferencing and storytelling.

The demo featured tools to generate digital avatars from a single photo, animate avatars with natural 3D facial motion and convert text to speech.

The demo was just one highlight among a host of contributions from the more than 200 scientists who make up the NVIDIA Research team at this year’s conference.

Papers presented include:

NVIDIA Deep Learning Institute

These innovations quickly become tools that NVIDIA is hustling to bring to graphics professionals.

Created to help professionals and students master skills that will help them quickly advance their work, NVIDIA’s Deep Learning Institute held sessions covering a range of key technologies at SIGGRAPH.

They included a self-paced training on Getting Started with USD; a live, instructor-led course on the fundamentals of ray tracing using NVIDIA Nsight Graphics and NVIDIA Nsight Systems; a Masterclass by the Masters series on NVIDIA Omniverse; and a Graphics and NVIDIA Omniverse Teaching Kit for educators looking to incorporate hands-on technical training into student coursework.

NVIDIA also showcased how its technology is transforming workflows in several demos, including:

  • Factory of the Future: Participants explored the next era of manufacturing with this demo, which showcases BMW Group’s factory of the future — designed, simulated, operated and maintained entirely in NVIDIA Omniverse.
  • Multiple Artists, One Server: SIGGRAPH attendees could learn how teams can accelerate visual effects production with the NVIDIA EGX platform, which enables multiple artists to work together on a powerful, secure server from anywhere.
  • 3D Photogrammetry on an RTX Mobile Workstation: Participants got to watch how NVIDIA RTX-powered mobile workstations help drive the process of 3D scanning using photogrammetry, whether in a studio or a remote location.
  • Interactive Volumes with NanoVDB in Blender Cycles: Attendees learned how NanoVDB makes volume rendering more GPU memory efficient, meaning larger and more complex scenes can be interactively adjusted and rendered with NVIDIA RTX-accelerated ray tracing and AI denoising.

Want to catch up on all the news from SIGGRAPH? Visit our hub for all things NVIDIA and SIGGRAPH at https://www.nvidia.com/en-us/events/siggraph/


Hooked on a Feeling: GFN Thursday Brings ‘NARAKA: BLADEPOINT’ to GeForce NOW

Calling all warriors. It’s a glorious week full of new games.

This GFN Thursday comes with the exciting release of the new battle royale NARAKA: BLADEPOINT, as well as the Hello Neighbor franchise as part of the 11 great games joining the GeForce NOW library this week.

Plus, the newest Assassin’s Creed Valhalla DLC has arrived on the cloud.

Real PC Games, Real PC Power

Gaming on GeForce NOW means having access to the real versions of PC games. And there are more than 1,000 PC titles streaming from the cloud, with more on the way every week. It also means being able to play these titles across devices like low-powered PCs, Macs, Chromebooks, SHIELD TVs or Android and iOS mobile devices with the power of the cloud.

Members can play new and exciting PC games like NARAKA: BLADEPOINT with the power of a gaming rig streaming to any GeForce NOW-compatible device at GeForce-level performance.

Melee Meets Battle Royale

Only one can remain. The melee-combat battle royale NARAKA: BLADEPOINT is now available on Steam and can be streamed on GeForce NOW. It’ll also be available to stream from the Epic Games Store upon its release there in September.

NARAKA: BLADEPOINT on GeForce NOW
How far will your grappling hook take you in the challenge on Morus Island?

Sixty players, heroes from around the world, will gather on Morus Island — and one will emerge victorious. Explore a vast, interactive world with a vertical design, and experience unique gameplay powered by parkour and grappling hook movement. Learn to best use the brand-new resurrection system and the unique skills of a roster of characters with powerful abilities. And enjoy a vast arsenal of melee and ranged weapons, along with the thrill of clashing blades and arrows flying on the battlefield.

Make your move. Press the assault on enemies with a grappling hook that can be aimed at anyone, anywhere, and used to zip through obstacles to pounce on targets. Ambush opponents by hiding in the darkness and waiting for the right moment to strike with deadly long-range takedowns or sneaky melee attacks. And escape less-favorable battles with a well-aimed grappling hook maneuver. Play your way to achieve victory.

NARAKA: BLADEPOINT on GeForce NOW
Become the ultimate warrior and crush your enemies in this new battle royale.

Thanks to the GeForce power of the cloud, gamers can battle with the best, joining all the other online PC gamers playing awesome multiplayer games like NARAKA: BLADEPOINT.

“It’s great that GeForce NOW can introduce gamers playing on low-powered hardware to the stunning world of NARAKA,” said Ray Kuan, lead producer. “We love that more gamers will be able to enter the battlefield and enjoy the next generation of battle royale games in full PC glory across all of their devices.”

Become the last warrior standing and learn the truth of NARAKA’s world and its endless battles on GeForce NOW this week.

Hello, It’s the Games of the Week

This GFN Thursday is packed with 11 new titles available to stream on GeForce NOW, including the stealth horror franchise, Hello Neighbor.

Hello Neighbor on GeForce NOW
Find out what’s in the basement of your neighbor’s home in Hello Neighbor. Just don’t get caught.

What’s your neighbor hiding? Members can find out and play Hello Neighbor, a suspenseful story of sneaking into your neighbor’s house to figure out what horrible secrets he’s hiding in the basement. Don’t get too comfortable — The Neighbor will learn from your every move and leave nasty surprises for you.

And stream the dramatic prequel, Hello Neighbor: Hide and Seek, which follows the tragic story of the loss of a family member through a game of hide-and-seek, leading into the game that started it all.

The full list of awesome games joining the service this week includes:

Finally, members will be able to sack a famous city and play the glorious new Assassin’s Creed Valhalla: The Siege of Paris DLC upon release today on GeForce NOW.

While you plan your gaming escape this weekend, we’ve got an important question for you.

“Some games are so gorgeous, they make us never want to leave.

If you had to spend your summer vacation in a game which one would it be? 🏖”

🌩 NVIDIA GeForce NOW (@NVIDIAGFN), August 11, 2021

Tell us on Twitter or in the comments below, and we’ll catch up next week!


From Our Kitchen to Yours: NVIDIA Omniverse Changes the Way Industries Collaborate

Talk about a magic trick. One moment, NVIDIA CEO Jensen Huang was holding forth from behind his sturdy kitchen counter.

The next, the kitchen and everything in it slid away, leaving Huang alone with the audience and NVIDIA’s DGX Station A100, a glimpse at an alternate digital reality.

For most, the metaverse is something seen in sci-fi movies. For entrepreneurs, it’s an opportunity. For gamers, a dream.

For NVIDIA artists, researchers and engineers on an extraordinarily tight deadline last spring, it was where they went to work — a shared virtual world they used to tell their story and a milestone for the entire company.

Designed to inform and entertain, NVIDIA’s GTC keynote is filled with cutting-edge demos highlighting advancements in supercomputing, deep learning and graphics.

“GTC is, first and foremost, our opportunity to highlight the amazing work that our engineers and other teams here at NVIDIA have done all year long,” said Rev Lebaredian, vice president of Omniverse engineering and simulation at NVIDIA.

With this short documentary, “Connecting in the Metaverse: The Making of the GTC Keynote,” viewers get the story behind the story. It’s a tale of how NVIDIA Omniverse, a tool for connecting to and describing the metaverse, brought it all together this year.

To be sure, you can’t have a keynote without a flesh-and-blood person at the center. For all but 14 seconds of the hour-and-48-minute presentation (from 1:02:41 to 1:02:55), Huang himself spoke in the keynote.

Creating a Story in Omniverse

It starts with building a great narrative. Bringing forward a keynote-worthy presentation always takes intense collaboration, but this one was unlike any other: packed not just with words and pictures, but with beautifully rendered 3D models and rich textures.

With Omniverse, NVIDIA’s team was able to collaborate using different industry content-creation tools like Autodesk Maya or Substance Painter while in different places.

Keynote slides were packed with beautifully rendered 3D models and rich textures.

“There are already great tools out there that people use every day in every industry that we want people to continue using,” said Lebaredian. “We want people to take these exciting tools and augment them with our technologies.”

These were enhanced by a new generation of tools, including Universal Scene Description (USD), Material Definition Language (MDL) and NVIDIA RTX real-time ray-tracing technologies. Together, they allowed NVIDIA’s team to collaborate to create photorealistic scenes with physically accurate materials and lighting.

An NVIDIA DGX Station A100 Animation

Omniverse can create more than beautiful stills. The documentary shows how, accompanied by industry tools such as Autodesk Maya, Foundry Nuke, Adobe Photoshop, Adobe Premiere, and Adobe After Effects, it could stage and render some of the world’s most complex machines to create realistic cinematics.

With Omniverse, NVIDIA was able to turn a CAD model of the NVIDIA DGX Station A100 into a physically accurate virtual replica Huang used to give the audience a look inside.

Typically this type of project would take a team months to complete and weeks to render. But with Omniverse, the animation was chiefly completed by a single animator and rendered in less than a day.

Omniverse Physics Montage

More than just machines, though, Omniverse can model the way the world works by building on existing NVIDIA technologies. PhysX, for example, has been a staple in the NVIDIA gaming world for well over a decade. But its implementation in Omniverse brings it to a new level.

For a demo highlighting the current capabilities of PhysX 5 in Omniverse, plus a preview of advanced real-time physics simulation research, the Omniverse engineering and research teams re-rendered a collection of older PhysX demos in Omniverse.

The demo highlights key PhysX technologies such as Rigid Body, Soft Body Dynamics, Vehicle Dynamics, Fluid Dynamics, Blast’s Destruction and Fracture, and Flow’s combustible fluid, smoke and fire. As a result, viewers got a look at core Omniverse technologies that can do more than just show realistic-looking effects — they are true to reality, obeying the laws of physics in real-time.

DRIVE Sim, Now Built on Omniverse

Simulating the world around us is key to unlocking new technologies, and Omniverse is crucial to NVIDIA’s self-driving car initiative. With PhysX physics and photorealistic worlds, Omniverse creates the perfect environment for training autonomous machines of all kinds.

For this year’s DRIVE Sim on Omniverse demo, the team imported a map of the area surrounding a Mercedes plant in Germany. Then, using the same software stack that runs NVIDIA’s fleet of self-driving cars, they showed how the next generation of Mercedes cars would perform autonomous functions in the real world.

With DRIVE Sim, the team was able to test numerous lighting, weather and traffic conditions quickly — and show the world the results.

Creating the Factory of the Future with BMW Group

The idea of a “digital twin” has far-reaching consequences for almost every industry.

This year’s GTC featured a spectacular visionary display that exemplifies what the idea can do when unleashed in the auto industry.

The BMW Factory of the Future demo shows off the digital twin of a BMW assembly plant in Germany. Every detail, including layout, lighting and machinery, is digitally replicated with physical accuracy.

This “digital twin” provides an ultra-high-fidelity, accurate, real-time simulation of the entire factory. With it, BMW can reconfigure assembly lines to optimize worker safety and efficiency, train factory robots to perform tasks and optimize every aspect of plant operations.

Virtual Kitchen, Virtual CEO

The surprise highlight of GTC21 was a perfect virtual replica of Huang’s kitchen — the setting of the past three pandemic-era “kitchen keynotes” — complete with a digital clone of the CEO himself.

The demo is the epitome of what GTC represents: It combined the work of NVIDIA’s deep learning and graphics research teams with several engineering teams and the company’s incredible in-house creative team.

To create a virtual Jensen, teams did a full face and body scan to create a 3D model, then trained an AI to mimic his gestures and expressions and applied some AI magic to make his clone realistic.

Digital Jensen was then brought into a replica of his kitchen that was deconstructed to reveal the holodeck within Omniverse, surprising the audience and making them question how much of the keynote was real, or rendered.

“We built Omniverse first and foremost for ourselves here at NVIDIA,” Lebaredian said. “We started Omniverse with the idea of connecting existing tools that do 3D together for what we are now calling the metaverse.”

More and more of us will be able to do the same, accelerating more of what we do together. “If we do this right, we’ll be working in Omniverse 20 years from now,” Lebaredian said.


Watch: Making Masterpieces in the Cloud With Virtual Reality

Immersive 3D design and character creation are going sky high this week at SIGGRAPH, in a demo showcasing NVIDIA CloudXR running on Google Cloud.

The clip shows an artist with an untethered VR headset creating a fully rigged character with Masterpiece Studio Pro, which is running remotely in Google Cloud and interactively streamed to the artist using CloudXR.

Bringing Characters to Life in XR

The demo focuses on an interactive technique known as digital sculpting, which uses software to create and refine a 3D model as if it were made of a real-life substance such as clay. But moving digital sculpting into a VR space creates a variety of challenges.

First, setting up the VR environment can be complicated and expensive. It typically requires dedicated physical space for wall-mounted sensors. If an artist wants to interact with the 3D model or move the character around, they can get tangled up in the cord that connects their VR headset to their workstation.

CloudXR, hosted from Google Cloud on a tetherless HMD, addresses these challenges by providing artists with the freedom to create from virtually anywhere. With a good internet connection, there’s no need for users to be physically tethered to an expensive workstation to have a seamless design session in an immersive environment.

Masterpiece Studio Pro is a fully immersive 3D creation pipeline that simplifies the character design process. From blocking in basic shapes to designing a fully textured and rigged character, artists can easily work on a character face-to-face in VR, providing a more intuitive experience.

In Masterpiece Studio Pro, artists can work on characters at any scale and use familiar tools and hand gestures to sculpt and pose models — just like they would with clay figures in real life. And drawing bones in position couldn’t be easier, because artists can reach right into the limbs of the creature to place them.

Getting Your Head in the Cloud

Built on NVIDIA RTX technology, CloudXR solves immersive design challenges by cutting the cord. Artists can work with a wireless, all-in-one headset, like the HTC VIVE Focus 3, without having to deal with the hassles of setting up a VR space.

And with CloudXR on Google Cloud, artists can rent an NVIDIA GPU on a Google Cloud Virtual Workstation, powered by NVIDIA RTX Virtual Workstation technology, and stream their work remotely. The VIVE Focus 3 is HTC’s latest standalone headset, which has 5K visuals and active cooling for long design sessions.

“We’re excited to show how complex creative workflows and high-quality graphics come together in the ultimate immersive experience — all running in the cloud,” said Daniel O’Brien, general manager at HTC Americas. “NVIDIA CloudXR and the VIVE Focus 3 provide a high quality experience to immerse artists in a seamless streaming experience.”

With Masterpiece Studio Pro running on Google Cloud, and streaming with NVIDIA CloudXR, users can enhance the workflow of creating characters in an immersive environment — one that’s more intuitive and productive than before.

Check out our other demos at SIGGRAPH, and learn more about NVIDIA CloudXR on Google Cloud.
