Dream State: Cybersecurity Vendors Detect Breaches in an Instant with NVIDIA Morpheus

In the geography of data center security, efforts have long focused on protecting north-south traffic — the data that passes between the data center and the rest of the network. But one of the greatest risks has become east-west traffic — network packets passing between servers within a data center.

That’s due to the growth of cloud-native applications built from microservices, whose connections across a data center are changing constantly. With a typical 1,000-server data center having over 1 billion network paths, it’s extremely difficult to write fixed rules that control the blast radius should a malicious actor get inside.

The new NVIDIA Morpheus AI application framework gives security teams complete visibility into security threats by bringing together unmatched AI processing and real-time monitoring of every packet flowing through the data center. It lets them respond to anomalies and update policies immediately as threats are identified.

Combining the security superpowers of AI and NVIDIA BlueField data processing units (DPUs), Morpheus provides cybersecurity developers a highly optimized AI pipeline and pre-trained AI skills that, for the first time, allow them to instantaneously inspect all IP network communication through their data center fabric.

Bringing a new level of security to data centers, the framework provides the dynamic protection, monitoring, adaptive policies and cyber defenses required to detect and remediate threats.

Continuous AI Analytics on Network Traffic

Morpheus, which runs on mainstream NVIDIA-Certified enterprise servers, combines event streaming from NVIDIA Cumulus NetQ with GPU-accelerated computing built on RAPIDS data analytics pipelines, deep learning frameworks and the NVIDIA Triton Inference Server. It simplifies the analysis of computer logs and helps detect and mitigate security threats. Pre-trained AI models help find leaked credentials, keys, passwords, credit card numbers and bank account numbers, and identify security policies that need to be hardened.
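
As a rough illustration of the kind of sensitive-data scan such a pipeline automates, here is a minimal sketch in plain Python. It is not the Morpheus API; the patterns and sample log lines are invented for illustration, and a real deployment relies on trained models rather than hand-written rules.

```python
import re

# Illustrative patterns for sensitive data in log or packet payloads.
# A production pipeline relies on trained models rather than hand-written rules.
PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "password_field": re.compile(r"password\s*=\s*\S+", re.IGNORECASE),
}

def scan_line(line):
    """Return the names of any sensitive-data patterns found in one log line."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(line)]

sample_logs = [
    "GET /checkout card=4111 1111 1111 1111 HTTP/1.1",
    "export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE",
    "healthcheck ok",
]
for line in sample_logs:
    hits = scan_line(line)
    if hits:
        print(f"ALERT {hits}: {line}")
```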

Integrating the framework into a third-party cybersecurity offering brings the world’s best AI computing to communication networks. Morpheus can receive rich telemetry feeds from every NVIDIA BlueField DPU-accelerated server in the data center without impacting server performance. BlueField-2 DPUs act both as a sensor to collect real-time packet flows and as a policy enforcement point to limit communication between any microservice container or virtual machine in a data center.

By placing BlueField-2 DPUs in servers across the data center, Morpheus can automatically write and change policies to immediately remediate security threats — from changing which logs are collected and adjusting the volume of log ingestion, to dynamically redirecting certain log events, blocking traffic newly identified as malicious, rewriting rules to enforce policy updates, and more.

Accelerate and Secure the Data Center with NVIDIA BlueField DPUs 

The NVIDIA BlueField-2 DPU, available today, enables true software-defined, hardware-accelerated data center infrastructure. By running software-defined networking policies and telemetry collection on the BlueField DPU before traffic enters the server, the DPU offloads, accelerates and isolates critical data center functions without burdening the server’s CPU. The DPU also extends the simple static security logging model with sophisticated dynamic telemetry that evolves as policies are determined and adjusted.

Learn more about NVIDIA Morpheus and apply for early access, currently available in the U.S. and Israel.


NVIDIA’s New CPU to ‘Grace’ World’s Most Powerful AI-Capable Supercomputer

NVIDIA’s new Grace CPU will power the world’s most powerful AI-capable supercomputer.

The Swiss National Supercomputing Centre’s (CSCS) new system, called Alps, will use Grace, a revolutionary Arm-based data center CPU introduced by NVIDIA today, to enable breakthrough research in a wide range of fields.

From climate and weather to materials sciences, astrophysics, computational fluid dynamics, life sciences, molecular dynamics, quantum chemistry and particle physics, as well as domains like economics and social sciences, Alps will play a key role in advancing science throughout Europe and worldwide when it comes online in 2023.

“We are thrilled to announce the Swiss National Supercomputing Center will build a supercomputer powered by Grace and our next-generation GPU,” NVIDIA CEO Jensen Huang said Monday during his keynote at NVIDIA’s GPU Technology Conference.

Alps will be built by Hewlett Packard Enterprise using the new HPE Cray EX supercomputer product line and the NVIDIA HGX supercomputing platform, which includes NVIDIA GPUs, the NVIDIA HPC SDK and the new Grace CPU.

The Alps system will replace CSCS’s existing Piz Daint supercomputer.

AI: A New Kind of Supercomputing

Alps is one of the new generation of machines that are expanding supercomputing beyond traditional modeling and simulation by taking advantage of GPU-accelerated deep learning.

“Deep learning is just an incredibly powerful set of tools that we add to the toolbox,” said CSCS Director Thomas Schulthess.

Taking advantage of the tight coupling between NVIDIA CPUs and GPUs, Alps is expected to be able to train GPT-3, the world’s largest natural language processing model, in only two days — 7x faster than NVIDIA’s 2.8-AI exaflops Selene supercomputer, currently recognized as the world’s leading supercomputer for AI by MLPerf.

CSCS users will be able to apply this incredible AI performance to a wide range of emerging scientific research that can benefit from natural language understanding.

This includes, for example, analyzing and understanding massive amounts of knowledge available in scientific papers and generating new molecules for drug discovery.

Soul of the New Machine

Based on the hyper-efficient Arm microarchitecture found in billions of smartphones and other edge computing devices, Grace will deliver 10x the performance of today’s fastest servers on the most complex AI and high-performance computing workloads.

Grace will support the next generation of NVIDIA’s coherent NVLink interconnect technology, allowing data to move more quickly between system memory, CPUs and GPUs.

And thanks to growing GPU support for data science acceleration at ever-larger scales, Alps will also be able to accelerate a bigger chunk of its users’ workflows, such as ingesting the vast quantities of data needed for modern supercomputing.

“The scientists will not only be able to carry out simulations, but also pre-process or post-process their data,” Schulthess said. “This makes the whole workflow more efficient for them.”

From Particle Physics to Weather Forecasts

CSCS has long supported scientists who are working at the cutting edge, particularly in materials science, weather forecasting and climate modeling, and understanding data streaming in from a new generation of scientific instruments.

CSCS designs and operates a dedicated system for numerical weather prediction (NWP) on behalf of MeteoSwiss, the Swiss meteorological service. This system has been running on GPUs since 2016.

That long-standing experience with operational NWP on GPUs will be key to future climate simulations as well — key not only to modeling long-term changes to climate, but to building models able to more accurately predict extreme weather events, saving lives.

One of that team’s goals is to run global climate models with a spatial resolution of 1 km that can map convective clouds such as thunderclouds.

The CSCS supercomputer is also used by Swiss scientists to analyze data from the Large Hadron Collider (LHC) at CERN, the European Organization for Nuclear Research. It is the Swiss Tier-2 system in the Worldwide LHC Computing Grid.

Based in Geneva, the LHC — at $9 billion, one of the most expensive scientific instruments ever built — generates 90 petabytes of data a year.

Alps uses a new software-defined infrastructure that can support a wide range of projects.

As a result, in the future, different teams, such as those from MeteoSwiss, will be able to use one or more partitions on a single, unified infrastructure, rather than different machines.

These can be virtual ad-hoc clusters for individual users or predefined clusters that research teams can put together with CSCS and then operate themselves.

Featured image source: Steve Evans, from Citizen of the World.

What Is Quantum Computing?

Twenty-seven years before Steve Jobs unveiled a computer you could put in your pocket, physicist Paul Benioff published a paper showing it was theoretically possible to build a much more powerful system you could hide in a thimble — a quantum computer.

Named for the subatomic physics it aimed to harness, the concept Benioff described in 1980 still fuels research today, including efforts to build the next big thing in computing: a system that could make a PC look, in some ways, as quaint as an abacus.

Richard Feynman — a Nobel Prize winner whose wit-laced lectures brought physics to a broad audience — helped establish the field, sketching out how such systems could simulate quirky quantum phenomena more efficiently than traditional computers.

So, What Is Quantum Computing?

Quantum computing uses the physics that governs subatomic particles to perform sophisticated parallel calculations, replacing more simplistic transistors in today’s computers.

Quantum computers calculate using qubits, computing units that can be on, off or any value between, instead of the bits in traditional computers that are either on or off, one or zero. The qubit’s ability to live in the in-between state — called superposition — adds a powerful capability to the computing equation, making quantum computers superior for some kinds of math.

What Does a Quantum Computer Do?

Using qubits, quantum computers could buzz through calculations that would take classical computers a loooong time — if they could even finish them.

For example, today’s computers use eight bits to represent any number between 0 and 255. Thanks to features like superposition, a quantum computer can use eight qubits to represent every number between 0 and 255, simultaneously.

It’s a feature like parallelism in computing: All possibilities are computed at once rather than sequentially, providing tremendous speedups.
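
To make the bits-versus-qubits comparison concrete, here is a minimal NumPy sketch. It is a mathematical illustration run on a classical machine, not a real quantum computer: an 8-bit register holds exactly one value from 0 to 255, while an 8-qubit state is described by 2^8 = 256 complex amplitudes, and a Hadamard gate on every qubit spreads the state evenly over all 256 values at once.

```python
import numpy as np

n_qubits = 8

# A classical 8-bit register stores exactly one of 256 values at a time.
classical_register = 42

# An 8-qubit quantum state is described by 2**8 = 256 complex amplitudes.
# Start in |00000000>: amplitude 1 on index 0, zero everywhere else.
state = np.zeros(2**n_qubits, dtype=complex)
state[0] = 1.0

# Applying a Hadamard gate to every qubit creates an equal superposition:
# each of the 256 amplitudes becomes 1/16, so every value 0..255 is represented at once.
hadamard = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
transform = np.array([1.0])
for _ in range(n_qubits):
    transform = np.kron(transform, hadamard)
state = transform @ state

print(state[:4])                                  # [0.0625+0.j 0.0625+0.j ...]
print(np.allclose(np.abs(state) ** 2, 1 / 256))   # True: uniform probability over 0..255
```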

So, while a classical computer steps through long division calculations one at a time to factor a humongous number, a quantum computer can get the answer in a single step. Boom!

That means quantum computers could reshape whole fields, like cryptography, that are based on factoring what are today impossibly large numbers.

A Big Role for Tiny Simulations

That could be just the start. Some experts believe quantum computers will bust through limits that now hinder simulations in chemistry, materials science and anything involving worlds built on the nano-sized bricks of quantum mechanics.

Quantum computers could even extend the life of semiconductors by helping engineers create more refined simulations of the quantum effects they’re starting to find in today’s smallest transistors.

Indeed, experts say quantum computers ultimately won’t replace classical computers, they’ll complement them. And some predict quantum computers will be used as accelerators much as GPUs accelerate today’s computers.

How Does Quantum Computing Work?

Don’t expect to build your own quantum computer like a DIY PC with parts scavenged from discount bins at the local electronics shop.

The handful of systems operating today typically require refrigeration that creates working environments just north of absolute zero. They need that computing arctic to handle the fragile quantum states that power these systems.

In a sign of how hard constructing a quantum computer can be, one prototype suspends an atom between two lasers to create a qubit. Try that in your home workshop!

Quantum computing takes nano-Herculean muscles to create something called entanglement. That’s when two or more qubits exist in a single quantum state, a condition sometimes measured by electromagnetic waves just a millimeter wide.

Crank up that wave with a hair too much energy and you lose entanglement or superposition, or both. The result is a noisy state called decoherence, the equivalent in quantum computing of the blue screen of death.

What’s the Status of Quantum Computers?

A handful of companies such as Alibaba, Google, Honeywell, IBM, IonQ and Xanadu operate early versions of quantum computers today.

Today they provide tens of qubits. But qubits can be noisy, making them sometimes unreliable. To tackle real-world problems reliably, systems need tens or hundreds of thousands of qubits.

Experts believe it could be a couple decades before we get to a high-fidelity era when quantum computers are truly useful.

Quantum computers are slowly moving toward commercial use. (Source: ISSCC 2017 talk by Lieven Vandersypen.)

Exactly when we’ll reach so-called quantum supremacy — the point when quantum computers execute tasks classical ones can’t — is a matter of lively debate in the industry.

Accelerating Quantum Circuit Simulations Today

The good news is that the world of AI and machine learning has put a spotlight on accelerators like GPUs, which can perform many of the types of operations quantum computers would calculate with qubits.

So, classical computers are already finding ways to host quantum simulations with GPUs today. For example, NVIDIA ran a leading-edge quantum simulation on Selene, our in-house AI supercomputer.

In the GTC keynote, NVIDIA announced the cuQuantum SDK to speed quantum circuit simulations running on GPUs. Early work suggests cuQuantum will be able to deliver orders-of-magnitude speedups.

The SDK takes an agnostic approach, providing a choice of tools users can pick to best fit their approach. For example, the state vector method provides high-fidelity results, but its memory requirements grow exponentially with the number of qubits.

That creates a practical limit of roughly 50 qubits on today’s largest classical supercomputers. Nevertheless we’ve seen great results (below) using cuQuantum to accelerate quantum circuit simulations that use this method.
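
The exponential growth is easy to quantify: a full state vector over n qubits holds 2^n complex amplitudes. A quick back-of-the-envelope calculation, assuming 8 bytes per amplitude as with the complex64 precision in the benchmark below, shows why roughly 50 qubits is the practical ceiling:

```python
def state_vector_bytes(n_qubits, bytes_per_amplitude=8):
    """Memory needed for a full state vector (complex64 = 8 bytes per amplitude)."""
    return (2 ** n_qubits) * bytes_per_amplitude

for n in (36, 40, 50):
    gib = state_vector_bytes(n) / 2**30
    print(f"{n} qubits -> {gib:,.0f} GiB")

# 36 qubits ->       512 GiB  (fits on a large multi-GPU node)
# 40 qubits ->     8,192 GiB  (about 8 TiB, already a cluster-scale problem)
# 50 qubits -> 8,388,608 GiB  (about 8 PiB, beyond the memory of today's largest systems)
```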

State vector: 1,000 circuits, 36 qubits, depth m=10, complex64 | CPU: Qiskit on dual AMD EPYC 7742 | GPU: Qgate on DGX A100

Researchers from the Jülich Supercomputing Centre will provide a deep dive on their work with the state vector method in session E31941 at GTC (free with registration).

A newer approach, tensor network simulation, uses less memory and more computation to perform similar work.
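
As a toy illustration of the tensor network idea, here is a plain-NumPy sketch rather than the cuQuantum or Quimb API: each gate is a small tensor, and circuit amplitudes are obtained by contracting those tensors in a chosen order, so the full 2^n-element state vector never has to be built.

```python
import numpy as np

# Gate tensors for a tiny 2-qubit circuit: H on qubit 0, then CNOT with qubit 0 as control.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)    # indexed H[out, in]
CNOT = np.zeros((2, 2, 2, 2))                   # indexed CNOT[out0, out1, in0, in1]
for in0 in (0, 1):
    for in1 in (0, 1):
        CNOT[in0, in1 ^ in0, in0, in1] = 1.0    # target flips when control is 1

ket0 = np.array([1.0, 0.0])                     # both qubits start in |0>

# Every output amplitude comes from one contraction of small tensors;
# no 2**n-element state vector is ever formed.
amplitudes = np.einsum("abcd,ce,d,e->ab", CNOT, H, ket0, ket0)

print(amplitudes)   # [[0.707 0.   ]
                    #  [0.    0.707]]  -> the Bell state (|00> + |11>) / sqrt(2)
```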

Using this method, NVIDIA and Caltech accelerated a state-of-the-art quantum circuit simulator with cuQuantum running on NVIDIA A100 Tensor Core GPUs. It generated a sample from a full-circuit simulation of the Google Sycamore circuit in 9.3 minutes on Selene, a task that 18 months ago experts thought would take days using millions of CPU cores.

Tensor network: 53 qubits, depth m=20 | CPU: Quimb on dual AMD EPYC 7742 (estimated) | GPU: Quimb on DGX A100

“Using the Cotengra/Quimb packages, NVIDIA’s newly announced cuQuantum SDK, and the Selene supercomputer, we’ve generated a sample of the Sycamore quantum circuit at depth m=20 in record time — less than 10 minutes,” said Johnnie Gray, a research scientist at Caltech.

“This sets the benchmark for quantum circuit simulation performance and will help advance the field of quantum computing by improving our ability to verify the behavior of quantum circuits,” said Garnet Chan, a chemistry professor at Caltech whose lab hosted the work.

NVIDIA expects the performance gains and ease of use of cuQuantum will make it a foundational element in every quantum computing framework and simulator at the cutting edge of this research.

Sign up to show early interest in cuQuantum here.


Drug Discovery Gets Jolt of AI via NVIDIA Collaborations with AstraZeneca, U of Florida Health

NVIDIA is collaborating with biopharmaceutical company AstraZeneca and the University of Florida’s academic health center, UF Health, on new AI research projects using breakthrough transformer neural networks.

Transformer-based neural network architectures — which have become available only in the last several years — allow researchers to leverage massive datasets using self-supervised training methods, avoiding the need for manually labeled examples during pre-training. These models, equally adept at learning the syntactic rules to describe chemistry as they are at learning the grammar of languages, are finding applications across research domains and modalities.

NVIDIA is collaborating with AstraZeneca on a transformer-based generative AI model for chemical structures used in drug discovery that will be among the very first projects to run on Cambridge-1, which is soon to go online as the UK’s largest supercomputer. The model will be open sourced, available to researchers and developers in the NVIDIA NGC software catalog, and deployable in the NVIDIA Clara Discovery platform for computational drug discovery.

Separately, UF Health is harnessing NVIDIA’s state-of-the-art Megatron framework and BioMegatron pre-trained model — available on NGC — to develop GatorTron, the largest clinical language model to date.

New NGC applications include AtacWorks, a deep learning model that identifies accessible regions of DNA, and MELD, a tool for inferring the structure of biomolecules from sparse, ambiguous or noisy data.

Megatron Model for Molecular Insights

The MegaMolBART drug discovery model being developed by NVIDIA and AstraZeneca is slated for use in reaction prediction, molecular optimization and de novo molecular generation. It’s based on AstraZeneca’s MolBART transformer model and is being trained on the ZINC chemical compound database — using NVIDIA’s Megatron framework to enable massively scaled-out training on supercomputing infrastructure.

The large ZINC database allows researchers to pretrain a model that understands chemical structure, bypassing the need for hand-labeled data. Armed with a statistical understanding of chemistry, the model will be specialized for a number of downstream tasks, including predicting how chemicals will react with each other and generating new molecular structures.
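
The self-supervised setup mirrors masked-language-model pretraining in NLP: corrupt part of a SMILES string and train the network to reconstruct the original, so the labels come from the data itself. The sketch below illustrates only that idea; the actual MegaMolBART tokenizer, masking scheme and training loop are more sophisticated.

```python
import random

def mask_smiles(smiles, mask_rate=0.15, mask_token="<mask>"):
    """Character-level masking of a SMILES string for self-supervised pretraining.
    Returns (corrupted input, original target); no human labels are required."""
    tokens = list(smiles)  # real models use a chemistry-aware tokenizer, not characters
    corrupted = [mask_token if random.random() < mask_rate else t for t in tokens]
    return corrupted, tokens

random.seed(0)
inp, target = mask_smiles("CC(=O)OC1=CC=CC=C1C(=O)O")   # aspirin
print("input :", "".join(inp))
print("target:", "".join(target))
# A sequence-to-sequence transformer is then trained to map the corrupted input
# back to the original string, learning the "grammar" of chemical structure.
```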

“Just as AI language models can learn the relationships between words in a sentence, our aim is that neural networks trained on molecular structure data will be able to learn the relationships between atoms in real-world molecules,” said Ola Engkvist, head of molecular AI, discovery sciences, and R&D at AstraZeneca. “Once developed, this NLP model will be open source, giving the scientific community a powerful tool for faster drug discovery.”

The model, trained using NVIDIA DGX SuperPOD, gives researchers ideas for molecules that don’t exist in databases but could be potential drug candidates. Computational methods, known as in-silico techniques, allow drug developers to search through more of the vast chemical space and optimize pharmacological properties before shifting to expensive and time-consuming lab testing.

This collaboration will use the NVIDIA DGX A100-powered Cambridge-1 and Selene supercomputers to run large workloads at scale. Cambridge-1 is the largest supercomputer in the U.K., ranking No. 3 on the Green500 and No. 29 on the TOP500 list of the world’s most powerful systems. NVIDIA’s Selene supercomputer topped the most recent Green500 and ranks fifth on the TOP500.

Language Models Speed Up Medical Innovation

UF Health’s GatorTron model — trained on records from more than 50 million interactions with 2 million patients — is a breakthrough that can help identify patients for lifesaving clinical trials, predict and alert health teams about life-threatening conditions, and provide clinical decision support to doctors.

“GatorTron leveraged over a decade of electronic medical records to develop a state-of-the-art model,” said Joseph Glover, provost at the University of Florida, which recently boosted its supercomputing facilities with NVIDIA DGX SuperPOD. “A tool of this scale will enable healthcare researchers to unlock insights and reveal previously inaccessible trends from clinical notes.”

Beyond clinical medicine, the model also accelerates drug discovery by making it easier to rapidly create patient cohorts for clinical trials and for studying the effect of a certain drug, treatment or vaccine.

It was created using BioMegatron, the largest biomedical transformer model ever trained, developed by NVIDIA’s applied deep learning research team using data from the PubMed corpus. BioMegatron is available on NGC through Clara NLP, a collection of NVIDIA Clara Discovery models pretrained on biomedical and clinical text.

“The GatorTron project is an exceptional example of the discoveries that happen when experts in academia and industry collaborate using leading-edge artificial intelligence and world-class computing resources,” said David R. Nelson, M.D., senior vice president for health affairs at UF and president of UF Health. “Our partnership with NVIDIA is crucial to UF emerging as a destination for artificial intelligence expertise and development.”

Powering Drug Discovery Platforms

NVIDIA Clara Discovery libraries and NVIDIA DGX systems have been adopted by computational drug discovery platforms, too, boosting pharmaceutical research.

  • Schrödinger, a leader in chemical simulation software development, today announced a strategic partnership with NVIDIA that includes research in scientific computing and machine learning, optimizing of Schrödinger applications on NVIDIA platforms, and a joint solution around NVIDIA DGX SuperPOD to evaluate billions of potential drug compounds within minutes.
  • Biotechnology company Recursion has installed BioHive-1, a supercomputer based on the NVIDIA DGX SuperPOD reference architecture that, as of January, is estimated to rank at No. 58 on the TOP500 list of the world’s most powerful computer systems. BioHive-1 will let Recursion complete deep learning projects within a day that previously took a week on its existing cluster.
  • Insilico Medicine, a partner in the NVIDIA Inception accelerator program, recently announced the discovery of a novel preclinical candidate to treat idiopathic pulmonary fibrosis — the first example of an AI-designed molecule for a new disease target nominated for clinical trials. Compounds were generated on a system powered by NVIDIA Tensor Core GPUs, taking less than 18 months and under $2 million from target hypothesis to preclinical candidate selection.
  • Vyasa Analytics, a member of the NVIDIA Inception accelerator program, is using Clara NLP and NVIDIA DGX systems to give its users access to pretrained models for biomedical research. The company’s GPU-accelerated Vyasa Layar Data Fabric is powering solutions for multi-institutional cancer research, clinical trial analytics and biomedical data harmonization.

Learn more about NVIDIA’s work in healthcare at this week’s GPU Technology Conference, which kicks off with a keynote address by NVIDIA CEO Jensen Huang. Registration is free. The healthcare track includes 16 live webinars, 18 special events and over 100 recorded sessions.

Subscribe to NVIDIA healthcare news and follow NVIDIA Healthcare on Twitter.


An Engine of Innovation: Sony Levels Up for the AI Era

If you want to know what the next big thing will be, ask someone at a company that invents it time and again.

“AI is a key tool for the next era, so we are providing the computing resources our developers need to generate great AI results,” said Yuichi Kageyama, general manager of Tokyo Laboratory 16 in Sony Group Corporation’s R&D Center.

Called GAIA internally, the lab’s computing resources act as a digital engine serving all Sony Group companies. And it’s about to get a second fuel injection of accelerated computing for AI efforts across the corporation.

Sony’s engineers are packing machine-learning smarts into products ranging from its Xperia smartphones and entertainment robot, aibo, to a portfolio of imaging components for everything from professional and consumer cameras to factory automation and satellites. It’s even using AI to build the next generation of advanced imaging chips.

More Zip, Fewer Tolls

To move efficiently into the AI era, Sony is installing a cluster of NVIDIA DGX A100 systems linked on an NVIDIA Mellanox InfiniBand network. It expands an existing system of NVIDIA V100 Tensor Core GPUs, commissioned in October when the company brought AI training in house and now running at near full utilization.

“When we were using cloud services, AI developers worried about the costs, but now they can focus on AI development on GAIA,” said Kageyama.

An in-house AI engine torques performance, too. One team designed a deep-learning model for delivering super-resolution images and trained it nearly 16x faster by adding more resources to the job, shortening a month’s workload to a day.

“With the computing power of the DGX A100, its expanded GPU memory and faster InfiniBand networking, we expect to see even greater performance on larger datasets,” said Yoshiki Tanaka, who oversees HPC and distributed deep learning technologies for Sony’s developers.

Powering an AI Pipeline

Sony posted fast speeds in deep learning back in 2018, accelerating its Neural Network Libraries on a system at Japan’s National Institute of Advanced Industrial Science and Technology. And it’s already rolling out products powered with machine learning, such as its Airpeak drone for professional filmmakers shown at CES this year.

There’s plenty more to come.

“We will see good results in our fiscal 2021 because we have collaborations with many business teams who have started some good projects,” Kageyama said.

NVIDIA is putting its shoulder to the wheel with software and services to “build a culture of using GPUs,” he added.

For example, Sony developers use NGC, NVIDIA’s online container registry, for all the software components they need to get an AI app up and running.

Sony even created a container of its own, now available on NGC, sporting its Neural Network Libraries and other utilities. It supplements NVIDIA’s containers for work in popular environments like PyTorch and TensorFlow.

Drivers Give a Thumbs Up

Developers tell Kageyama’s team that having their code in one place helps simplify and speed their work.

Some researchers use the system for high performance computing, tapping into NVIDIA’s CUDA software that accelerates a diverse set of technical applications including AI.

To keep it all running smoothly, NVIDIA provided a job scheduler as well as additions for Sony to NVIDIA’s libraries for scaling apps across multiple GPUs.

“Good management software is important for achieving fairness and high utilization on such a complex system,” said Masahiro Hara, who leads development of the GAIA system.

An Eye Toward Analytics

NVIDIA also helped Sony create training programs on how to use its software on GAIA.

Looking ahead, Sony is interested in expanding its work in data analytics and simulations. It’s evaluating RAPIDS, open-source software NVIDIA helped design to let Python programmers access the power of GPUs for data science.
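
RAPIDS keeps the familiar pandas-style interface while executing on the GPU. A minimal cuDF example of that idea, assuming a machine with RAPIDS installed (the data here is made up):

```python
import cudf  # part of RAPIDS; requires an NVIDIA GPU

# cuDF mirrors the pandas API, but the DataFrame lives in GPU memory.
df = cudf.DataFrame({
    "device": ["camera", "camera", "robot", "robot"],
    "latency_ms": [12.5, 11.8, 30.2, 28.9],
})

# The same group-by/aggregate workflow a pandas user would write, executed on the GPU.
print(df.groupby("device")["latency_ms"].mean())
```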

At the end of a work-from-home day keeping Sony ahead of the pack in AI, Kageyama enjoys playing with his kids who keep their dad on his digital toes. “I’m a beginner in Minecraft, and they’re much better than me,” he said.


Siege the Day as Stronghold Series Headlines GFN Thursday

It’s Thursday, which means it’s GFN Thursday — when GeForce NOW members can learn what new games and updates are streaming from the cloud.

This GFN Thursday, we’re checking in on one of our favorite gaming franchises, the Stronghold series from Firefly Studios. We’re also sharing some sales Firefly is running on the Stronghold franchise. And of course, we have more games joining the GeForce NOW library.

Fortify Your Castle

Build your castle, defend it, and expand your kingdom. That’s the Stronghold way.

The Stronghold series focuses on “castle sim” gameplay, challenging players to become the lord of their kingdom. Your goal is to build a stable economy and raise a strong military to defend against invaders, destroy enemy castles and accomplish your mission objectives.

Firefly’s latest entry in the series, Stronghold: Warlords, expands on the formula by granting options to recruit, upgrade and command AI-controlled warlords that increase your influence and give you more options in each battlefield.

As you answer threats from Great Khans, Imperial warlords and Shōgun commanders, you’ll lead your forces to victory and defend against massive sieges. It wouldn’t be a Stronghold game if you didn’t watch the great hordes of your enemies collapse as they crash against your defenses.

Stronghold: Warlords joined the GeForce NOW library at the game’s release on March 9, and members can muster their forces across nearly all of their devices, even low-powered rigs or Macs.

“Rather than porting to new platforms, we love that GeForce NOW can stream the real PC version to players regardless of their system,” said Nicholas Tannahill, marketing director at Firefly Studios. “We can focus on improving one build, whilst our players can take their kingdoms with them.”

A Kingdom for the Ages

GeForce NOW members can oversee each skirmish in Stronghold: Warlords across all their supported devices.

Firefly only released Stronghold: Warlords a month ago, but has robust plans for content updates for players.

Those plans include a free update on April 13 that adds a new AI character, Sun Tzu, plus AI invasions in Free Build mode, and a new Free Build map. This update is just the beginning for how Firefly will continue giving gamers new challenges to master as they grow their kingdom.

To celebrate our work with Firefly to bring the Stronghold franchise to GeForce NOW, the studio’s games are currently on sale on Steam. Members can find more info on the Firefly games streaming from the cloud, and their current Steam discounts, below.

Let’s Play Today

Of course, GFN Thursday has even more games in store for members. In addition to the rest of the Stronghold franchise, members can look for the following games to join our library:

  • Aron’s Adventure (day-and-date release on Steam, April 7)
  • The Legend of Heroes: Trails of Cold Steel IV (day-and-date release on Steam, April 9)
  • EARTH DEFENSE FORCE: IRON RAIN (Steam)
  • Spintires (Steam)
  • Stronghold Crusader HD (80 percent off on Steam for a limited time)
  • Stronghold 2: Steam Edition (60 percent off on Steam for a limited time)
  • Stronghold HD (70 percent off on Steam for a limited time)
  • Stronghold Crusader 2 (90 percent off on Steam for a limited time)
    • Stronghold Crusader 2 DLC – (20-50 percent off on Steam for a limited time)
  • Stronghold 3 Gold (70 percent off on Steam for a limited time)
  • Stronghold Kingdoms (free-to-play on Steam)
    • Stronghold Kingdoms (Starter Pack) (70 percent off on Steam for a limited time)
  • Stronghold Legends: Steam Edition (60 percent off on Steam for a limited time)
  • UNDER NIGHT IN-BIRTH Exe:Late[cl-r] (Steam)

Will you accept the challenge and build your kingdom in a Stronghold game this weekend? Let us know on Twitter or in the comments below.


NVIDIA’s Shalini De Mello Talks Self-Supervised AI, NeurIPS Successes

Shalini De Mello, a principal research scientist at NVIDIA who’s made her mark inventing computer vision technology that contributes to driver safety, finished 2020 with a bang — presenting two posters at the prestigious NeurIPS conference in December.

A 10-year NVIDIA veteran, De Mello works on self-supervised and few-shot learning, 3D reconstruction, viewpoint estimation and human-computer interaction.

She told NVIDIA AI Podcast host Noah Kravitz about her NeurIPS submissions on reconstructing 3D meshes and self-learning transformations for improving head and gaze redirection — both significant challenges for computer vision.

De Mello’s first poster demonstrates how she and her team recreate 3D models in motion without requiring annotations of 3D mesh, 2D keypoints or camera pose — even on such kinetic figures as animals in the wild.

The second poster takes on the issue of datasets in which large portions are unlabeled — focusing specifically on datasets of human face images with many variables, including lighting, reflections, and head and gaze orientation. De Mello developed an architecture that can self-learn these variations and control them.

De Mello intends to continue focusing on creating self-supervising AI systems that require less data to achieve the same quality output, which she envisions ultimately helping to reduce bias in AI algorithms.

Key Points From This Episode:

  • Early in her career at NVIDIA, De Mello noticed that technologies for looking inside the car cabin weren’t as mature as the algorithms for automotive vision outside the car. She focused her research on the former, leading to the creation of NVIDIA’s DRIVE IX product for AI-based automotive interfaces in cars.
  • While science has been a lifelong passion, De Mello discovered an appreciation for art and found the perfect blend of the two in signal and image processing. She could immediately see the effects of AI on visual content.

Tweetables:

“We as humans are able to learn effectively with less data — how can we make learning systems do the same? This is a fundamental question to answer for the viability of AI” [29:29]

“Looking back at my career, the one thing I’ve learned is that it’s really important to follow your passion” [32:37]

You Might Also Like:

Behind the Scenes at NeurIPS with NVIDIA and Caltech’s Anima Anandkumar

Anima Anandkumar, NVIDIA’s director of machine learning research and Bren professor at Caltech’s CMS Department, joins AI Podcast host Noah Kravitz to talk about NeurIPS 2020 and to discuss what she sees as the future of AI.

MIT’s Jonathan Frankle on “The Lottery Ticket Hypothesis”

Jonathan Frankle, a Ph.D. student at MIT, discusses a paper he co-authored on “The Lottery Ticket Hypothesis,” which promises to help advance our understanding of why neural networks, and deep learning, work so well.

NVIDIA’s Neda Cvijetic Explains the Science Behind Self-Driving Cars

Neda Cvijetic, senior manager of autonomous vehicles at NVIDIA, leads the NVIDIA DRIVE Labs series of videos and blogs that break down the science behind autonomous vehicles. She takes NVIDIA AI Podcast host Noah Kravitz behind the wheel of a (metaphorical) self-driving car.

Tune in to the AI Podcast

Get the AI Podcast through iTunes, Google Podcasts, Google Play, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn. If your favorite isn’t listed here, drop us a note.


Make the AI Podcast Better

Have a few minutes to spare? Fill out this listener survey. Your answers will help us make a better podcast.


What Will NVIDIA CEO Jensen Huang Cook Up This Time at NVIDIA GTC?

Don’t blink. Accelerated computing is moving innovation forward faster than ever.

And there’s no way to get smarter, quicker, about how it’s changing your world than to tune in to NVIDIA CEO Jensen Huang’s GTC keynote Monday, April 12, starting at 8:30 a.m. PT.

The keynote, delivered again from the kitchen in Huang’s home, will kick off a conference with more than 1,500 sessions covering just about every innovation — from quantum computing to AI — that benefits from moving faster.

Factories of the Future and More

In his address, Huang will share the company’s vision for the future of computing from silicon to software to services, and from the edge to the data center to the cloud.

A highlight: Huang will detail NVIDIA’s vision for manufacturing and you’ll get a chance to meet “Dave,” who is exploring the Factory of the Future.

Be on the Hunt for Some Surprises

And, to have a little quick fun, we’ve added a few surprises – so be on the lookout. Watch the @NVIDIAGTC Twitter handle for clues and more details.

Stick Around

There’s no need to register for GTC to watch the keynote. But if you’re inspired, it’s a great way to explore all the trends Huang will touch on at GTC — and more.

For more than a decade, GTC has been the place to see innovations that have changed the world. More than 100,000 developers, researchers and IT professionals have already registered to join this year’s conference.

Registration is free and open to all.

Where to Watch

Mark the date — April 12 at 8:30 a.m. PT — on your calendar. Here’s where you can watch live:

U.S.:

Latin America:

Asia:

See you there.

NVIDIA-Powered Systems Ready to Bask in Ice Lake

Data-hungry workloads such as machine learning and data analytics have become commonplace. To cope with these compute-intensive tasks, enterprises need accelerated servers that are optimized for high performance.

Intel’s 3rd Gen Intel Xeon Scalable processors (code-named “Ice Lake”), launched today, are based on a new architecture that enables a major leap in performance and scalability. These new systems are an ideal platform for enterprise accelerated computing, when enhanced with NVIDIA GPUs and networking, and include features that are well-suited for GPU-accelerated applications.

Ice Lake platform benefits for accelerated computing.

The move to PCIe Gen 4 doubles the data transfer rate from the prior generation, and now matches the native speed of NVIDIA Ampere architecture-based GPUs, such as the NVIDIA A100 Tensor Core GPU. This speeds throughput to and from the GPU, which is especially important to machine learning workloads that involve vast amounts of training data. This also improves transfer speeds for data-intensive tasks like 3D design for NVIDIA RTX Virtual Workstations accelerated by the powerful NVIDIA A40 data center GPU and others.
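
The doubling is simple to quantify. Here is a back-of-the-envelope calculation of theoretical one-direction bandwidth for a x16 slot, using the published per-lane transfer rates and 128b/130b encoding; real-world throughput is somewhat lower.

```python
def pcie_x16_bandwidth_gb_s(transfer_rate_gt_s, lanes=16):
    """Theoretical one-direction bandwidth in GB/s, assuming 128b/130b encoding."""
    return transfer_rate_gt_s * (128 / 130) * lanes / 8  # GT/s -> GB/s per lane, times lanes

print(f"PCIe Gen3 x16: {pcie_x16_bandwidth_gb_s(8):.1f} GB/s")    # ~15.8 GB/s
print(f"PCIe Gen4 x16: {pcie_x16_bandwidth_gb_s(16):.1f} GB/s")   # ~31.5 GB/s
```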

Faster PCIe performance also accelerates GPU direct memory access transfers. Faster I/O communication of video data between the GPU and GPUDirect for Video-enabled devices delivers a powerful solution for live broadcasts.

The higher data rate additionally enables networking speeds of 200Gb/s, such as in the NVIDIA ConnectX family of HDR 200Gb/s InfiniBand adapters and 200Gb/s Ethernet NICs, as well as the upcoming NDR 400Gb/s InfiniBand adapter technology.

The Ice Lake platform supports 64 PCIe lanes, so more hardware accelerators – including GPUs and networking cards – can be installed in the same server, enabling a greater density of acceleration per host. This also means that greater user density can be achieved for multimedia-rich VDI environments accelerated by the latest NVIDIA GPUs and NVIDIA Virtual PC software.

These enhancements allow for unprecedented scaling of GPU acceleration. Enterprises can tackle the biggest jobs by using more GPUs within a host, as well as more effectively connecting GPUs across multiple hosts.

Intel has also made Ice Lake’s memory subsystem more performant. The number of DDR4 memory channels has increased from six to eight, and the maximum memory data rate has risen to 3,200 MHz. This allows for greater bandwidth of data transfer from main memory to GPUs and the network, which can increase throughput for data-intensive workloads.
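
Those two changes compound. As a rough guide, peak theoretical memory bandwidth per socket is the number of channels times the transfer rate times 8 bytes per transfer:

```python
channels = 8                    # up from six on the prior generation
transfers_per_second = 3.2e9    # DDR4-3200
bytes_per_transfer = 8          # 64-bit channel width

peak_gb_s = channels * transfers_per_second * bytes_per_transfer / 1e9
print(f"Peak theoretical memory bandwidth per socket: {peak_gb_s:.1f} GB/s")  # 204.8 GB/s
```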

Finally, the processor itself has improved in ways that will benefit accelerated computing workloads. The 10-15 percent increase in instructions per clock can lead to an overall performance improvement of up to 40 percent for the CPU portion of accelerated workloads. There are also more cores, with up to 40 in the top-of-the-line Xeon Platinum processors. This will enable a greater density of virtual desktop sessions per host, so that GPU investments in a server can go further.

We’re excited to see partners already announcing new Ice Lake systems accelerated by NVIDIA GPUs, including Dell Technologies with the Dell EMC PowerEdge R750xa, purpose built for GPU acceleration, and new Lenovo ThinkSystem Servers, built on 3rd Gen Intel Xeon Scalable processors and PCIe Gen4, with several models powered by NVIDIA GPUs.

Intel’s new Ice Lake platform, with accelerator hardware, is a great choice for enterprise customers planning to update their data centers. Its architectural enhancements enable enterprises to run accelerated applications with better performance at data center scale, and our mutual customers will be able to quickly experience its benefits.

Visit the NVIDIA Qualified Server Catalog to see a list of GPU-accelerated server models with Ice Lake CPUs, and be sure to check back as more systems are added.


World of Difference: GTC to Spotlight AI Developers in Emerging Markets

Startups don’t just come from Silicon Valley — they hail from Senegal, Saudi Arabia, Pakistan, and beyond. And hundreds will take the stage at the GPU Technology Conference.

GTC, running April 12-16, will spotlight developers and startups advancing AI in Africa, Latin America, Southeast Asia, and the Middle East. Registration is free, and provides access to 1,500+ talks, as well as dozens of hands-on training sessions, demos and networking events.

Several panels and talks will focus on supporting developer ecosystems in emerging markets and opening access for communities to solve pressing regional problems with AI.

NVIDIA Inception, an acceleration platform for AI and data science startups, will host an Emerging Markets Pavilion where attendees can catch on-demand lightning talks from startup founders in healthcare, retail, energy and financial services. And developers from around the world will have access to online training programs through the NVIDIA Deep Learning Institute.

Beyond GTC, NVIDIA is exploring opportunities and pathways to reach data science and deep learning developers around the world. We’re working with groups like the data science competition platform Zindi to sponsor AI hackathons in Africa — and so are our NVIDIA Inception members, like Instadeep, an AI startup with offices in Tunisia, Nigeria, Kenya, England and France.

Programs like these, including the NVIDIA Developer Program, aim to support the next generation of developers, innovators and leaders with the resources to drive AI breakthroughs worldwide.

Focus on Emerging Developer Communities

While AI developers and startup founders come from diverse backgrounds and places, not all receive equivalent support and opportunities. At GTC, speakers from NVIDIA, Amazon Web Services, Google and Microsoft will join nonprofit founders and startup CEOs to discuss how we can bolster developer ecosystems in emerging markets.

Session topics include:

Startups Star in the NVIDIA Inception Pavilion

The NVIDIA Inception program includes more than 7,500 AI and data science startups from around the world. More than 300 will present at GTC.

It all kicks off after NVIDIA CEO Jensen Huang’s opening keynote on April 12, with a panel led by Jeff Herbst, our VP of business development and head of NVIDIA Inception.

The panel, AI Startups: NVIDIA Inception Insights and Trends from Around the World, will discuss efforts and challenges to nurture a broad cohort of young companies, including those from underserved and underrepresented markets. In addition to reps from NVIDIA, the panel will include Noga Tal, global director of partnerships at Microsoft for Startups; Maribel Lopez, co-founder of the Emerging Technology Research Council; and Badr Idrissi, CEO of Atlan Space, a Morocco-based NVIDIA Inception member.

Hosted by NVIDIA Inception, a virtual Emerging Markets Pavilion will feature global startups including:

Visit the GTC site to learn more and register.
