Innovators, Researchers, Industry Leaders: Meet the Women Headlining at GTC

An A-list of female AI researchers and industry executives will take the stage at next month’s GPU Technology Conference to share the latest breakthroughs in every industry imaginable.

Recognized in Forbes as a top conference for women to attend to further their careers in AI, GTC runs online, April 12-16, and is set to draw tens of thousands of technologists, business leaders and creators from around the world.

GTC will kick off with a livestreamed keynote by NVIDIA founder and CEO Jensen Huang, and feature 1,300 speaker sessions, including hundreds from standout women speakers.

Industry, Public Sector Luminaries

AI and accelerated applications are driving innovations across the public and private sectors, spanning healthcare, robotics, design and more. At GTC, Danielle Merfeld, vice president and chief technology officer of GE Renewable Energy, will present a session, as will Jie Chen, managing director in corporate model risk at Wells Fargo.

Audi’s Hildegard Wortmann, member of the company’s board of management, will deliver a talk on digitalization, electrification and sustainability in the automotive industry. And Vicki Dobbs Beck, executive in charge at visual effects studio ILMxLAB, will speak about immersive entertainment.

Speakers from the federal sector include Lauren Knausenberger, chief information officer of the U.S. Air Force, and Suzette Kent, former federal chief information officer of the United States.

Pioneering AI, HPC Researchers

Several research luminaries will speak at GTC, including Rommie Amaro of the University of California at San Diego, winner of a special Gordon Bell Prize for work fighting COVID-19. So too will another pioneer in AI and healthcare, Daphne Koller, adjunct professor of computer science and pathology at Stanford University.

Leaders from NVIDIA Research — including Anima Anandkumar, director of machine learning research; Sanja Fidler, director of AI; and Kate Kallot, head of emerging areas — will present groundbreaking work. Raquel Urtasun, professor at the University of Toronto, will discuss her work in machine perception for self-driving cars.

Female Founders

The number of female-founded companies has doubled since 2009, a Crunchbase report found in 2019. GTC features female founders from around the world, such as Nigeria-based Ada Nduka Oyom, founder of the nonprofit organization She Code Africa.

Nora Khaldi, founder and CEO of Ireland-based startup Nuritas, will speak at GTC about how AI startups are revolutionizing healthcare around the world. And from Silicon Valley, Inga Petryaevskaya, co-founder and CEO of virtual reality startup Tvori, will present a talk on the role of AI startups in media and entertainment.

Advancing Inclusion at GTC

In addition to women speakers, GTC has grown its representation of female attendees by almost 4x since 2017 — and strengthened support for underrepresented developers and scientists through education, training and networking opportunities.

We’ve partnered with a variety of women-in-technology groups, and boosted GTC participation across underrepresented developer communities by working with organizations such as the National Society of Black Engineers, Black in AI and Latinx in AI to offer access to GTC content and online training from NVIDIA’s Deep Learning Institute.

This GTC, NVIDIA’s Women in Technology employee community is hosting an all-female panel sharing tools, strategies and frameworks to keep up with the pace of AI innovation during the pandemic. The group will also hold a networking event hosted by NVIDIA women and fellow industry leaders.

Registration for GTC is free, and provides access to more than 1,300 talks as well as dozens of hands-on training sessions, demos and networking events.

Main image shows GTC speakers (clockwise from top left) Kate Kallot, Raquel Urtasun, Jie Chen and Daphne Koller.


How Suite It Is: NVIDIA and VMware Deliver AI-Ready Enterprise Platform

As enterprises modernize their data centers to power AI-driven applications and data science, NVIDIA and VMware are making it easier than ever to develop and deploy a multitude of different AI workloads in the modern hybrid cloud.

The companies have teamed up to optimize the just-announced update to vSphere — VMware vSphere 7 Update 2 — for AI applications with the NVIDIA AI Enterprise software suite (see Figure 1 below). This combination enables scale-out, multi-node performance and compatibility for a vast set of accelerated CUDA applications, AI frameworks, models and SDKs for the hundreds of thousands of enterprises that use vSphere for server virtualization.

Through this first-of-its-kind industry collaboration, AI researchers, data scientists and developers gain the software they need to deliver successful AI projects, while IT professionals acquire the ability to support AI using the tools they’re most familiar with for managing large-scale data centers, without compromise.

Figure 1: NVIDIA AI Enterprise for VMware vSphere runs on NVIDIA-Certified Systems to make it easy for IT to deploy virtualized AI at scale.

One Suite Package for AI Enterprise

NVIDIA AI Enterprise is a comprehensive suite of enterprise-grade AI tools and frameworks that optimize business processes and boost efficiency for a broad range of key industries, including manufacturing, logistics, financial services, retail and healthcare. With NVIDIA AI Enterprise, scientists and AI researchers have easy access to NVIDIA’s leading AI tools to power development across projects ranging from advanced diagnostics to smart factories to fraud detection.

The solution overcomes the complexity of deploying individual AI applications, as well as the potential failures that can result from having to manually provision and manage different applications and infrastructure software that can often be incompatible.

With NVIDIA AI Enterprise running on vSphere, customers can avoid silos of AI-specific systems that are difficult to manage and secure. They can also mitigate the risks of shadow AI deployments, where data scientists and machine learning engineers procure resources outside of the IT ecosystem.

Licensed by NVIDIA, AI Enterprise for vSphere is supported on NVIDIA-Certified Systems, which include mainstream servers from Dell Technologies, HPE, Lenovo and Supermicro. This allows even the most modern, demanding AI applications to be supported just like traditional enterprise workloads, on a common infrastructure and with familiar data center management tools like VMware vCenter.

IT can manage availability, optimize resource allocation and enable the security of its valuable IP and customer data for AI workloads running on premises and in the hybrid cloud.

Scalable, Multi-Node, Virtualized AI Performance

NVIDIA AI Enterprise enables virtual workloads to run at near bare-metal performance on vSphere with support for the record-breaking performance of NVIDIA A100 GPUs for AI and data science (see Chart 1 below). AI workloads can now scale across multiple nodes, allowing even the largest deep learning training models to run on VMware Cloud Foundation.

Chart 1: With NVIDIA AI Enterprise for vSphere, distributed deep learning training scales linearly, across multiple nodes, and delivers performance that is indistinguishable from bare metal.
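
For a sense of what such a scale-out training job looks like in code, below is a minimal multi-node sketch using PyTorch’s DistributedDataParallel over NCCL. It’s illustrative only: the model and data are toy placeholders, it assumes the job is launched with torchrun on each node, and it isn’t NVIDIA AI Enterprise-specific tooling.

```python
# Minimal multi-node training sketch with PyTorch DistributedDataParallel.
# Launch with torchrun on every node, e.g.:
#   torchrun --nnodes=2 --nproc_per_node=8 train.py
# The model and data below are toy placeholders.
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

dist.init_process_group(backend="nccl")        # NCCL handles GPU-to-GPU communication
local_rank = int(os.environ["LOCAL_RANK"])     # set by torchrun
torch.cuda.set_device(local_rank)

model = torch.nn.Linear(1024, 10).cuda(local_rank)
model = DDP(model, device_ids=[local_rank])    # wraps the model for gradient all-reduce
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

for step in range(100):
    x = torch.randn(64, 1024, device=f"cuda:{local_rank}")
    y = torch.randint(0, 10, (64,), device=f"cuda:{local_rank}")
    loss = torch.nn.functional.cross_entropy(model(x), y)
    optimizer.zero_grad()
    loss.backward()                            # gradients averaged across all GPUs and nodes
    optimizer.step()

dist.destroy_process_group()
```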

AI workloads come in all sizes with a wide variety of data requirements. Some process images, like live traffic reporting systems or online shopping recommender systems. Others are text-based, like a customer service support system powered by conversational AI.

Training an AI model can be incredibly data intensive and requires scale-out performance across multiple GPUs in multiple nodes. Running inference on a model in deployment usually requires fewer computing resources and may not need the power of a whole GPU.

Through the collaboration between NVIDIA and VMware, vSphere is the only server virtualization software to provide hypervisor support for live migration with NVIDIA Multi-Instance GPU technology. With MIG, each A100 GPU can be partitioned into up to seven instances at the hardware level to maximize efficiency for workloads of all sizes.
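
To show what that partitioning exposes to software, here’s a small sketch that lists the MIG instances carved out of each GPU using the pynvml bindings. It assumes a MIG-enabled A100 and a recent NVIDIA driver, and it’s an illustration rather than part of the NVIDIA AI Enterprise suite.

```python
# Illustrative only: enumerate the MIG instances carved out of each GPU with
# the pynvml bindings. Assumes a MIG-enabled A100 and a recent NVIDIA driver.
import pynvml

pynvml.nvmlInit()
try:
    for i in range(pynvml.nvmlDeviceGetCount()):
        gpu = pynvml.nvmlDeviceGetHandleByIndex(i)
        try:
            current_mode, _pending = pynvml.nvmlDeviceGetMigMode(gpu)
        except pynvml.NVMLError:
            continue                      # GPU doesn't support MIG
        if not current_mode:
            continue                      # MIG disabled on this GPU
        slots = pynvml.nvmlDeviceGetMaxMigDeviceCount(gpu)   # up to 7 on A100
        for j in range(slots):
            try:
                mig = pynvml.nvmlDeviceGetMigDeviceHandleByIndex(gpu, j)
            except pynvml.NVMLError:
                continue                  # slot not populated
            mem = pynvml.nvmlDeviceGetMemoryInfo(mig)
            print(f"GPU {i} / MIG {j}: {mem.total / 1e9:.1f} GB")
finally:
    pynvml.nvmlShutdown()
```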

Extensive Resources for AI Applications and Infrastructure

NVIDIA AI Enterprise includes key technologies and software from NVIDIA for the rapid deployment, management and scaling of AI workloads in virtualized data centers running on VMware Cloud Foundation.

NVIDIA AI Enterprise is a certified, end-to-end suite of key NVIDIA AI technologies and applications as well as enterprise support services.

Customers who would like to adopt NVIDIA AI Enterprise as they upgrade to vSphere 7 U2 can contact NVIDIA and VMware to discuss their needs.

For more information on bringing AI to VMware-based data centers, read the NVIDIA developer blog and the VMware vSphere 7 U2 blog.

To further develop AI expertise with NVIDIA and VMware, register for free for GTC 2021.


Artists: Unleash Your Marble Arts in NVIDIA Omniverse Design Challenge

Artists, this is your chance to push past creative limits — and win great prizes — while exploring NVIDIA Omniverse through a new design contest.

Called “Create with Marbles,” the contest is set in Omniverse, the groundbreaking platform for virtual collaboration, creation and simulation. It’s based on the Marbles RTX demo, which first previewed at GTC last year and showcases how complex physics can be simulated in a real-time, ray-traced world.

In the design challenge, artists can experience how Omniverse is transforming the future of real-time graphics. The best designed sets will win fantastic prizes: the top three entries will receive an NVIDIA RTX A6000, GeForce RTX 3090 and GeForce RTX 3080 GPU, respectively.

Creators from all over the world are invited to experiment with Omniverse and take the first step by staging a scene with all the objects in the original Marbles demo. No modeling or animation is required — challenge participants will have access to over 100 Marbles assets, which they can use to easily assemble their own scene.

The “Create with Marbles” contest is the perfect opportunity for creators to dive into Omniverse and familiarize themselves with the platform. Install Omniverse Create to access any of the available Marbles assets. Use the tools built into Omniverse Create, or pick your favorite content creation application that has an Omniverse Connector. Then render photorealistic graphics using the RTX Renderer, and submit your final scene to showcase your work.

Explore Omniverse and use existing assets to unleash your artistic creativity through scene composition, camera settings and lighting.

Entries will be judged on various criteria, including the use of Omniverse Create and Marbles assets, the quality of the final render and overall originality.

Guest judges include Rachel Rose, the research and development supervisor at Industrial Light & Magic; Yangtian Li, senior concept artist at Singularity 6; and Lynn Yang, freelance concept artist and matte painter.

Deadline for submissions is April 2, 2021. The winners of the contest will be announced on the contest winners page in mid-April.

Learn more about the “Create with Marbles” challenge, and start creating in Omniverse today.


In Genomics Breakthrough, Harvard, NVIDIA Researchers Use AI to Spot Active Areas in Cell DNA

Like a traveler who overpacks a suitcase with a closet’s worth of clothes, most cells in the body carry around a complete copy of a person’s DNA, with billions of base pairs crammed into the nucleus.

But an individual cell pulls out only the subsection of genetic apparel that it needs to function, with each cell type — such as liver, blood or skin cells — activating different genes. The regions of DNA that determine a cell’s unique function are opened up for easy access, while the rest remains wadded up around proteins.

Researchers from NVIDIA and Harvard University’s Department of Stem Cell and Regenerative Biology have developed a deep learning toolkit to help scientists study these accessible regions of DNA, even when sample data is noisy or limited — which is often the case in the early detection of cancer and other genetic diseases.

AtacWorks, featured today in Nature Communications, both denoises sequencing data and identifies areas with accessible DNA, and can run inference on a whole genome in just half an hour with NVIDIA Tensor Core GPUs. It’s available on NGC, NVIDIA’s hub of GPU-optimized software.

AtacWorks works with ATAC-seq, a popular method for finding open areas in the genome in both healthy and diseased cells, enabling critical insights for drug discovery.

ATAC-seq typically requires tens of thousands of cells to get a clean signal — making it very difficult to investigate rare cell types, like the stem cells that produce blood cells and platelets. By applying AtacWorks to ATAC-seq data, the same quality of results can be achieved with just tens of cells, enabling scientists to learn more about the sequences active in rare cell types, and to identify mutations that make people more vulnerable to diseases.

“With AtacWorks, we’re able to conduct single-cell experiments that would typically require 10 times as many cells,” says paper co-author Jason Buenrostro, assistant professor at Harvard and the developer of the ATAC-seq method. “Denoising low-quality sequencing coverage with GPU-accelerated deep learning has the potential to significantly advance our ability to study epigenetic changes associated with rare cell development and diseases.”

Needle in a Noisy Haystack

Buenrostro pioneered ATAC-seq in 2013 as a way to scan the epigenome and locate accessible sites within chromatin, the packaged form of DNA in the nucleus. The method, popular among leading genomics research labs and pharmaceutical companies, measures the intensity of a signal at every region across the genome. Peaks in the signal correspond to areas with open DNA.

The fewer the cells available, the noisier the data appears — making it difficult to identify which areas of the DNA are accessible.

AtacWorks, a PyTorch-based convolutional neural network, was trained on labeled pairs of matching ATAC-seq datasets: one high quality and one noisy. Given a downsampled copy of the data, the model learned to predict an accurate high-quality version and identify peaks in the signal.
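
To make the setup concrete, here’s a heavily simplified PyTorch sketch in the spirit of that design: a 1D convolutional trunk with one head that regresses the denoised signal and another that scores each position as peak or background. The layer sizes and loss terms are illustrative, not the published AtacWorks architecture (the real model is available on NGC).

```python
# Minimal sketch of the idea behind AtacWorks-style denoising: a 1D CNN that
# maps a noisy ATAC-seq coverage track to (a) a denoised signal and (b) a
# per-position peak probability. Sizes are illustrative, not the published model.
import torch
import torch.nn as nn

class DenoisePeakNet(nn.Module):
    def __init__(self, channels=32, kernel_size=51):
        super().__init__()
        pad = kernel_size // 2
        self.trunk = nn.Sequential(
            nn.Conv1d(1, channels, kernel_size, padding=pad), nn.ReLU(),
            nn.Conv1d(channels, channels, kernel_size, padding=pad), nn.ReLU(),
        )
        self.signal_head = nn.Conv1d(channels, 1, 1)   # regression: denoised coverage
        self.peak_head = nn.Conv1d(channels, 1, 1)     # classification: peak vs. background

    def forward(self, x):                  # x: (batch, 1, positions)
        h = self.trunk(x)
        return self.signal_head(h), torch.sigmoid(self.peak_head(h))

model = DenoisePeakNet()
noisy = torch.randn(4, 1, 10_000)          # toy batch of noisy coverage windows
denoised, peak_prob = model(noisy)

# Training pairs a downsampled track with its high-depth counterpart:
# MSE on the denoised signal plus binary cross-entropy on labeled peaks.
mse = nn.MSELoss()(denoised, torch.randn_like(denoised))
bce = nn.BCELoss()(peak_prob, torch.randint(0, 2, peak_prob.shape).float())
loss = mse + bce
loss.backward()
```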

The researchers found that using AtacWorks, they could identify accessible chromatin in a noisy sequence of 1 million reads nearly as well as traditional methods did with a clean dataset of 50 million reads. With this capability, scientists could conduct research with a smaller number of cells, significantly reducing the cost of sample collection and sequencing.

Analysis, too, becomes faster and cheaper with AtacWorks: Running on NVIDIA Tensor Core GPUs, the model took under 30 minutes for inference on a whole genome, a process that would take 15 hours on a system with 32 CPU cores.

“With very rare cell types, it’s not possible to study differences in their DNA using existing methods,” said NVIDIA researcher Avantika Lal, lead author on the paper. “AtacWorks can help not only drive down the cost of gathering chromatin accessibility data, but also open up new possibilities in drug discovery and diagnostics.”

Enabling Insights into Disease, Drug Discovery

Looking at accessible regions of DNA could help medical researchers identify specific mutations or biomarkers that make people more vulnerable to conditions including Alzheimer’s, heart disease or cancers. This knowledge could also inform drug discovery by giving researchers a better understanding of the mechanisms of disease.

In the Nature Communications paper, the Harvard researchers applied AtacWorks to a dataset of stem cells that produce red and white blood cells — rare subtypes that couldn’t be studied with traditional methods.

With a sample set of just 50 cells, the team was able to use AtacWorks to identify distinct regions of DNA associated with cells that develop into white blood cells, and separate sequences that correlate with red blood cells.

Learn more about NVIDIA’s work in healthcare at the GPU Technology Conference, April 12-16. Registration is free. The healthcare track includes 16 live webinars, 18 special events, and over 100 recorded sessions, including a talk by Lal titled Deep Learning and Accelerated Computing for Epigenomic Data.

Subscribe to NVIDIA healthcare news

The DOI for this Nature Communications paper is 10.1038/s41467-021-21765-5.


Juicing AI: University of Florida Taps Computer Vision to Combat Citrus Disease

Florida orange juice is getting a taste of AI.

With the Sunshine State’s $9 billion annual citrus crops plagued by a fruit-souring disease, researchers and businesses are tapping AI to help rescue the nation’s largest producer of orange juice.

University of Florida researchers are developing AI applications for agriculture. And the technology — computer vision for smart sprayers — is now being licensed and deployed in pilot tests by CCI, an agricultural equipment company.

The efforts promise to help farmers combat what’s known as “citrus greening,” the disease brought on by bacteria from the Asian citrus psyllid insect hitting farms worldwide.

Citrus greening causes patchy leaves and green fruit and can quickly decimate orchards.

The agricultural equipment supplier has seen farmers lose one-third of Florida’s orchard acreage to the onslaught of citrus greening.

“It’s having a huge impact on the state of Florida, California, Brazil, China, Mexico — the entire world is battling a citrus crisis,” said Yiannis Ampatzidis, assistant professor at UF’s Department of Agricultural and Biological Engineering.

Fertilizing Precision Agriculture

Ampatzidis works with a team of researchers focused on automation in agriculture. They develop AI applications to forecast crop yields and reduce pesticide use. The team’s image recognition models are run on the Jetson AI platform in the field for inference.

“The goal is to use Jetson Xavier to detect the size of the tree and the leaf density to instantly optimize the flow of the nozzles on sprayers for farming,” said Ampatzidis. “It also allows us to count fruit density, predict yield, and study water usage and pH levels.”

The growing popularity of organic produce and the adoption of more sustainable farming practices have drawn a field of startups plowing AI for benefits to businesses and the planet. John Deere-owned Blue River, FarmWise, SeeTree and Smart Ag are just some of the agriculture companies adopting NVIDIA GPUs for training and inference.

Like many, UF and CCI are developing applications for deployment on the NVIDIA Jetson edge AI platform. And UF has wider ambitions for fostering AI development that benefits the state.

Last July, UF and NVIDIA hatched plans to build one of the world’s fastest AI supercomputers in academia, delivering 700 petaflops of processing power. Built with NVIDIA DGX systems and NVIDIA Mellanox networking, HiPerGator AI is now online to power UF’s precision agriculture research.  The new supercomputer was made possible by a $25 million donation from alumnus and NVIDIA founder Chris Malachowsky and $25 million in hardware, software, training and services from NVIDIA.

UF is a member of the NVIDIA Applied Research Accelerator Program, which supports applied research in coordination with businesses relying on NVIDIA platforms for GPU-accelerated application deployments.

Deploying Robotic Sprayers

Citrus greening has required farmers to act quickly to remove diseased trees to prevent its advances. Many orchards now have gaps in their rows of trees. As a result, conventional sprayers that apply agrochemicals uniformly along entire rows will often overspray, wasting resources and creating unnecessary environmental contamination.

UF researchers developed a sensor system of lidar and cameras for sprayers used in orchards. These sensors feed into the NVIDIA Jetson AGX Xavier, which can process split-second inference on whether the sprayer is facing a tree to spray or not, enabling autonomous spraying.
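
A minimal sketch of that kind of decision loop might look like the following. It’s purely illustrative, not the UF or CCI software: the model file and the set_nozzle() helper are hypothetical stand-ins for the trained classifier and the real actuator interface.

```python
# Illustrative sketch only: a Jetson-style control loop that classifies each
# camera frame as "tree" or "gap" and gates a sprayer accordingly.
# The model file and set_nozzle() helper are hypothetical.
import cv2
import torch

model = torch.jit.load("tree_classifier.pt").eval().to("cuda")  # hypothetical TorchScript model

def set_nozzle(open_valve: bool):
    # Placeholder for the real actuator interface (e.g., GPIO or CAN bus).
    print("SPRAY" if open_valve else "HOLD")

cap = cv2.VideoCapture(0)                      # camera facing the tree row
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    img = cv2.resize(frame, (224, 224))
    x = torch.from_numpy(img).permute(2, 0, 1).float().div(255).unsqueeze(0).to("cuda")
    with torch.no_grad():
        prob_tree = torch.sigmoid(model(x)).item()
    set_nozzle(prob_tree > 0.5)                # spray only when a tree is in front
cap.release()
```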

The system can adjust in real time to turn off or on the application of crop protection products or fertilizers as well as adjust the amount sprayed based on the plant’s size, said Kieth Hollingsworth, a CCI sales specialist.

“It cuts down on overspray and on wasted material that ultimately gets washed into the groundwater. We can also predict yield based on the oranges we see on the tree,” said Hollingsworth.

Commercializing AgTech AI

CCI began working with UF eight years ago. In the past couple of years, the company has been working with the university to upgrade its infrared laser-based spraying system to one with AI.

And customers are coming to CCI for novel ways to attack the problem, said Hollingsworth.

Working with NVIDIA’s Applied Research Accelerator Program, CCI has gotten a boost with technical guidance on Jetson Xavier that has sped its development.

Citrus industry veteran Hollingsworth says AI is a useful tool in the field to wield against the crop disease that has taken some of the sweetness out of orange juice over the years.

“People have no idea how complex of a crop oranges are to grow and what it takes to produce and squeeze the juice that goes into a glass of orange juice,” said Hollingsworth.

Academic researchers can apply now for the Applied Research Accelerator Program.

Photo credit: Samuel Branch on Unsplash


What Is a Cluster? What Is a Pod?

Everything we do on the internet — which is just about everything we do these days — depends on the work of clusters, which are also called pods.

When we stream a hot new TV show, order a pair of jeans or Zoom with grandma, we use clusters. You’re reading this story thanks to pods.

So, What Is a Cluster? What Is a Pod?

A cluster or a pod is simply a set of computers linked by high-speed networks into a single unit.

Computer architects must have reached, at least unconsciously, for terms rooted in nature. Pea pods and dolphin superpods, like today’s computer clusters, show the power of many individuals working as a team.

The Roots of Pods and Superpods

The links go deeper. Botanists say pods not only protect and nourish individual peas, they can reallocate resources from damaged seeds to thriving ones. Similarly, a load balancer moves jobs off a failed compute node to a functioning one.

The dynamics aren’t much different for dolphins.

Working off the coast of the Bahamas, veteran marine biologist Denise Herzing sees the same pods, family groups of perhaps 20 dolphins, nearly every day. And once she encountered a vastly larger group.

“Years ago, off the Baja peninsula, I saw a superpod. It was very exciting and a little overwhelming because as a researcher I want to observe a small group closely, not a thousand animals spread over a large area,” said the founder of the Wild Dolphin Project.

For dolphins, superpods are vital. “They protect the travelers by creating a huge sensory system, a thousand sets of ears that listen for predators like one super-sensor,” she said, noting the parallels with the clusters used in cloud computing today.

A data center with multiple clusters or pods can span multiple buildings, and run as a single system.

Pods Sprout in Early Data Centers

As companies began computerizing their accounting systems in the early 1960s, they instinctively ganged multiple computers together so they would have backups in case one failed, according to Greg Pfister, a former IBM technologist and an expert on clusters.

“I’m pretty sure NCR, MetLife and a lot of people did that kind of thing,” said Pfister, author of In Search of Clusters, considered by some the bible of the field.

In May 1983, Digital Equipment Corp. packed several of its popular 32-bit VAX minicomputers into what it called a VAXcluster. Each computer ran its own operating system, but they shared other resources, providing IT users with a single system image.

Diagram of an early PC-based cluster.

By the late 1990s, the advent of low-cost PC processors, Ethernet networks and Linux inspired at least eight major research projects that built clusters. NASA designed one with 16 PC motherboards on two 10 Mbit/second networks and dubbed it Beowulf, imagining it slaying the giant mainframes and massively parallel systems of the day.

Cluster Networks Need Speed

Researchers found clusters could be assembled quickly and offered high performance at low cost, as long as they used high-speed networks to eliminate bottlenecks.

Another late-’90s project, Berkeley’s Network of Workstations (NoW), linked dozens of Sparc workstations on the fastest interconnects of the day. The team created an image of a pod of small fish eating a larger fish to illustrate their work.

Researchers behind Berkeley’s NoW project envisioned clusters of many small systems out-performing a single larger computer.

One researcher, Eric Brewer, saw that clusters were ideal for emerging internet apps, so he used the 100-server NoW system as a search engine.

“For a while we had the best search engine in the world running on the Berkeley campus,” said David Patterson, a veteran of NoW and many computer research projects at Berkeley.

The work was so successful, Brewer co-founded Inktomi, an early search engine built on a NoW-inspired cluster of 1,000 systems. It had many rivals, including a startup called Google with roots at Stanford.

“They built their network clusters out of PCs and defined a business model that let them grow and really improve search quality — the rest was history,” said Patterson, co-author of a popular textbook on computing.

Today, clusters or pods are the basis of most of the world’s TOP500 supercomputers as well as virtually all cloud computing services. And most use NVIDIA GPUs, but we’re getting ahead of the story.

Pods vs. Clusters: A War of Words

While computer architects called these systems clusters, some networking specialists preferred the term pod. Turning the biological term into a tech acronym, they said POD stood for a “point of delivery” of computing services.

The term pod gained traction in the early days of cloud computing. Service providers raced to build ever larger, warehouse-sized systems often ordering entire shipping containers, aka pods, of pre-configured systems they could plug together like Lego blocks.

An early prototype container delivered to a cloud service provider.

More recently, the Kubernetes group adopted the term pod. They define a software pod as “a single container or a small number of containers that are tightly coupled and that share resources.”
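
A small sketch with the official Kubernetes Python client shows what such a software pod bundles: two tightly coupled containers sharing a volume. The names and images are arbitrary, and the object is only constructed here, not deployed to a cluster.

```python
# Minimal sketch, using the official Kubernetes Python client, of what a
# software "pod" groups: two tightly coupled containers sharing a volume.
# The pod object is only constructed; creating it would need a live cluster.
from kubernetes import client

shared = client.V1Volume(name="shared-data", empty_dir=client.V1EmptyDirVolumeSource())
mount = client.V1VolumeMount(name="shared-data", mount_path="/data")

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="example-pod"),
    spec=client.V1PodSpec(
        containers=[
            client.V1Container(name="app", image="nginx:1.21", volume_mounts=[mount]),
            client.V1Container(name="sidecar", image="busybox:1.35",
                               command=["sh", "-c", "tail -f /dev/null"],
                               volume_mounts=[mount]),
        ],
        volumes=[shared],
    ),
)
print(pod.metadata.name, [c.name for c in pod.spec.containers])
```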

Industries like aerospace and consumer electronics adopted the term pod, too, perhaps to give their concepts an organic warmth. Among the most iconic examples are the iPod, the forerunner of the iPhone, and the single-astronaut vehicle from the movie 2001: A Space Odyssey.

When AI Met Clusters

In 2012, cloud computing services heard the Big Bang of AI, the genesis of a powerful new form of computing. They raced to build giant clusters of GPUs that, thanks to their internal clusters of accelerator cores, could process huge datasets to train and run neural networks.

To help spread AI to any enterprise data center, NVIDIA packs GPU clusters on InfiniBand networks into NVIDIA DGX Systems. A reference architecture lets users easily scale from a single DGX system to an NVIDIA DGX POD or even a supercomputer-class NVIDIA DGX SuperPOD.

For example, Cambridge-1, in the United Kingdom, is an AI supercomputer based on a DGX SuperPOD, dedicated to advancing life sciences and healthcare. It’s one of many AI-ready clusters and pods spreading like never before. They’re sprouting like AI itself, in many shapes and sizes in every industry and business.


GFN Thursday — 21 Games Coming to GeForce NOW in March

Guess what’s back? Back again? GFN Thursday. Tell a friend.

Check out this month’s list of all the exciting new titles and classic games coming to GeForce NOW in March.

First, let’s get into what’s coming today.

Don’t Hesitate

It wouldn’t be GFN Thursday if members didn’t have new games to play. Here’s what’s new to GFN starting today:


Loop Hero (day-and-date release on Steam)

Equal parts roguelike, deck-builder and auto battler, Loop Hero challenges you to think strategically as you explore each randomly generated loop path and fight to defeat The Lich. PC Gamer gave this indie high praise, saying, “don’t sleep on this brilliant roguelike.”


Disgaea PC (Steam)

The turn-based strategy RPG classic lets you amass your evil hordes and become the new Overlord. With more than 40 character types and PC-specific features, there’s never been a better time to visit the Netherworld.

Members can also look for the following games joining GeForce NOW later today:

  • Legends of Aria (Steam)
  • The Dungeon Of Naheulbeuk: The Amulet Of Chaos (Steam)
  • Wargame: Red Dragon (Free on Epic Games Store, March 4-11)
  • WRC 8 FIA World Rally Championship (Steam)

What’s Coming in March

We’ve got a great list of exciting titles coming soon to GeForce NOW. You won’t want to miss:

Spacebase Startopia (Steam and Epic Games Store)

An original mixture of economic simulation and empire building strategy paired with classic RTS skirmishes and a good dose of humor to take the edge off.

Wrench (Steam)

Prepare and maintain race cars in an extraordinarily detailed mechanic simulator. Extreme attention has been paid to even the smallest components, including fasteners that are accurate to the thread pitch and install torque.

And that’s not all — check out even more games coming to GFN in March:

  • Door Kickers (Steam)
  • Endzone – A World Apart (Steam)
  • Monopoly Plus (Steam)
  • Monster Energy Supercross – The Official Videogame 4 (Steam)
  • Narita Boy (Steam)
  • Overcooked!: All You Can Eat (Steam)
  • Pascal’s Wager – Definitive Edition (Steam)
  • System Shock: Enhanced Edition (Steam)
  • Thief Gold (Steam)
  • Trackmania United Forever (Steam)
  • Uno (Steam)
  • Workers & Resources: Soviet Republic (Steam)
  • Worms Reloaded (Steam)

In Case You Missed It

Remember that one time in February when we told you that 30 titles were coming to GeForce NOW? Actually, it was even more than that — 18 additional games joined the service, bringing the total in February to nearly 50.

If you’re not following along every week, the additional 18 games that joined GFN are:

Add it all up, and you’ve got a lot of gaming ahead of you.

The Backbone of Your GFN iOS Experience


For those planning to give our new Safari iOS experience a try, our newest GeForce NOW Recommended Controller is for you.

Backbone One is an iPhone game controller recommended for GeForce NOW, with fantastic buttons and build quality, technology that preserves battery life and reduces latency, passthrough charging and more. It even has a built-in capture button to let you record and share your gameplay, right from your phone. Learn more about Backbone One on the GeForce NOW Recommended Product hub.

This should be an exciting month, GFN members. What are you going to play? Tell us on Twitter or in the comments below.


NVIDIA’s Marc Hamilton on Building Cambridge-1 Supercomputer During Pandemic

Since NVIDIA announced construction of the U.K.’s most powerful AI supercomputer — Cambridge-1 — Marc Hamilton, vice president of solutions architecture and engineering, has been (remotely) overseeing its building across the pond.

The system, which will be available for U.K. healthcare researchers to work on pressing problems, is being built on NVIDIA DGX SuperPOD architecture for a whopping 400 petaflops of AI performance.

Located at Kao Data, a data center using 100 percent renewable energy, Cambridge-1 would rank among the world’s top three most energy-efficient supercomputers on the latest Green500 list.

Hamilton points to the concentration of leading healthcare companies in the U.K. as a primary reason for NVIDIA’s decision to build Cambridge-1.

AstraZeneca, GSK, Guy’s and St Thomas’ NHS Foundation Trust, King’s College London, and Oxford Nanopore have already announced their intent to harness the supercomputer for research in the coming months.

Construction has been progressing at NVIDIA’s usual speed-of-light pace, with just final installations and initial tests remaining.

Hamilton promises to provide the latest updates on Cambridge-1 at GTC 2021.

Key Points From This Episode:

  • Hamilton gives listeners an explainer on Cambridge-1’s scalable units, or building blocks — NVIDIA DGX A100 systems — and how just 20 of them can provide the equivalent of hundreds of CPUs.
  • NVIDIA intends for Cambridge-1 to accelerate corporate research in addition to that of universities. Among them are King’s College London, which has already announced that it’ll be using the system.

Tweetables:

“With only 20 [DGX A100] servers, you can build one of the top 500 supercomputers in the world” — Marc Hamilton [9:14]

“This is the first time we’re taking an NVIDIA supercomputer by our engineers and opening it up to our partners, to our customers, to use” — Marc Hamilton [10:17]

You Might Also Like:

How AI Can Improve the Diagnosis and Treatment of Diseases

Medicine — particularly radiology and pathology — has become more data-driven. The Massachusetts General Hospital Center for Clinical Data Science — led by Mark Michalski — promises to accelerate that, using AI technologies to spot patterns that can improve the detection, diagnosis and treatment of diseases.

NVIDIA Chief Scientist Bill Dally on Where AI Goes Next

This podcast is full of words from the wise. One of the pillars of the computer science world, NVIDIA’s Bill Dally joins to share his perspective on the world of deep learning and AI in general.

The Buck Starts Here: NVIDIA’s Ian Buck on What’s Next for AI

Ian Buck, general manager of accelerated computing at NVIDIA, shares his insights on how relatively unsophisticated users can harness AI through the right software. Buck helped lay the foundation for GPU computing as a Stanford doctoral candidate, and delivered the keynote address at GTC DC 2019.


Big Planet, Bigger Data: How UK Research Center Advances Environmental Science with AI

Climate change is a big problem, and big problems require big data to understand.

Few research centers take a wider lens to environmental science than the NERC Earth Observation Data Acquisition and Analysis Service (NEODAAS). Since the 1990s, the service, part of the United Kingdom’s Natural Environment Research Council and overseen by the National Centre for Earth Observation (NCEO), has made the Earth observation data collected by hundreds of satellites freely available to researchers.

Backed by NVIDIA DGX systems as part of the Massive Graphical Processing Unit Cluster for Earth Observation (MAGEO), the NEODAAS team, based at the Plymouth Marine Laboratory in the U.K., supports cutting-edge research that opens up new ways of looking at Earth observation data with deep learning.

Thanks to NVIDIA’s accelerated computing platform, they’re now enabling the analysis of these troves of data faster than previously thought possible.

Earth Under Observation

More than 10TB of Earth observation data is collected daily by sensors on more than 150 satellites orbiting the planet. Processing and analyzing this requires a massive amount of compute power.

To facilitate the application of deep learning to this data and gain valuable insights into the planet’s health, NEODAAS installed MAGEO. The large accelerated computing cluster consists of five NVIDIA DGX-1 systems, interconnected with NVIDIA Mellanox InfiniBand networking, and connected to 0.5PB of dedicated storage.

MAGEO was funded through a Natural Environment Research Council (NERC) transformational capital bid in 2019 to give NEODAAS the capability to apply deep learning and other algorithms that benefit from large numbers of NVIDIA GPU cores to Earth observation data. The cluster is operated as a service, with researchers able to draw on both its compute power and the expertise of NEODAAS staff.

“MAGEO offers an excellent opportunity to accelerate artificial intelligence and environmental intelligence research,” said Stephen Goult, a data scientist at Plymouth Marine Laboratory. “Its proximity to the NEODAAS archive allows for rapid prototyping and training using large amounts of satellite data, which will ultimately transform how we use and understand Earth observation data.”

Using NVIDIA DGX systems, the NEODAAS team can perform types of analysis that otherwise wouldn’t be feasible. It also enables the team to speed up their research dramatically — cutting training time from months to days.

NEODAAS additionally received funding to support the running of an NVIDIA Deep Learning Institute course, made available to members of the National Centre for Earth Observation in March, to foster AI development and training in the environmental and Earth observation fields.

“The course was a great success — the participants left feeling knowledgeable and enthusiastic about applying AI to their research areas,” said Goult. “Conversations held during the course have resulted in the generation of several new projects leveraging AI to solve problems in the Earth observation space.”

Transforming Chlorophyll Detection

Using MAGEO, the NEODAAS team also collaborated on new approaches that have highlighted essential insights into the nature of Earth observation data.

One such success involves developing a new chlorophyll detector to help monitor concentrations of phytoplankton in the Earth’s oceans.

Microscopic phytoplankton are a source of food for a wide variety of ocean life, sustaining everything from tiny zooplankton to gigantic blue whales. But they also serve another purpose that is beneficial to the health of the planet.

Like any plant that grows on land, they use chlorophyll to capture sunlight, which they then turn into chemical energy via photosynthesis. During photosynthesis, the phytoplankton consume carbon dioxide. The carbon captured in this process is carried to the bottom of the ocean when phytoplankton die, or into other layers of the ocean when the phytoplankton are consumed.

Yearly, phytoplankton transfer about 10 gigatonnes of carbon from the atmosphere to the deep ocean. With high CO2 levels being a major contributor to climate change, phytoplankton are crucial in reducing atmospheric CO2 and the effects of climate change. Even a small reduction in the growth of phytoplankton could have devastating consequences.

Using MAGEO, NEODAAS worked with scientists to develop and train a neural network that has enabled a new form of chlorophyll detector for studying the abundance of phytoplankton on a global scale. The technique uses particulate beam-attenuation coefficient data, calculated by the loss of energy from a beam of light traveling in seawater due to the presence of suspended particles.
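
A heavily simplified sketch of that kind of model might look like the following. It assumes a fixed number of beam-attenuation channels per sample and uses random stand-in data; it is illustrative only, not the NEODAAS network.

```python
# Minimal sketch of a regression model in the spirit of the one described
# above: a small network mapping particulate beam-attenuation measurements
# to a chlorophyll concentration. Shapes and sizes are illustrative only.
import torch
import torch.nn as nn

N_WAVELENGTHS = 8            # assumed number of beam-attenuation channels per sample

model = nn.Sequential(
    nn.Linear(N_WAVELENGTHS, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1),        # predicted chlorophyll-a concentration
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Toy stand-in data: real training would pair in-situ attenuation profiles
# with lab-measured chlorophyll values.
x = torch.rand(256, N_WAVELENGTHS)
y = torch.rand(256, 1)

for _ in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
```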

The technique means scientists can make accurate chlorophyll measurements far more cheaply and quickly, and with significantly more data, than with the existing lab-based approach of high-performance liquid chromatography, once considered the “gold standard.”

“Thanks to the highly parallel environment and the computational performance driven by NVIDIA NVLink and the Tensor Core architecture in the NVIDIA DGX systems, what would have taken 16 months on a single GPU took 10 days on MAGEO,” said Sebastian Graban, industrial placement student at Plymouth Marine Laboratory. “The resulting trained neural network can predict chlorophyll to a very high accuracy and will provide experts with an improved, faster method of monitoring phytoplankton.”

Learn more about NVIDIA DGX systems and how GPU computing is accelerating science.

 

Feature image credit: Plymouth Marine Laboratory.

Contains modified Copernicus Sentinel data [2016]


Meet the Maker: DIY Builder Takes AI to Bat for Calling Balls and Strikes

Baseball players have to think fast when batting against blurry-fast pitches. Now, AI might be able to assist.

Nick Bild, a Florida-based software engineer, has created an application that can signal to batters whether pitches are going to be balls or strikes. Dubbed Tipper, it can be fitted on the outer edge of glasses to show a green light for a strike or a red light for a ball.

Tipper uses image classification to alert the batter before the ball has traveled halfway to home plate. It relies on the NVIDIA Jetson edge AI platform for split-second inference, which triggers the lights.

He figures the application could serve as a training aid, helping batters learn to recognize good pitches from bad. Pitchers also could use it to analyze whether any body language tips off batters to their delivery.

“Who knows, maybe umpires could rely on it. For those close calls, it might help to reduce arguments with coaches as well as the ire of fans,” said Bild.

About the Maker

Bild works in the telecom industry by day. By night, he turns his living room into a laboratory for Jetson experiments.

Bild certainly knows how to have fun, and we’re not just talking about his living room-turned-batting cage. Self-taught on machine learning, he has applied his ML and Python chops to Jetson AGX Xavier for projects like ShAIdes, which enables gestures to turn on home lights.

Bild says machine learning is particularly useful for solving problems that are otherwise unapproachable. But for a hobbyist, he says, the cost of entry can be prohibitively high.

His Inspiration

When Bild first heard about Jetson Nano, he saw it as a tool to bring his ideas to life on a small budget. He bought one the day it was first released and has been building devices with it ever since.

The first Jetson project he created was called DOOM Air. He learned image classification basics and put that to work to operate a computer that was projecting the blockbuster video game DOOM onto the wall, controlling the game with his body movements.

Jetson’s ease of use enabled early successes for Bild, encouraging him to take on more difficult projects, he says.

“The knowledge I picked up from building these projects gave me the basic skills I needed for a more elaborate build like Tipper,” he said.

His Favorite Jetson Projects

Bild likes many of his Jetson projects. His Deep Clean project is one favorite. It uses AI to track the places in a room touched by a person so that it can be sanitized.

But Tipper is Bild’s favorite Jetson project of all. Its pitch predictions are aided by a camera that captures 100 frames per second. Aimed at the ball launcher — a Nerf gun — the camera grabs two successive images of the ball early in flight.

Tipper was trained on “hundreds of images” of balls and strikes, he said. The result is that Jetson AGX Xavier classifies balls in the air to guide batters better than a first base coach.
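
A sketch of the decision-and-signal end of such a rig might look like this. It’s not Bild’s actual code: the TorchScript model file and the LED pin numbers are assumptions for illustration.

```python
# Sketch of the decision end of a Tipper-like rig, not Bild's actual code.
# A classifier scores a captured frame as strike vs. ball and lights the
# matching LED through Jetson.GPIO. Pin numbers and model file are assumed.
import Jetson.GPIO as GPIO
import torch

GREEN_PIN, RED_PIN = 12, 16                    # hypothetical board pins for the LEDs
GPIO.setmode(GPIO.BOARD)
GPIO.setup(GREEN_PIN, GPIO.OUT, initial=GPIO.LOW)
GPIO.setup(RED_PIN, GPIO.OUT, initial=GPIO.LOW)

model = torch.jit.load("pitch_classifier.pt").eval().to("cuda")  # hypothetical model

def signal_pitch(frame):
    """frame: (1, 3, H, W) tensor holding an image of the ball early in flight."""
    with torch.no_grad():
        strike_prob = torch.sigmoid(model(frame.to("cuda"))).item()
    GPIO.output(GREEN_PIN, GPIO.HIGH if strike_prob > 0.5 else GPIO.LOW)  # strike
    GPIO.output(RED_PIN, GPIO.LOW if strike_prob > 0.5 else GPIO.HIGH)    # ball

try:
    signal_pitch(torch.rand(1, 3, 224, 224))   # toy frame; a real rig feeds camera captures
finally:
    GPIO.cleanup()
```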

As far as fun DIY AI, this one is a home run.
