In Genomics Breakthrough, Harvard, NVIDIA Researchers Use AI to Spot Active Areas in Cell DNA

Like a traveler who overpacks a suitcase with a closet’s worth of clothes, most cells in the body carry around a complete copy of a person’s DNA, with billions of base pairs crammed into the nucleus.

But an individual cell pulls out only the subsection of genetic apparel that it needs to function, with each cell type — such as liver, blood or skin cells — activating different genes. The regions of DNA that determine a cell’s unique function are opened up for easy access, while the rest remains wadded up around proteins.

Researchers from NVIDIA and Harvard University’s Department of Stem Cell and Regenerative Biology have developed a deep learning toolkit to help scientists study these accessible regions of DNA, even when sample data is noisy or limited — which is often the case in the early detection of cancer and other genetic diseases.

AtacWorks, featured today in Nature Communications, both denoises sequencing data and identifies areas with accessible DNA, and can run inference on a whole genome in just half an hour with NVIDIA Tensor Core GPUs. It’s available on NGC, NVIDIA’s hub of GPU-optimized software.

AtacWorks works with ATAC-seq, a popular method for finding open areas in the genome in both healthy and diseased cells, enabling critical insights for drug discovery.

ATAC-seq typically requires tens of thousands of cells to get a clean signal — making it very difficult to investigate rare cell types, like the stem cells that produce blood cells and platelets. By applying AtacWorks to ATAC-seq data, the same quality of results can be achieved with just tens of cells, enabling scientists to learn more about the sequences active in rare cell types, and to identify mutations that make people more vulnerable to diseases.

“With AtacWorks, we’re able to conduct single-cell experiments that would typically require 10 times as many cells,” says paper co-author Jason Buenrostro, assistant professor at Harvard and the developer of the ATAC-seq method. “Denoising low-quality sequencing coverage with GPU-accelerated deep learning has the potential to significantly advance our ability to study epigenetic changes associated with rare cell development and diseases.”

Needle in a Noisy Haystack

Buenrostro pioneered ATAC-seq in 2013 as a way to scan the epigenome and locate accessible sites within chromatin, the tightly wound complex of DNA and proteins that makes up a chromosome. The method, popular among leading genomics research labs and pharmaceutical companies, measures the intensity of a signal at every region across the genome. Peaks in the signal correspond to areas with open DNA.

The fewer the cells available, the noisier the data appears — making it difficult to identify which areas of the DNA are accessible.

AtacWorks, a PyTorch-based convolutional neural network, was trained on labeled pairs of matching ATAC-seq datasets: one high quality and one noisy. Given a downsampled copy of the data, the model learned to predict an accurate high-quality version and identify peaks in the signal.
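
The idea can be illustrated with a minimal PyTorch sketch. This is not the published AtacWorks architecture (a deeper residual network); it is a toy model with made-up layer sizes, shown only to make the two-headed denoise-and-peak-call design concrete.

    # Toy sketch of a denoising + peak-calling 1D CNN in the spirit of AtacWorks.
    # Layer sizes are made up; the real model is a deeper residual network.
    import torch
    import torch.nn as nn

    class DenoisePeakNet(nn.Module):
        def __init__(self, channels=32, kernel_size=51):
            super().__init__()
            pad = kernel_size // 2
            # Shared trunk: 1D convolutions over a window of the coverage track.
            self.trunk = nn.Sequential(
                nn.Conv1d(1, channels, kernel_size, padding=pad), nn.ReLU(),
                nn.Conv1d(channels, channels, kernel_size, padding=pad), nn.ReLU(),
            )
            self.denoise = nn.Conv1d(channels, 1, 1)  # regression to the clean signal
            self.peaks = nn.Conv1d(channels, 1, 1)    # per-base open-chromatin probability

        def forward(self, noisy):                     # noisy: (batch, 1, positions)
            h = self.trunk(noisy)
            return self.denoise(h), torch.sigmoid(self.peaks(h))

    model = DenoisePeakNet()
    noisy_track = torch.rand(8, 1, 4096)              # downsampled ATAC-seq coverage windows
    clean_pred, peak_prob = model(noisy_track)
    # Training minimizes a regression loss against the high-coverage track plus a
    # classification loss against peak labels called from that same track.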

The researchers found that using AtacWorks, they could identify accessible chromatin in a noisy sequence of 1 million reads nearly as well as traditional methods did with a clean dataset of 50 million reads. With this capability, scientists could conduct research with a smaller number of cells, significantly reducing the cost of sample collection and sequencing.

Analysis, too, becomes faster and cheaper with AtacWorks: Running on NVIDIA Tensor Core GPUs, the model took under 30 minutes for inference on a whole genome, a process that would take 15 hours on a system with 32 CPU cores.

“With very rare cell types, it’s not possible to study differences in their DNA using existing methods,” said NVIDIA researcher Avantika Lal, lead author on the paper. “AtacWorks can help not only drive down the cost of gathering chromatin accessibility data, but also open up new possibilities in drug discovery and diagnostics.”

Enabling Insights into Disease, Drug Discovery

Looking at accessible regions of DNA could help medical researchers identify specific mutations or biomarkers that make people more vulnerable to conditions including Alzheimer’s, heart disease or cancers. This knowledge could also inform drug discovery by giving researchers a better understanding of the mechanisms of disease.

In the Nature Communications paper, the Harvard researchers applied AtacWorks to a dataset of stem cells that produce red and white blood cells — rare subtypes that couldn’t be studied with traditional methods.

With a sample set of just 50 cells, the team was able to use AtacWorks to identify distinct regions of DNA associated with cells that develop into white blood cells, and separate sequences that correlate with red blood cells.

Learn more about NVIDIA’s work in healthcare at the GPU Technology Conference, April 12-16. Registration is free. The healthcare track includes 16 live webinars, 18 special events, and over 100 recorded sessions, including a talk by Lal titled Deep Learning and Accelerated Computing for Epigenomic Data.


The DOI for this Nature Communications paper is 10.1038/s41467-021-21765-5.


Juicing AI: University of Florida Taps Computer Vision to Combat Citrus Disease

Florida orange juice is getting a taste of AI.

With the Sunshine State’s $9 billion annual citrus crop plagued by a fruit-souring disease, researchers and businesses are tapping AI to help rescue the nation’s largest producer of orange juice.

University of Florida researchers are developing AI applications for agriculture. And the technology — computer vision for smart sprayers — is now being licensed and deployed in pilot tests by CCI, an agricultural equipment company.

The efforts promise to help farmers combat what’s known as “citrus greening,” the disease brought on by bacteria from the Asian citrus psyllid insect hitting farms worldwide.

Citrus greening causes patchy leaves and green fruit and can quickly decimate orchards.

The agricultural equipment supplier has seen farmers lose a third of Florida’s orchard acreage to the onslaught of citrus greening.

“It’s having a huge impact on the state of Florida, California, Brazil, China, Mexico — the entire world is battling a citrus crisis,” said Yiannis Ampatzidis, assistant professor at UF’s Department of Agricultural and Biological Engineering.

Fertilizing Precision Agriculture

Ampatzidis works with a team of researchers focused on automation in agriculture. They develop AI applications to forecast crop yields and reduce pesticide use. The team’s image recognition models are run on the Jetson AI platform in the field for inference.

“The goal is to use Jetson Xavier to detect the size of the tree and the leaf density to instantly optimize the flow of the nozzles on sprayers for farming,” said Ampatzidis. “It also allows us to count fruit density, predict yield, and study water usage and pH levels.”

The growing popularity of organic produce and the adoption of more sustainable farming practices have drawn a field of startups plowing AI for benefits to businesses and the planet. John Deere-owned Blue River, FarmWise, SeeTree and Smart Ag are just some of the agriculture companies adopting NVIDIA GPUs for training and inference.

Like many, UF and CCI are developing applications for deployment on the NVIDIA Jetson edge AI platform. And UF has wider ambitions for fostering AI development that benefits the state.

Last July, UF and NVIDIA hatched plans to build one of the world’s fastest AI supercomputers in academia, delivering 700 petaflops of processing power. Built with NVIDIA DGX systems and NVIDIA Mellanox networking, HiPerGator AI is now online to power UF’s precision agriculture research.  The new supercomputer was made possible by a $25 million donation from alumnus and NVIDIA founder Chris Malachowsky and $25 million in hardware, software, training and services from NVIDIA.

UF is a member of the NVIDIA Applied Research Accelerator Program, which supports applied research in coordination with businesses relying on NVIDIA platforms for GPU-accelerated application deployments.

Deploying Robotic Sprayers

Citrus greening has required farmers to act quickly to remove diseased trees to prevent its advances. Many orchards now have gaps in their rows of trees. As a result, conventional sprayers that apply agrochemicals uniformly along entire rows will often overspray, wasting resources and creating unnecessary environmental contamination.

UF researchers developed a sensor system of lidar and cameras for sprayers used in orchards. These sensors feed into the NVIDIA Jetson AGX Xavier, which can process split-second inference on whether the sprayer is facing a tree to spray or not, enabling autonomous spraying.
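
A rough sketch of that decision loop might look like the following Python; the TorchScript model file, its two-output head and the set_nozzle helper are hypothetical stand-ins rather than UF’s or CCI’s actual code.

    # Hypothetical spray/no-spray loop for a sprayer-mounted camera on a Jetson.
    # The model file, its two outputs and set_nozzle() are illustrative placeholders.
    import cv2
    import torch
    from torchvision import transforms

    model = torch.jit.load("tree_classifier.pt").eval().cuda()   # assumed exported model
    prep = transforms.Compose([transforms.ToTensor(), transforms.Resize((224, 224))])

    def set_nozzle(flow_fraction):
        """Placeholder for the hardware call that sets spray flow (0.0 = off)."""
        print(f"nozzle flow -> {flow_fraction:.2f}")

    cap = cv2.VideoCapture(0)                                     # sprayer-mounted camera
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        x = prep(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)).unsqueeze(0).cuda()
        with torch.no_grad():
            tree_prob, canopy_size = model(x)                     # assumed two-output model
        # Spray only when a tree is in front of the nozzles, scaling flow to canopy size.
        set_nozzle(float(canopy_size) if float(tree_prob) > 0.5 else 0.0)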

The system can adjust in real time to turn off or on the application of crop protection products or fertilizers as well as adjust the amount sprayed based on the plant’s size, said Kieth Hollingsworth, a CCI sales specialist.

“It cuts down on spray waste from overspray and on wasted material that ultimately gets washed into the groundwater. We can also predict yield based on the oranges we see on the tree,” said Hollingsworth.

Commercializing AgTech AI

CCI began working with UF eight years ago. In the past couple of years, the company has been working with the university to upgrade its infrared laser-based spraying system to one with AI.

And customers are coming to CCI for novel ways to attack the problem, said Hollingsworth.

Working with NVIDIA’s Applied Research Accelerator Program, CCI has gotten a boost with technical guidance on Jetson Xavier that has sped its development.

Citrus industry veteran Hollingsworth says AI is a useful tool in the field to wield against the crop disease that has taken some of the sweetness out of orange juice over the years.

“People have no idea how complex of a crop oranges are to grow and what it takes to produce and squeeze the juice that goes into a glass of orange juice,” said Hollingsworth.

Academic researchers can apply now for the Applied Research Accelerator Program.

Photo credit: Samuel Branch on Unsplash


What Is a Cluster? What Is a Pod?

Everything we do on the internet — which is just about everything we do these days — depends on the work of clusters, which are also called pods.

When we stream a hot new TV show, order a pair of jeans or Zoom with grandma, we use clusters. You’re reading this story thanks to pods.

So, What Is a Cluster? What Is a Pod?

A cluster or a pod is simply a set of computers linked by high-speed networks into a single unit.

Computer architects must have reached, at least unconsciously, for terms rooted in nature. Pea pods and dolphin superpods, like today’s computer clusters, show the power of many individuals working as a team.

The Roots of Pods and Superpods

The links go deeper. Botanists say pods not only protect and nourish individual peas, they can reallocate resources from damaged seeds to thriving ones. Similarly, a load balancer moves jobs off a failed compute node to a functioning one.

The dynamics aren’t much different for dolphins.

Working off the coast of the Bahamas, veteran marine biologist Denise Herzing often sees the same pods, family groups of perhaps 20 dolphins, every day. And once she encountered a vastly larger group.

“Years ago, off the Baja peninsula, I saw a superpod. It was very exciting and a little overwhelming because as a researcher I want to observe a small group closely, not a thousand animals spread over a large area,” said the founder of the Wild Dolphin Project.

For dolphins, superpods are vital. “They protect the travelers by creating a huge sensory system, a thousand sets of ears that listen for predators like one super-sensor,” she said, noting the parallels with the clusters used in cloud computing today.

A data center with multiple clusters or pods can span multiple buildings, and run as a single system.

Pods Sprout in Early Data Centers

As companies began computerizing their accounting systems in the early 1960s, they instinctively ganged multiple computers together so they would have backups in case one failed, according to Greg Pfister, a former IBM technologist and an expert on clusters.

“I’m pretty sure NCR, MetLife and a lot of people did that kind of thing,” said Pfister, author of In Search of Clusters, considered by some the bible of the field.

In May 1983, Digital Equipment Corp. packed several of its popular 32-bit VAX minicomputers into what it called a VAXcluster. Each computer ran its own operating system, but they shared other resources, providing IT users with a single system image.

Diagram of an early PC-based cluster.

By the late 1990s, the advent of low-cost PC processors, Ethernet networks and Linux inspired at least eight major research projects that built clusters. NASA designed one with 16 PC motherboards on two 10 Mbit/second networks and dubbed it Beowulf, imagining it slaying the giant mainframes and massively parallel systems of the day.

Cluster Networks Need Speed

Researchers found clusters could be assembled quickly and offered high performance at low cost, as long as they used high-speed networks to eliminate bottlenecks.

Another late-’90s project, Berkeley’s Network of Workstations (NoW), linked dozens of Sparc workstations on the fastest interconnects of the day. The researchers created an image of a pod of small fish eating a larger fish to illustrate their work.

Researchers behind Berkeley’s NoW project envisioned clusters of many small systems out-performing a single larger computer.

One researcher, Eric Brewer, saw clusters were ideal for emerging internet apps, so he used the 100-server NoW system as a search engine.

“For a while we had the best search engine in the world running on the Berkeley campus,” said David Patterson, a veteran of NoW and many computer research projects at Berkeley.

The work was so successful that Brewer co-founded Inktomi, an early search engine built on a NoW-inspired cluster of 1,000 systems. It had many rivals, including a startup called Google with roots at Stanford.

“They built their network clusters out of PCs and defined a business model that let them grow and really improve search quality — the rest was history,” said Patterson, co-author of a popular textbook on computing.

Today, clusters or pods are the basis of most of the world’s TOP500 supercomputers as well as virtually all cloud computing services. And most use NVIDIA GPUs, but we’re getting ahead of the story.

Pods vs. Clusters: A War of Words

While computer architects called these systems clusters, some networking specialists preferred the term pod. Turning the biological term into a tech acronym, they said POD stood for a “point of delivery” of computing services.

The term pod gained traction in the early days of cloud computing. Service providers raced to build ever larger, warehouse-sized systems, often ordering entire shipping containers, aka pods, of pre-configured systems they could plug together like Lego blocks.

First Google cluster in a container
An early prototype container delivered to a cloud service provider.

More recently, the Kubernetes group adopted the term pod. They define a software pod as “a single container or a small number of containers that are tightly coupled and that share resources.”
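
With the official Kubernetes Python client, that definition looks roughly like the sketch below; the pod name and container images are arbitrary examples, not anything taken from the Kubernetes docs.

    # Sketch: a Kubernetes pod holding two tightly coupled containers that share the
    # pod's network and lifecycle. Names and images are arbitrary examples.
    from kubernetes import client, config

    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name="web-with-sidecar"),
        spec=client.V1PodSpec(containers=[
            client.V1Container(name="web", image="nginx:1.25"),
            client.V1Container(name="log-shipper", image="busybox",
                               command=["sh", "-c", "sleep infinity"]),
        ]),
    )

    config.load_kube_config()                          # use the local kubeconfig
    client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)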

Industries like aerospace and consumer electronics adopted the term pod, too, perhaps to give their concepts an organic warmth. Among the most iconic examples are the iPod, the forerunner of the iPhone, and the single-astronaut vehicle from the movie 2001: A Space Odyssey.

When AI Met Clusters

In 2012, cloud computing services heard the Big Bang of AI, the genesis of a powerful new form of computing. They raced to build giant clusters of GPUs that, thanks to their internal clusters of accelerator cores, could process huge datasets to train and run neural networks.

To help spread AI to any enterprise data center, NVIDIA packs clusters of GPUs into NVIDIA DGX systems and links those systems with InfiniBand networks. A reference architecture lets users easily scale from a single DGX system to an NVIDIA DGX POD or even a supercomputer-class NVIDIA DGX SuperPOD.

For example, Cambridge-1, in the United Kingdom, is an AI supercomputer based on a DGX SuperPOD, dedicated to advancing life sciences and healthcare. It’s one of many AI-ready clusters and pods spreading like never before. They’re sprouting like AI itself, in many shapes and sizes in every industry and business.


GFN Thursday — 21 Games Coming to GeForce NOW in March

Guess what’s back? Back again? GFN Thursday. Tell a friend.

Check out this month’s list of all the exciting new titles and classic games coming to GeForce NOW in March.

First, let’s get into what’s coming today.

Don’t Hesitate

It wouldn’t be GFN Thursday if members didn’t have new games to play. Here’s what’s new to GFN starting today:


Loop Hero (day-and-date release on Steam)

Equal parts roguelike, deck-builder and auto battler, Loop Hero challenges you to think strategically as you explore each randomly generated loop path and fight to defeat The Lich. PC Gamer gave this indie high praise, saying, “don’t sleep on this brilliant roguelike.”


Disgaea PC (Steam)

The turn-based strategy RPG classic lets you amass your evil hordes and become the new Overlord. With more than 40 character types and PC-specific features, there’s never been a better time to visit the Netherworld.

Members can also look for the following games joining GeForce NOW later today:

  • Legends of Aria (Steam)
  • The Dungeon Of Naheulbeuk: The Amulet Of Chaos (Steam)
  • Wargame: Red Dragon (Free on Epic Games Store, March 4-11)
  • WRC 8 FIA World Rally Championship (Steam)

What’s Coming in March

We’ve got a great list of exciting titles coming soon to GeForce NOW. You won’t want to miss:

Spacebase Startopia (Steam and Epic Games Store)

An original mixture of economic simulation and empire building strategy paired with classic RTS skirmishes and a good dose of humor to take the edge off.

Wrench (Steam)

Prepare and maintain race cars in an extraordinarily detailed mechanic simulator. Extreme attention has been paid to even the smallest components, including fasteners that are accurate to the thread pitch and install torque.

And that’s not all — check out even more games coming to GFN in March:

  • Door Kickers (Steam)
  • Endzone – A World Apart (Steam)
  • Monopoly Plus (Steam)
  • Monster Energy Supercross – The Official Videogame 4 (Steam)
  • Narita Boy (Steam)
  • Overcooked!: All You Can Eat (Steam)
  • Pascal’s Wager – Definitive Edition (Steam)
  • System Shock: Enhanced Edition (Steam)
  • Thief Gold (Steam)
  • Trackmania United Forever (Steam)
  • Uno (Steam)
  • Workers & Resources: Soviet Republic (Steam)
  • Worms Reloaded (Steam)

In Case You Missed It

Remember that one time in February when we told you that 30 titles were coming to GeForce NOW? Actually, it was even more than that — 18 additional games joined the service, bringing the total in February to nearly 50.

If you’re not following along every week, the additional 18 games that joined GFN are:

Add it all up, and you’ve got a lot of gaming ahead of you.

The Backbone of Your GFN iOS Experience


For those planning to give our new Safari iOS experience a try, our newest GeForce NOW Recommended Controller is for you.

Backbone One is an iPhone game controller recommended for GeForce NOW, with fantastic buttons and build quality, technology that preserves battery life and reduces latency, passthrough charging and more. It even has a built-in capture button to let you record and share your gameplay, right from your phone. Learn more about Backbone One on the GeForce NOW Recommended Product hub.

This should be an exciting month, GFN members. What are you going to play? Tell us on Twitter or in the comments below.


NVIDIA’s Marc Hamilton on Building Cambridge-1 Supercomputer During Pandemic

Since NVIDIA announced construction of the U.K.’s most powerful AI supercomputer — Cambridge-1 — Marc Hamilton, vice president of solutions architecture and engineering, has been (remotely) overseeing its building across the pond.

The system, which will be available for U.K. healthcare researchers to work on pressing problems, is being built on NVIDIA DGX SuperPOD architecture for a whopping 400 petaflops of AI performance.

Located at Kao Data, a data center using 100 percent renewable energy, Cambridge-1 would rank among the world’s top three most energy-efficient supercomputers on the latest Green500 list.

Hamilton points to the concentration of leading healthcare companies in the U.K. as a primary reason for NVIDIA’s decision to build Cambridge-1.

AstraZeneca, GSK, Guy’s and St Thomas’ NHS Foundation Trust, King’s College London, and Oxford Nanopore have already announced their intent to harness the supercomputer for research in the coming months.

Construction has been progressing at NVIDIA’s usual speed-of-light pace, with just final installations and initial tests remaining.

Hamilton promises to provide the latest updates on Cambridge-1 at GTC 2021.

Key Points From This Episode:

  • Hamilton gives listeners an explainer on Cambridge-1’s scalable units, or building blocks — NVIDIA DGX A100 systems — and how just 20 of them can provide the equivalent of hundreds of CPUs.
  • NVIDIA intends for Cambridge-1 to accelerate corporate research in addition to that of universities. Among the latter is King’s College London, which has already announced that it’ll be using the system.

Tweetables:

“With only 20 [DGX A100] servers, you can build one of the top 500 supercomputers in the world” — Marc Hamilton [9:14]
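
A back-of-the-envelope check makes the claim plausible. Assuming peak FP64 Tensor Core throughput (not measured Linpack results), 20 DGX A100 systems land comfortably above the roughly 1.3 petaflops needed to enter the TOP500 list at the time:

    # Back-of-envelope check of the "20 servers" claim, using peak spec numbers
    # rather than measured Linpack results.
    a100_fp64_tensor_tflops = 19.5        # peak FP64 Tensor Core per A100 GPU
    gpus_per_dgx_a100 = 8
    servers = 20

    peak_pflops = servers * gpus_per_dgx_a100 * a100_fp64_tensor_tflops / 1000
    print(f"{peak_pflops:.1f} petaflops FP64 peak")   # ~3.1 PF vs. roughly 1.3 PF to make the list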

“This is the first time we’re taking an NVIDIA supercomputer by our engineers and opening it up to our partners, to our customers, to use” — Marc Hamilton [10:17]

You Might Also Like:

How AI Can Improve the Diagnosis and Treatment of Diseases

Medicine — particularly radiology and pathology — has become more data-driven. The Massachusetts General Hospital Center for Clinical Data Science — led by Mark Michalski — promises to accelerate that, using AI technologies to spot patterns that can improve the detection, diagnosis and treatment of diseases.

NVIDIA Chief Scientist Bill Dally on Where AI Goes Next

This podcast is full of words from the wise. One of the pillars of the computer science world, NVIDIA’s Bill Dally joins to share his perspective on the world of deep learning and AI in general.

The Buck Starts Here: NVIDIA’s Ian Buck on What’s Next for AI

Ian Buck, general manager of accelerated computing at NVIDIA, shares his insights on how relatively unsophisticated users can harness AI through the right software. Buck helped lay the foundation for GPU computing as a Stanford doctoral candidate, and delivered the keynote address at GTC DC 2019.


Big Planet, Bigger Data: How UK Research Center Advances Environmental Science with AI

Climate change is a big problem, and big problems require big data to understand.

Few research centers take a wider lens to environmental science than the NERC Earth Observation Data Acquisition and Analysis Service (NEODAAS). Since the 1990s, the service, part of the United Kingdom’s Natural Environment Research Council and overseen by the National Centre for Earth Observation (NCEO), has made the Earth observation data collected by hundreds of satellites freely available to researchers.

Backed by NVIDIA DGX systems as part of the Massive Graphical Processing Unit Cluster for Earth Observation (MAGEO), the NEODAAS team, based at the Plymouth Marine Laboratory in the U.K., supports cutting-edge research that opens up new ways of looking at Earth observation data with deep learning.

Thanks to NVIDIA’s accelerated computing platform, they’re now enabling analysis of these troves of data faster than previously thought possible.

Earth Under Observation

More than 10TB of Earth observation data is collected daily by sensors on more than 150 satellites orbiting the planet. Processing and analyzing this requires a massive amount of compute power.

To facilitate the application of deep learning to this data and gain valuable insights into the planet’s health, NEODAAS installed MAGEO. The large accelerated computing cluster consists of five NVIDIA DGX-1 systems, interconnected with NVIDIA Mellanox InfiniBand networking, and connected to 0.5PB of dedicated storage.

MAGEO was funded through a Natural Environment Research Council (NERC) transformational capital bid in 2019 to give NEODAAS the capability to apply deep learning and other algorithms that benefit from large numbers of NVIDIA GPU cores to Earth observation data. The cluster is operated as a service, with researchers able to draw on both the compute power and the expertise of NEODAAS staff.

“MAGEO offers an excellent opportunity to accelerate artificial intelligence and environmental intelligence research,” said Stephen Goult, a data scientist at Plymouth Marine Laboratory. “Its proximity to the NEODAAS archive allows for rapid prototyping and training using large amounts of satellite data, which will ultimately transform how we use and understand Earth observation data.”

Using NVIDIA DGX systems, the NEODAAS team can perform types of analysis that otherwise wouldn’t be feasible. It also enables the team to speed up their research dramatically — cutting training time from months to days.

NEODAAS additionally received funding to support the running of an NVIDIA Deep Learning Institute course, made available to members of the National Centre for Earth Observation in March, to foster AI development and training in the environmental and Earth observation fields.

“The course was a great success — the participants left feeling knowledgeable and enthusiastic about applying AI to their research areas,” said Goult. “Conversations held during the course have resulted in the generation of several new projects leveraging AI to solve problems in the Earth observation space.”

Transforming Chlorophyll Detection

Using MAGEO, the NEODAAS team also collaborated on new approaches that have highlighted essential insights into the nature of Earth observation data.

One such success involves developing a new chlorophyll detector to help monitor concentrations of phytoplankton in the Earth’s oceans.

Microscopic phytoplankton are a source of food for a wide variety of ocean life, sustaining everything from tiny zooplankton to gigantic blue whales. But they also serve another purpose that is beneficial to the health of the planet.

Like any plant that grows on land, they use chlorophyll to capture sunlight, which they then turn into chemical energy via photosynthesis. During photosynthesis, the phytoplankton consume carbon dioxide. The carbon byproduct of this process is carried to the bottom of the ocean when phytoplankton die, or moved to other layers of the ocean when the phytoplankton are consumed.

Yearly, phytoplankton transfer about 10 gigatonnes of carbon from the atmosphere to the deep ocean. With high CO2 levels being a major contributor to climate change, phytoplankton are crucial in reducing atmospheric CO2 and the effects of climate change. Even a small reduction in the growth of phytoplankton could have devastating consequences.

Using MAGEO, NEODAAS worked with scientists to develop and train a neural network that has enabled a new form of chlorophyll detector for studying the abundance of phytoplankton on a global scale. The technique uses particulate beam-attenuation coefficient data, calculated from the energy a beam of light loses to suspended particles as it travels through seawater.

The technique means scientists can make accurate chlorophyll measurements far more cheaply and quickly, and with significantly more data, than with the lab-based approach of high-performance liquid chromatography, once considered the field’s gold standard.
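
As an illustration only, a small regression network mapping a beam-attenuation profile to a chlorophyll estimate could be sketched in PyTorch as below; the profile length and layer sizes are assumptions, not the architecture the team trained on MAGEO.

    # Illustrative sketch: regress chlorophyll concentration from a profile of
    # particulate beam-attenuation coefficients. Sizes are assumptions, not the
    # architecture trained on MAGEO.
    import torch
    import torch.nn as nn

    profile_length = 256                          # attenuation samples per profile (assumed)

    model = nn.Sequential(
        nn.Linear(profile_length, 128), nn.ReLU(),
        nn.Linear(128, 64), nn.ReLU(),
        nn.Linear(64, 1),                         # predicted chlorophyll concentration
    )

    attenuation = torch.rand(32, profile_length)  # a batch of synthetic profiles
    target = torch.rand(32, 1)                    # matching lab measurements (synthetic here)
    loss = nn.functional.mse_loss(model(attenuation), target)
    loss.backward()                               # one step of the usual training loop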

“Thanks to the highly parallel environment and the computational performance driven by NVIDIA NVLink and the Tensor Core architecture in the NVIDIA DGX systems, what would have taken 16 months on a single GPU took 10 days on MAGEO,” said Sebastian Graban, industrial placement student at Plymouth Marine Laboratory. “The resulting trained neural network can predict chlorophyll to a very high accuracy and will provide experts with an improved, faster method of monitoring phytoplankton.”

Learn more about NVIDIA DGX systems and how GPU computing is accelerating science.

 

Feature image credit: Plymouth Marine Laboratory.

Contains modified Copernicus Sentinel data [2016]


Meet the Maker: DIY Builder Takes AI to Bat for Calling Balls and Strikes

Baseball players have to think fast when batting against blurry-fast pitches. Now, AI might be able to assist.

Nick Bild, a Florida-based software engineer, has created an application that can signal to batters whether pitches are going to be balls or strikes. Dubbed Tipper, it can be fitted on the outer edge of glasses to show a green light for a strike or a red light for a ball.

Tipper uses image classification to alert the batter before the ball has traveled halfway to home plate. It relies on the NVIDIA Jetson edge AI platform for split-second inference, which triggers the lights.

He figures his application could serve as a training aid that helps batters learn to tell good pitches from bad. Pitchers also could use it to analyze whether any body language tips off batters on their delivery.

“Who knows, maybe umpires could rely on it. For those close calls, it might help to reduce arguments with coaches as well as the ire of fans,” said Bild.

About the Maker

Bild works in the telecom industry by day. By night, he turns his living room into a laboratory for Jetson experiments.

And Bild certainly knows how to have fun. And we’re not just talking about his living room-turned-batting cage. Self-taught on machine learning, Bild has applied his ML and Python chops to Jetson AGX Xavier for projects like ShAIdes, enabling gestures to turn on home lights.

Bild says machine learning is particularly useful for solving problems that are otherwise unapproachable, though for a hobbyist the cost of entry can be prohibitively high.

His Inspiration

When Bild first heard about Jetson Nano, he saw it as a tool to bring his ideas to life on a small budget. He bought one the day it was first released and has been building devices with it ever since.

The first Jetson project he created was called DOOM Air. He learned image classification basics and put that to work to operate a computer that was projecting the blockbuster video game DOOM onto the wall, controlling the game with his body movements.

Jetson’s ease of use enabled early successes for Bild, encouraging him to take on more difficult projects, he says.

“The knowledge I picked up from building these projects gave me the basic skills I needed for a more elaborate build like Tipper,” he said.

His Favorite Jetson Projects

Bild likes many of his Jetson projects. His Deep Clean project is one favorite. It uses AI to track the places in a room touched by a person so that it can be sanitized.

But Tipper is Bild’s favorite Jetson project of all. Its pitch predictions are aided by a camera that can capture 100 frames per second. Pointed at the ball launcher, a Nerf gun, the camera captures two successive images of the ball early in its flight.

Tipper was trained on “hundreds of images” of balls and strikes, he said. The result is that Jetson AGX Xavier classifies balls in the air to guide batters better than a first base coach.
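
Stripped to its essentials, the idea looks something like the sketch below; the exported model file, the two-frame input format and the light_led helper are hypothetical, not Bild’s actual code.

    # Stripped-down sketch of Tipper's idea: classify a pair of early-flight frames
    # as ball or strike, then light an LED. Model file and LED call are hypothetical.
    import torch
    import torchvision.transforms as T
    from PIL import Image

    classes = ["ball", "strike"]
    model = torch.jit.load("tipper_classifier.pt").eval()   # assumed exported model
    prep = T.Compose([T.Resize((224, 224)), T.ToTensor()])

    def light_led(color):
        """Placeholder for the GPIO call that lights the green or red LED."""
        print("LED:", color)

    frames = [prep(Image.open(f)) for f in ("frame_0.jpg", "frame_1.jpg")]
    x = torch.cat(frames, dim=0).unsqueeze(0)                # stack both frames channel-wise
    with torch.no_grad():
        call = classes[int(model(x).argmax())]
    light_led("green" if call == "strike" else "red")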

As far as fun DIY AI, this one is a home run.


What Is Cloud Gaming?

Cloud gaming uses powerful, industrial-strength GPUs inside secure data centers to stream your favorite games over the internet to you. So you can play the latest games on nearly any device, even ones that can’t normally play that game.

But First, What Is Cloud Gaming?

While the technology is complex, the concept is simple.

Cloud gaming takes your favorite game, and instead of using the device in front of you to power it, a server — a powerful, industrial-strength PC — runs the game from a secure data center.

Gameplay is then streamed over the internet back to you, allowing you to play the latest games on nearly any device, even ones that couldn’t run that game on their own.

Cloud gaming streams the latest games from powerful GPUs in remote data centers to nearly any device.

Video games are interactive, obviously. So, cloud gaming servers need to process information and render frames in real time. Unlike movies or TV shows that can provide a buffer — a few extra seconds of information that gets sent to your device before it’s time to be displayed — games are dependent on the user’s next keystroke or button press.

Introducing GeForce NOW

We started our journey to cloud gaming over 10 years ago and have spent that time optimizing every millisecond of the pipeline we manage, from the graphics cards in the data centers to the software on your local device.

Here’s how it works.

GeForce NOW is a service that takes a GeForce gaming PC’s power and flexibility and makes it accessible through the cloud. This gives you an always-on gaming rig that never needs upgrading, patching or updating — across all of your devices.

One of the things that makes GeForce NOW unique is that it connects to popular PC games stores — Steam, Epic Games Store, Ubisoft Connect and more — so gamers can play the same PC version of games their friends are playing.

It also means, if they already own a bunch of games, they can log in and start playing them. And if they have, or upgrade to, a gaming rig, they have access to download and play those games on that local PC.

GeForce NOW empowers you to take your PC games with you, wherever you go.

Gamers get an immersive PC gaming experience, instant access to the world’s most popular games and gaming communities, and the freedom to play on any device, at any time.

It’s PC gaming for those whose PCs have integrated graphics, for Macs and Chromebooks that don’t have access to the latest games, or for internet-connected mobile devices where PC gaming is only a dream.

Over 80 percent of GeForce NOW members are playing on devices that don’t meet the min spec for the games they’re playing.

To start, sign up for the service, download the app and begin your cloud gaming journey.

Powering PC Gaming from the Cloud

Cloud data centers with NVIDIA GPUs power the world’s most computationally complex tasks, from AI to data analytics and research. Combined with advanced GeForce PC gaming technologies, GeForce NOW delivers high-end PC gaming to passionate gamers.

NVIDIA RTX servers provide the backbone for GeForce NOW.

GeForce NOW data centers include NVIDIA RTX servers that feature RTX GPUs. These GPUs enable the holy grail of modern graphics: real-time ray tracing, and DLSS, NVIDIA’s groundbreaking AI rendering that boosts frame rates for uncompromised image quality. The hardware is supported with NVIDIA Game Ready Driver performance improvements.

Patented encoding technology — along with hardware acceleration in both video encoding and decoding, pioneered by NVIDIA more than a decade ago — allows gameplay to be streamed at high frame rates, with latency low enough that most games feel like they’re being played locally. Gameplay rendered in GeForce NOW data centers is converted into high-definition H.265 and H.264 video and streamed back to the gamer instantaneously.

The total time it takes from button press or keystroke to the action appearing on the screen is less than one-tenth of a second, faster than the blink of an eye.
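
To see why a sub-100-millisecond round trip is achievable, consider an illustrative budget for a single frame; every number below is an assumption for the sake of arithmetic, not an NVIDIA measurement.

    # Illustrative latency budget for one cloud-gamed frame. All figures are
    # assumptions for the sake of arithmetic, not measured values.
    budget_ms = {
        "input travels to the data center": 15,
        "game simulation + GPU render": 17,        # roughly one frame at 60 fps
        "H.265/H.264 encode": 5,
        "video transits back to the player": 15,
        "decode + display on the device": 10,
    }
    print(f"total: {sum(budget_ms.values())} ms")  # 62 ms, under a tenth of a second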

Growing Cloud Gaming Around the World

With the ambition to deliver quality cloud gaming to all gamers, NVIDIA works with partners around the world including telecommunications and service providers to put GeForce NOW servers to work in their own data centers, ensuring lightning-fast connections.

Partners that have already deployed RTX cloud gaming servers include SoftBank and KDDI in Japan, LG Uplus in Korea, GFN.RU in Russia, Armenia, Azerbaijan, Belarus, Kazakhstan, Georgia, Moldova, Ukraine and Uzbekistan, Zain in Saudi Arabia and Taiwan Mobile in Taiwan.

Together with partners from around the globe, we’re scaling GeForce NOW to enable millions of gamers to play their favorite games, when and where they want.

Get started with your gaming adventures on GeForce NOW.

Editor’s note: This is the first in a series on the GeForce NOW game-streaming service, how it works, ways you can make the most of it, and where it’s going next. 

In our next blog, we’ll talk about how we bring your games to GeForce NOW.

Follow GeForce NOW on Facebook and Twitter and stay up to date on the latest features and game launches. 


In the Drink of an AI: Startup Opseyes Instantly Analyzes Wastewater

Let’s be blunt. Potentially toxic waste is just about the last thing you want to get in the mail. And that’s just one of the opportunities for AI to make the business of analyzing wastewater better.

It’s an industry that goes far beyond just making sure water coming from traditional sewage plants is clean.

Just about every industry on earth — from computer chips to potato chips — relies on putting water to work, which means we’re all, literally, swimming in the stuff.

Just What the Doctor Ordered

That mail-bound status quo started to change, however, thanks to a conversation Opseyes founder Bryan Arndt, then a managing consultant with Denmark-based architecture and engineering firm Ramboll, had with his brother, a radiologist.

Arndt was intrigued when his brother described how deep learning was being set loose on medical images.

Arndt quickly realized that the same technology — deep learning — that helps radiologists analyze images of the human body faster and more accurately could almost instantly analyze images, taken through microscopes, of wastewater samples.

Faster Flow

The result, developed by Arndt and his colleagues at Ramboll, a wastewater industry leader for more than 50 years, dramatically speeds up an industry that’s long relied on sending tightly sealed samples of some of the stinkiest stuff on earth through the mail.

That’s critical when cities and towns and industries of all kinds are constantly taking water from lakes and rivers, like the Mississippi, treating it, and returning it to nature.

“We had one client find out their discharge was a quarter-mile, at best, from the intake for the next city’s water supply,” Arndt says. “Someone is always drinking what your tube is putting out.”

That makes wastewater enormously important.

Water, Water, Everywhere

It’s an industry that was kicked off by the 1972 U.S. Clean Water Act, a landmark not just in the United States, but globally.

Thanks to growing awareness of the importance of clean water, analysts estimate the global wastewater treatment market will be worth more than $210 billion by 2025.

The challenge: while almost every industry creates wastewater, wastewater expertise isn’t exactly ubiquitous.

Experts who can peer through a microscope and identify, say, the six most common bacterial “filaments” as they’re known in the industry, or critters such as tardigrades, are scarce.

You’ve Got … Ugh

That means samples of wastewater, or soil containing that water, have to be sent through the mail to get to these experts, who often have a backlog of samples to go through.

While Arndt says people in his industry take precautions to seal potentially toxic waste and track it to ensure it gets to the right place, it’s still time-consuming.

The solution, Arndt realized, was to use deep learning to train an AI that could yield instantaneous results. To do this, last year Arndt reached out on social media and asked colleagues throughout the wastewater industry to send him samples.

Least Sexy Photoshoot Ever

He and his small team then spent months creating more than 6,000 images of these samples in Ramboll’s U.S. labs, where they build elaborate models of wastewater systems before deploying full-scale systems for clients. Think of it as the least sexy photoshoot, ever.

These images were then labeled and used by a data science team led by Robin Schlenga to train a convolutional neural network accelerated by NVIDIA GPUs. Launched last September after a year and a half of development, Opseyes allows customers to use their smartphone to take a picture of a sample through a microscope and get answers within minutes.
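
The recipe behind that kind of build is usually transfer learning, sketched below in PyTorch; the dataset path and training schedule are hypothetical examples rather than Opseyes’ actual pipeline.

    # Sketch of a typical transfer-learning setup for microscope-image classification.
    # Dataset path and training schedule are hypothetical.
    import torch
    import torch.nn as nn
    from torchvision import datasets, models, transforms

    prep = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
    data = datasets.ImageFolder("wastewater_images/", transform=prep)   # ~6,000 labeled images
    loader = torch.utils.data.DataLoader(data, batch_size=32, shuffle=True)

    model = models.resnet50(weights="IMAGENET1K_V2")       # start from an ImageNet backbone
    model.fc = nn.Linear(model.fc.in_features, len(data.classes))
    model = model.cuda()

    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    for epoch in range(10):
        for images, labels in loader:
            opt.zero_grad()
            loss = nn.functional.cross_entropy(model(images.cuda()), labels.cuda())
            loss.backward()
            opt.step()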

It’s just another example of how expertise in companies seemingly far outside of tech can be transformed into an AI. After all, “no one wants to have to wait a week to know if it’s safe to take a sip of water,” Arndt says.

Bottoms up.

Featured image credit: Opseyes


NVIDIA Deep Learning Institute Releases New Accelerated Data Science Teaching Kit for Educators

As data grows in volume, velocity and complexity, the field of data science is booming.

There’s an ever-increasing demand for talent and skillsets to help design the best data science solutions. However, expertise that can help drive these breakthroughs requires students to have a foundation in various tools, programming languages, computing frameworks and libraries.

That’s why the NVIDIA Deep Learning Institute has released the first version of its Accelerated Data Science Teaching Kit for qualified educators. The kit has been co-developed with Polo Chau, from the Georgia Institute of Technology, and Xishuang Dong, from Prairie View A&M University, two highly regarded researchers and educators in the fields of data science and accelerating data analytics with GPUs.

“Data science unlocks the immense potential of data in solving societal challenges and large-scale complex problems across virtually every domain, from business, technology, science and engineering to healthcare, government and many more,” Chau said.

The free teaching materials cover fundamental and advanced topics in data collection and preprocessing, accelerated data science with RAPIDS, GPU-accelerated machine learning, data visualization and graph analytics.
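
The RAPIDS material, for example, centers on drop-in, GPU-accelerated versions of familiar pandas- and scikit-learn-style workflows. A small sketch of that pattern follows; the CSV path and column names are made up for illustration.

    # Small RAPIDS sketch: pandas-style ETL with cuDF and scikit-learn-style
    # clustering with cuML, both on the GPU. File and column names are made up.
    import cudf
    from cuml.cluster import KMeans

    df = cudf.read_csv("sensor_readings.csv")         # loads straight into GPU memory
    df = df.dropna()
    df["reading_z"] = (df["reading"] - df["reading"].mean()) / df["reading"].std()

    km = KMeans(n_clusters=4, random_state=0)
    df["cluster"] = km.fit_predict(df[["reading_z", "temperature"]])
    print(df.groupby("cluster")["reading"].mean())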

Content also covers culturally responsive topics such as fairness and data bias, as well as challenges faced by, and important figures from, underrepresented groups.

This first release of the Accelerated Data Science Teaching Kit includes focused modules covering:

  • Introduction to Data Science and RAPIDS
  • Data Collection and Pre-processing (ETL)
  • Data Ethics and Bias in Data Sets
  • Data Integration and Analytics
  • Data Visualization
  • Distributed Computing with Hadoop, Hive, Spark and RAPIDS

More modules are planned for future releases.

All modules include lecture slides, lecture notes and quiz/exam problem sets, and most modules include hands-on labs with included datasets and sample solutions in Python and interactive Jupyter notebook formats. Lecture videos will be included for all modules in later releases.

DLI Teaching Kits also come bundled with free GPU resources in the form of Amazon Web Services credits for educators and their students, as well as free DLI online, self-paced courses and certificate opportunities.

“Data science is such an important field of study, not just because it touches every domain and vertical, but also because data science addresses important societal issues relating to gender, race, age and other ethical elements of humanity,“ said Dong, whose school is a Historically Black College/University.

This is the fourth teaching kit released by the DLI, as part of its program that has reached 7,000 qualified educators so far. Learn more about NVIDIA Teaching Kits.
