Here Comes the Sun: NASA Scientists Talk Solar Physics

Michael Kirk and Raphael Attie, scientists at NASA’s Goddard Space Flight Center, regularly face terabytes of data in their quest to analyze images of the sun.

This computation, which could take a year or more on a CPU, now takes less than a week on Quadro RTX data science workstations. Kirk and Attie spoke to AI Podcast host Noah Kravitz about the workflow they follow to study these images, and what they hope to find.

The lessons they’ve learned are useful for those in both science and industry grappling with how to best put torrents of data to work.

The researchers study images captured by telescopes on satellites, such as the Solar Dynamics Observatory spacecraft, as well as those from ground-based observatories.

They study these images to identify particles in Earth’s orbit that could damage interplanetary spacecraft, and to track solar surface flows, which allow them to develop models predicting weather in space.

Currently, these images are taken in space and sent to Earth for computation. But Kirk and Attie aim to shoot for the stars in the future: the goal is the ultimate form of edge computing, putting high-performance computers in space.

Key Points From This Episode:

  • The primary instrument that Kirk and Attie use to see images of the sun is the Solar Dynamics Observatory, a spacecraft that has four telescopes to take images of the extreme ultraviolet light of the sun, as well as an additional instrument to measure its magnetic fields.
  • Researchers such as Kirk and Attie have developed machine learning algorithms for a variety of projects, such as creating synthetic images of the sun’s surface and its flow fields.

Tweetables:

“We take an image about once every 1.3 seconds of the sun … that entire data archive — we’re sitting at about 18 petabytes right now.” — Michael Kirk [6:50]

“What AI is really offering us is a way to crunch through terabytes of data that are very difficult to move back to Earth.” — Raphael Attie [34:34]

You Might Also Like

How Breakthrough Listen Harnessed AI in the Search for Aliens

UC Berkeley’s Gerry Zhang talks about his work using deep learning to analyze signals from space for signs of intelligent extraterrestrial civilizations. And while we haven’t found aliens yet, the doctoral student has already made some extraordinary discoveries.

Forget Storming Area 51, AI’s Helping Astronomers Scour the Skies for Habitable Planets

Astronomer Olivier Guyon and professor Damien Gratadour speak about the quest to discover nearby habitable planets using GPU-powered extreme adaptive optics in very large telescopes.

Astronomers Turn to AI as New Telescopes Come Online 

To turn the vast quantities of data that will be pouring out of new telescopes into world-changing scientific discoveries, Brant Robertson, a visiting professor at the Institute for Advanced Study in Princeton and an associate professor of astronomy at UC Santa Cruz, is turning to AI.


Clarifying Training Time, Startup Launches AI-Assisted Data Annotation

Creating a labeled dataset for training an AI application can hit the brakes on a company’s speed to market. Clarifai, an image and text recognition startup, aims to put that obstacle in the rearview mirror.

The New York City-based company today announced the general availability of its AI-assisted data labeling service, dubbed Clarifai Labeler. The company offers data labeling as a service as well.

Founded in 2013, Clarifai entered the image-recognition market in its early days. Since that time, the number of companies exploiting unstructured data for business advantages has swelled, creating a wave of demand for data scientists. And with industry disruption from image and text recognition spanning agriculture, retail, banking, construction, insurance and beyond, much is at stake.

“High-quality AI models start with high-quality dataset annotation. We’re able to use AI to make labeling data an order of magnitude faster than some of the traditional technologies out there,” said Alfredo Ramos, a senior vice president at Clarifai.

Backed by NVIDIA GPU Ventures, Clarifai is gaining traction in retail, banking and insurance, as well as for applications in federal, state and local agencies, he says.

AI Labeling with Benefits

Clarifai’s Labeler shines at labeling video footage. The tool integrates a statistical method so that an annotated object — one with a bounding box around it — can be tracked as it moves throughout the video.

Since each second of video is made up of multiple frames of images, the tracking capabilities result in increased accuracy and huge improvements in the quantity of annotations per object, as well as a drastic reduction in the time to label large volumes of data.
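
Clarifai hasn't detailed Labeler's tracking method, but the annotation-multiplying effect is easy to picture with a toy sketch: hand-label a bounding box on two keyframes, then interpolate the box across the frames in between. This is a generic illustration, not Clarifai's algorithm, and the frame indices and box format are hypothetical.

```python
def interpolate_boxes(kf_a, kf_b):
    """Linearly interpolate bounding boxes between two labeled keyframes.

    kf_a, kf_b: (frame_index, (x, y, w, h)) pairs labeled by a human.
    Returns one box per intermediate frame, multiplying a couple of
    manual annotations into per-frame labels.
    """
    (fa, box_a), (fb, box_b) = kf_a, kf_b
    boxes = []
    for f in range(fa + 1, fb):
        t = (f - fa) / (fb - fa)  # progress between keyframes, 0..1
        box = tuple(a + t * (b - a) for a, b in zip(box_a, box_b))
        boxes.append((f, box))
    return boxes

# Two manual labels become 29 interpolated boxes for one second of 30-fps video.
auto_labels = interpolate_boxes((0, (100, 80, 40, 40)), (30, (160, 90, 42, 44)))
```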

The new Labeler was most recently used to annotate days of video footage to build a model to detect whether people were wearing face masks, which resulted in a million annotations in less than four days.

Labeling the individual frames would traditionally have taken a human workforce six weeks. With Labeler, the team created 1 million annotations 10 times faster, said Ramos.

Clarifai uses an array of NVIDIA V100 Tensor Core GPUs onsite for development of models, and it taps into NVIDIA T4 GPUs in the cloud for inference.

Star-Powered AI 

Ramos reports to one of AI’s academic champions. CEO and founder Matthew Zeiler took the industry by storm when his neural networks dominated the ImageNet Challenge in 2013. That became his launchpad for Clarifai.

Zeiler has since evolved his research into developer-friendly products that allow enterprises to quickly and easily integrate AI into their workflows and customer experiences. The company continues to attract new customers, most recently with the release of its natural language processing product.

While much has changed in the industry, Clarifai’s focus on research hasn’t.

“We have a sizable team of researchers, and we have become adept at taking some of the best research out there in the academic world and very quickly deploying it for commercial use,” said Ramos.

Clarifai is a member of NVIDIA Inception, a virtual accelerator program that helps startups in AI and data science get to market faster.

Image credit: Chris Curry via Unsplash.


Mass General’s Martinos Center Adopts AI for COVID, Radiology Research

Academic medical centers worldwide are building new AI tools to battle COVID-19 — including at Mass General, where one center is adopting NVIDIA DGX A100 AI systems to accelerate its work.

Researchers at the hospital’s Athinoula A. Martinos Center for Biomedical Imaging are working on models to segment and align multiple chest scans, calculate lung disease severity from X-ray images, and combine radiology data with other clinical variables to predict outcomes in COVID patients.

Built and tested using Mass General Brigham data, these models, once validated, could be used together in a hospital setting during and beyond the pandemic to bring radiology insights closer to the clinicians tracking patient progress and making treatment decisions.

“While helping hospitalists on the COVID-19 inpatient service, I realized that there’s a lot of information in radiologic images that’s not readily available to the folks making clinical decisions,” said Matthew D. Li, a radiology resident at Mass General and member of the Martinos Center’s QTIM Lab. “Using deep learning, we developed an algorithm to extract a lung disease severity score from chest X-rays that’s reproducible and scalable — something clinicians can track over time, along with other lab values like vital signs, pulse oximetry data and blood test results.”

The Martinos Center uses a variety of NVIDIA AI systems, including NVIDIA DGX-1, to accelerate its research. This summer, the center will install NVIDIA DGX A100 systems, each built with eight NVIDIA A100 Tensor Core GPUs and delivering 5 petaflops of AI performance.

“When we started working on COVID model development, it was all hands on deck. The quicker we could develop a model, the more immediately useful it would be,” said Jayashree Kalpathy-Cramer, director of the QTIM lab and the Center for Machine Learning at the Martinos Center. “If we didn’t have access to the sufficient computational resources, it would’ve been impossible to do.”

Comparing Notes: AI for Chest Imaging

COVID patients often get imaging studies — usually CT scans in Europe, and X-rays in the U.S. — to check for the disease’s impact on the lungs. Comparing a patient’s initial study with follow-ups can be a useful way to understand whether a patient is getting better or worse.

But segmenting and lining up two scans that have been taken in different body positions or from different angles, with distracting elements like wires in the image, is no easy feat.

Bruce Fischl, director of the Martinos Center’s Laboratory for Computational Neuroimaging, and Adrian Dalca, assistant professor in radiology at Harvard Medical School, took the underlying technology behind Dalca’s MRI comparison AI and applied it to chest X-rays, training the model on an NVIDIA DGX system.

“Radiologists spend a lot of time assessing if there is change or no change between two studies. This general technique can help with that,” Fischl said. “Our model labels 20 structures in a high-resolution X-ray and aligns them between two studies, taking less than a second for inference.”

This tool can be used in concert with Li and Kalpathy-Cramer’s research: a risk assessment model that analyzes a chest X-ray to assign a score for lung disease severity. The model can provide clinicians, researchers and infectious disease experts with a consistent, quantitative metric for lung impact, which is described subjectively in typical radiology reports.

Trained on a public dataset of over 150,000 chest X-rays, as well as a few hundred COVID-positive X-rays from Mass General, the severity score AI is being used for testing by four research groups at the hospital using the NVIDIA Clara Deploy SDK. Beyond the pandemic, the team plans to expand the model’s use to more conditions, like pulmonary edema, or wet lung.
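
The published model is more involved, but the basic shape of a severity-scoring network, a convolutional backbone ending in a single regression output, can be sketched in Keras roughly as follows. The layer sizes, input resolution and loss are illustrative assumptions, not the authors' architecture.

```python
import tensorflow as tf
from tensorflow.keras import layers

# A minimal sketch of a severity-regression CNN for chest X-rays.
# Layer sizes, input resolution and loss are illustrative assumptions,
# not the authors' published architecture.
model = tf.keras.Sequential([
    layers.Input(shape=(224, 224, 1)),           # grayscale X-ray
    layers.Conv2D(32, 3, activation="relu"), layers.MaxPool2D(),
    layers.Conv2D(64, 3, activation="relu"), layers.MaxPool2D(),
    layers.Conv2D(128, 3, activation="relu"),
    layers.GlobalAveragePooling2D(),
    layers.Dense(1),                             # continuous severity score
])
model.compile(optimizer="adam", loss="mse")
# model.fit(xray_batches, severity_labels, ...) once the dataset is prepared.
```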

Comparing the AI lung disease severity score, or PXS, between images taken at different stages can help clinicians track changes in a patient’s disease over time. (Image from the researchers’ paper in Radiology: Artificial Intelligence, available under open access.)

Foreseeing the Need for Ventilators

Chest imaging is just one variable in a COVID patient’s health. For the broader picture, the Martinos Center team is working with Brandon Westover, executive director of Mass General Brigham’s Clinical Data Animation Center.

Westover is developing AI models that predict clinical outcomes for both admitted patients and outpatient COVID cases, and Kalpathy-Cramer’s lung disease severity score could be integrated as one of the clinical variables for this tool.

The outpatient model analyzes 30 variables to create a risk score for each of hundreds of patients screened at the hospital network’s respiratory infection clinics — predicting the likelihood a patient will end up needing critical care or dying from COVID.

For patients already admitted to the hospital, a neural network predicts the hourly risk that a patient will require artificial breathing support in the next 12 hours, using variables including vital signs, age, pulse oximetry data and respiratory rate.

“These variables can be very subtle, but in combination can provide a pretty strong indication that a patient is getting worse,” Westover said. Running on an NVIDIA Quadro RTX 8000 GPU, the model is accessible through a front-end portal clinicians can use to see who’s most at risk, and which variables are contributing most to the risk score.
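
Westover's model itself isn't public, but the general pattern, a small network that maps a vector of clinical variables to a risk probability, might be sketched in PyTorch like this. The feature count, layer widths and output interpretation are assumptions for illustration.

```python
import torch
import torch.nn as nn

# Illustrative sketch: map ~30 clinical variables (vitals, age, pulse
# oximetry data, respiratory rate, ...) to the probability that a patient
# will need breathing support in the next 12 hours. Not the actual model.
class RiskNet(nn.Module):
    def __init__(self, n_features=30):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU(),
            nn.Linear(32, 1), nn.Sigmoid(),  # risk in [0, 1]
        )

    def forward(self, x):
        return self.net(x)

device = "cuda" if torch.cuda.is_available() else "cpu"
model = RiskNet().to(device)
risk = model(torch.randn(1, 30).to(device))  # hourly risk for one patient
```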

Better, Faster, Stronger: Research on NVIDIA DGX

Fischl says NVIDIA DGX systems help Martinos Center researchers more quickly iterate, experimenting with different ways to improve their AI algorithms. DGX A100, with NVIDIA A100 GPUs based on the NVIDIA Ampere architecture, will further speed the team’s work with third-generation Tensor Core technology.

“Quantitative differences make a qualitative difference,” he said. “I can imagine five ways to improve our algorithm, each of which would take seven hours of training. If I can turn those seven hours into just an hour, it makes the development cycle so much more efficient.”

The Martinos Center will use NVIDIA Mellanox switches and VAST Data storage infrastructure, enabling its developers to use NVIDIA GPUDirect technology to bypass the CPU and move data directly into or out of GPU memory, achieving better performance and faster AI training.

“Having access to this high-capacity, high-speed storage will allow us to analyze raw multimodal data from our research MRI, PET and MEG scanners,” said Matthew Rosen, assistant professor in radiology at Harvard Medical School, who co-directs the Center for Machine Learning at the Martinos Center. “The VAST storage system, when linked with the new A100 GPUs, is going to offer an amazing opportunity to set a new standard for the future of intelligent imaging.”

To learn more about how AI and accelerated computing are helping healthcare institutions fight the pandemic, visit our COVID page.

Main image shows a chest X-ray and corresponding heat map, highlighting areas with lung disease. Image from the researchers’ paper in Radiology: Artificial Intelligence, available under open access.


Nerd Watching: GPU-Powered AI Helps Researchers Identify Individual Birds

Anyone can tell an eagle from an ostrich. It takes a skilled birdwatcher to tell a chipping sparrow from a house sparrow from an American tree sparrow.

Now researchers are using AI to take this to the next level — identifying individual birds.

André Ferreira, a Ph.D. student at France’s Centre for Functional and Evolutionary Ecology, harnessed an NVIDIA GeForce RTX 2070 to train a powerful AI that identifies individual birds within the same species.

It’s the latest example of how deep learning has become a powerful tool for wildlife biologists studying a wide range of animals.

Marine biologists with the U.S. National Oceanic and Atmospheric Administration use deep learning to identify and track the endangered North Atlantic right whale. Zoologist Dan Rubenstein uses deep learning to distinguish between individuals in herds of Grevy’s zebras.

The sociable weaver isn’t endangered. But understanding the role of an individual in a group is key to understanding how the birds, native to Southern Africa, work together to build their nests.

The problem: it’s hard to tell the small, rust-colored birds apart, especially when trying to capture their activities in the wild.

In a paper released last week, Ferreira detailed how he and a team of researchers trained a convolutional neural network to identify individual birds.

Ferreira built his model using Keras, a popular open-source neural network library, running on a GeForce RTX 2070 GPU.
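
The published architecture differs in its details, but a minimal Keras classifier of the same general shape, a small convolutional network with one output class per individual bird, might look like the sketch below. The class count, input size and layers are placeholders.

```python
from tensorflow import keras
from tensorflow.keras import layers

# Minimal sketch of a CNN that classifies which individual bird appears
# in a cropped feeder image. Class count, input size and layers are
# placeholders, not the published model.
NUM_BIRDS = 30  # individuals in the study population (hypothetical)

model = keras.Sequential([
    layers.Input(shape=(128, 128, 3)),
    layers.Conv2D(32, 3, activation="relu"), layers.MaxPool2D(),
    layers.Conv2D(64, 3, activation="relu"), layers.MaxPool2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(NUM_BIRDS, activation="softmax"),  # one class per bird
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(feeder_images, bird_ids, ...) with tag-verified labels.
```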

He then teamed up with researchers at Germany’s Max Planck Institute of Animal Behavior. Together, they adapted the model to identify wild great tits and captive zebra finches, two other widely studied bird species.

To train their models — a crucial step in building any modern deep-learning-based AI — the researchers built feeders equipped with cameras.

They fitted the birds with electronic tags that triggered sensors in the feeders, recording each bird’s identity.

This data gave the model a “ground truth” that it could check against for accuracy.

The team’s AI was able to identify individual sociable weavers and wild great tits more than 90 percent of the time. And it identified captive zebra finches 87 percent of the time.

For bird researchers, the work promises several key benefits.

Using cameras and other sensors to track birds allows researchers to study bird behavior much less invasively.

With less need to put people in the field, the technique also lets researchers track bird behavior over longer periods.

Next: Ferreira and his colleagues are working to build AI that can recognize individual birds it has never seen before, and better track groups of birds.

Birdwatching may never be the same.

Featured image credit: Bernard DuPont, some rights reserved.


AI Goes Uptown: A Tour of Smart Cities Around the Globe 

There are as many ways to define a smart city as there are cities on the road to being smart.

From London and Singapore to Seat Pleasant, Maryland, they vary widely. Most share some common characteristics.

Every city wants to be smart about being a great place to live. So, many embrace broad initiatives for connecting their citizens to the latest 5G and fiber optic networks, expanding digital literacy and services.

Most agree that a big part of being smart means using technology to make their cities more self-aware, automated and efficient.

That’s why a smart city is typically a kind of municipal Internet of Things — a network of cameras and sensors that can see, hear and even smell. These sensors, especially video cameras, generate massive amounts of data that can serve many civic purposes like helping traffic flow smoothly.

Cities around the globe are turning to AI to sift through that data in real time for actionable insights. And, increasingly, smart cities build realistic 3D simulations of themselves: digital twins to test out ideas of what they might look like in the future.

“We define a smart city as a place applying advanced technology to improve the quality of life for people who live in it,” said Sokwoo Rhee, who’s worked on more than 200 smart city projects in 25 countries as an associate director for cyber-physical systems innovation at the U.S. National Institute of Standards and Technology.

U.S., London Issue Smart Cities Guidebooks

At NIST, Rhee oversees work on a guide for building smart cities. Eventually it will include reports on issues and case studies in more than two dozen areas from public safety to water management systems.

Across the pond, London describes its smart city efforts in a 60-page document that details many ambitious goals. Like smart cities from Dubai to San Jose in Silicon Valley, it’s a metro-sized work in progress.

An image from the Smart London guide.

“We are far from the ideal at the moment with a multitude of systems and a multitude of vendors making the smart city still somewhat complex and fragmented,” said Andrew Hudson-Smith, who is chair of digital urban systems at The Centre for Advanced Spatial Analysis at University College London and sits on a board that oversees London’s smart city efforts.

Living Labs for AI

In a way, smart cities are both kitchen sinks and living labs of technology.

They host everything from air-quality monitoring systems to repositories of data cleared for use in shared AI projects. The London Datastore, for example, already contains more than 700 publicly available datasets.

One market researcher tracks a basket of 13 broad areas that define a smart city from smart streetlights to connected garbage cans. A smart-parking vendor in Stockholm took into account 24 factors — including the number of Wi-Fi hotspots and electric-vehicle charging stations — in its 2019 ranking of the world’s 100 smartest cities. (Its top five were all in Scandinavia.)

“It’s hard to pin it down to a limited set of technologies because everything finds its way into smart cities,” said Dominique Bonte, a managing director at market watcher ABI Research. Among popular use cases, he called out demand-response systems as “a huge application for AI because handling fluctuating demand for electricity and other services is a complex problem.”

Sweden’s EasyPark lists 24 factors that define a smart city.

Because it’s broad, it’s also big. Market watchers at Navigant Research expect the global market for smart-city gear to grow from $97.4 billion in annual revenue in 2019 to $265.4 billion by 2028 at a compound annual growth rate of 11.8 percent.

It’s still early days. In a January 2019 survey of nearly 40 U.S. local and state government managers, more than 80 percent thought a municipal Internet of Things would have a significant impact on their operations, but most were still in a planning phase and fewer than 10 percent had active projects.

Most smart cities are still under construction, according to a NIST survey.

“Smart cities mean many things to many people,” said Saurabh Jain, product manager of Metropolis, NVIDIA’s GPU software stack for vertical markets such as smart cities.

“Our focus is on building what we call the AI City with the real jobs that can be done today with deep learning, tapping into the massive video and sensor datasets cities generate,” he said.

For example, Verizon deployed video nodes using the NVIDIA Jetson TX1 on existing streetlights in Boston and Sacramento to analyze and improve traffic flow, enhance pedestrian safety and optimize parking.

“Rollout is happening fast across the globe and cities are expanding their lighting infrastructure to become a smart-city platform … helping to create efficiency savings and a new variety of citizen services,” said David Tucker, head of product management in the Smart Communities Group at Verizon, in a 2018 article.

Smart Streetlights for Smart Cities

Streetlights will be an important part of the furniture of tomorrow’s smart city.

So far, only a few hundred are outfitted with various mixes of sensors and Wi-Fi and cellular base stations. The big wave is yet to come as the estimated 360 million posts around the world slowly upgrade to energy-saving LED lights.

A European take on a smart streetlight.

In a related effort, the city of Bellevue, Washington, tested a computer vision system from Microsoft Research to improve traffic safety and reduce congestion. Researchers at the University of Wollongong recently described similar work using NVIDIA Jetson TX2 modules to track the flow of vehicles and pedestrians in Liverpool, Australia.

Airports, retail stores and warehouses are already using smart cameras and AI to run operations more efficiently. They are defining a new class of edge computing networks that smart cities can leverage.

For example, Seattle-Tacoma International Airport (SEA) will roll out an AI system from startup Assaia that uses NVIDIA GPUs to speed the time to turn around flights.

“Video analytics is crucial in providing full visibility over turnaround activities as well as improving safety,” said an SEA manager in a May report.

Nashville, Zurich Explore the Simulated City

Some smart cities are building digital twins, 3D simulations that serve many purposes.

For example, both Zurich and Nashville will someday let citizens and city officials don goggles at virtual town halls to see simulated impacts of proposed developments.

“The more immersive and fun an experience, the more you increase engagement,” said Dominik Tarolli, director of smart cities at Esri, which is supplying simulation software that runs on NVIDIA GPUs for both cities.

Cities as far apart in geography and population as Singapore and Rennes, France, built digital twins using a service from Dassault Systèmes.

“We recently signed a partnership with Hong Kong and presented examples for a walkability study that required a 3D simulation of the city,” said Simon Huffeteau, a vice president working on smart cities for Dassault.

Europe Keeps an AI on Traffic

Many smart cities get started with traffic control. London uses digital signs to post speed limits that change to optimize traffic flow. It also uses license-plate recognition to charge tolls for entering a low-emission zone in the city center.

Cities in Belgium and France are considering similar systems.

“We think in the future cities will ban the most polluting vehicles to encourage people to use public transportation or buy electric vehicles,” said Bonte of ABI Research. “Singapore is testing autonomous shuttles on a 5.7-mile stretch of its streets,” he added.

Nearby, Jakarta uses a traffic-monitoring system from Nodeflux, a member of NVIDIA’s Inception program that nurtures AI startups. The software taps AI and the nearly 8,000 cameras already in place around Jakarta to recognize license plates of vehicles with unpaid taxes.

The system is one of more than 100 third-party applications that run on Metropolis, NVIDIA’s application framework for the Internet of Things.

Unsnarling Traffic in Israel and Kansas City

Traffic was the seminal app for a smart-city effort in Kansas City that started in 2015 with a $15 million smart streetcar. Today, residents can call up digital dashboards detailing current traffic conditions around town.

And in Israel, the city of Ashdod deployed AI software from viisights that helps it understand patterns in a traffic monitoring system powered by NVIDIA Metropolis and keep its citizens safe.

NVIDIA created the AI City Challenge to advance work on deep learning as a tool to unsnarl traffic. Now in its fourth year, it draws nearly 1,000 researchers competing in more than 300 teams that include members from multiple city and state traffic agencies.

The event spawned CityFlow, one of the world’s largest datasets for applying AI to traffic management. It consists of more than three hours of synchronized high-definition videos from 40 cameras at 10 intersections, creating 200,000 annotated bounding boxes around vehicles captured from different angles under various conditions.

Drones to the Rescue in Maryland

You don’t have to be a big city with lots of money to be smart. Seat Pleasant, Maryland, a Washington, D.C., suburb of fewer than 5,000 people, launched a digital hub for city services in August 2017.

Since then, it has installed intelligent lighting, connected waste cans, home health monitors and video analytics to save money, improve traffic safety and reduce crime. It’s also become the first U.S. city to use drones for public safety, including plans for life-saving delivery of emergency medicines.

The idea got its start when Mayor Eugene Grant, searching for ways to recover from the 2008 economic downturn, attended an event on innovation villages.

“Seat Pleasant would like to be a voice for small cities in America where 80 percent have less than 10,000 residents,” said Grant. “Look at these cities as test beds of innovation … living labs,” he added.

Mayor Grant of Seat Pleasant aims to set an example of how small towns can become smart cities.

Rhee of NIST agrees. “I’m seeing a lot of projects embracing a broadening set of emerging technologies, making smart cities like incubation programs for new businesses like air taxis and autonomous vehicles that can benefit citizens,” he said, noting that even rural communities will get into the act.

Simulating a New Generation of Smart Cities

When the work is done, go to the movies. Hollywood might provide a picture of the next horizon in the same way it sparked some of the current work.

Esri’s tools are used to simulate cities for movies as well as the real world.

Flicks including Blade Runner 2049, Cars, Guardians of the Galaxy and Zootopia used CityEngine, a program from startup Procedural that enables a rule-based approach to constructing simulated cities.

Their work caught the eye of Esri, which acquired the company and bundled the program with its ArcGIS Urban planning tool, now a staple for hundreds of real cities worldwide.

“Games and movies make people expect more immersive experiences, and that requires more computing,” said Tarolli, a co-founder of Procedural and now Esri’s point person on smart cities.


Deep Learning on Tap: NVIDIA Engineer Turns to AI, GPU to Invent New Brew

Some dream of code. Others dream of beer. NVIDIA’s Eric Boucher does both at once, and the result couldn’t be more aptly named.

Full Nerd #1 is a crisp, light-bodied blonde ale perfect for summertime quaffing.

Eric, an engineer in the GPU systems software kernel driver team, went to sleep one night in May wrestling with two problems.

One, he had to wring key information from the often cryptic logs for the systems he oversees to help his team respond to issues faster.

The other: the veteran home brewer wanted a way to brew new kinds of beer.

“I woke up in the morning and I knew just what to do,” Boucher said. “Basically I got both done on one night’s broken sleep.”

Both solutions involved putting deep learning to work on an NVIDIA TITAN V GPU. Such powerful gear tends to encourage this sort of parallel processing, it seems.

Eric, a native of France now based near Sacramento, Calif., began homebrewing two decades ago, inspired by a friend and mentor at Sun Microsystems. He took a break from it when his children were first born.

Now that they’re older, he’s begun brewing again in earnest, using gear in both his garage and backyard, turning to AI for new recipes this spring.

Of course, AI has been used in the past to help humans analyze beer flavors, and even create wild new craft beer names. Eric’s project, however, is more ambitious, because it’s relying on AI to create new beer recipes.

You’ve Got Ale — GPU Speeds New Brew Ideas

For training data, Eric started with the all-grain ale recipes from MoreBeer, a hub for brewing enthusiasts, where he usually shops for recipe kits and ingredients.

Eric focused on ales because they’re relatively easy and quick to brew, and encompass a broad range of different styles, from hearty Irish stout to tangy and refreshing Kölsch.

He used wget — an open source program that retrieves content from the web — to save four index pages of ale recipes.

Then, using a Python script, he filtered the downloaded HTML pages and fetched the linked recipe PDFs. He then converted the PDFs to plain text and used another Python script to interpret the text and generate recipes in a standardized format.

He fed these 108 recipes — including one for Russian River Brewing’s legendary Pliny the Elder IPA — to textgenrnn, a recurrent neural network: a type of model that reads a sequence of data and learns to predict what should come next.

And, because no one likes to wait for good beer, he ran it on an NVIDIA TITAN V GPU, which he estimates cut the time to learn from the recipe database from one hour and 45 minutes on a CPU alone to just seven minutes.
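
The training step itself is only a few lines. Here is a sketch of the approach using textgenrnn’s documented API; the file name, epoch count and sampling temperature are guesses, not Eric’s exact settings.

```python
from textgenrnn import textgenrnn

# Train a character-level recurrent network on the standardized recipe
# text, then sample new recipes. Path and hyperparameters are
# illustrative, not Eric's exact settings.
textgen = textgenrnn()
textgen.train_from_file("ale_recipes.txt",  # the 108 standardized recipes
                        new_model=True,     # train from scratch
                        num_epochs=20)
textgen.generate(10, temperature=0.8)       # sample 10 candidate recipes
```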

After a little tuning, Eric generated 10 beer recipes. They ranged from dark stouts to yellowish ales, and in flavor from bitter to light.

To Eric’s surprise, most looked reasonable (though a few were “plain weird and impossible to brew” like a recipe that instructed him to wait 45 days with hops in the wort, or unfermented beer, before adding the yeast).

Speed of Light (Beer)

With the approaching hot California summer in mind, Eric selected a blonde ale.

He was particularly intrigued because the recipe suggested adding Warrior, Cascade and Amarillo hops — the flowers of the herbaceous perennial Humulus lupulus that give good beer a range of flavors, from bitter to citrusy — on an “intriguing schedule.”

The result, Eric reports, was refreshing, “not too sweet, not too bitter,” with “a nice, fresh hops smell and a long, complex finish.”

He dubbed the result Full Nerd #1.

The AI-generated brew became the latest in a long line of brews with witty names Eric has produced, including a bourbon oak-infused beer named, appropriately enough, “The Groot Beer,” in honor of the tree-like creature from Marvel’s “Guardians of the Galaxy.”

Eric’s next AI brewing project: perhaps a dark stout, for winter, or a lager, a light, crisp beer that requires months of cold storage to mature.

For now, however, there’s plenty of good brew to drink. Perhaps too much. Eric usually shares his creations with his martial arts buddies. But with social distancing in place amidst the global COVID-19 pandemic, the five gallons, or forty pints, is more than the light drinker knows what to do with.

Eric, it seems, has found a problem deep learning can’t help him with. Bottoms up.


Fleet Dreams Are Made of These: TuSimple and Navistar to Build Autonomous Trucks Powered by NVIDIA DRIVE

Self-driving trucks are coming to an interstate near you.

Autonomous trucking startup TuSimple and truck maker Navistar recently announced they will build self-driving semi trucks, powered by the NVIDIA DRIVE AGX platform. The collaboration is one of the first to develop autonomous trucks, set to begin production in 2024.

Over the past decade, self-driving truck developers have relied on traditional trucks retrofitted with the sensors, hardware and software necessary for autonomous driving. Building these trucks from the ground up, however, allows companies to custom-build them for the needs of a self-driving system, as well as take advantage of the infrastructure of a mass-production truck manufacturer.

This transition is the first step from research to widespread deployment, said Chuck Price, chief product officer at TuSimple.

“Our technology, developed in partnership with NVIDIA, is ready to go to production with Navistar,” Price said. “This is a significant turning point for the industry.”

Tailor-Made Trucks

Developing a truck to drive on its own takes more than a software upgrade.

Autonomous driving relies on redundant and diverse deep neural networks, all running simultaneously to handle perception, planning and actuation. This requires massive amounts of compute.

The NVIDIA DRIVE AGX platform delivers high-performance, energy-efficient compute to enable AI-powered and autonomous driving capabilities. TuSimple has been using the platform in its test vehicles and pilots, such as its partnership with the United States Postal Service.

Building dedicated autonomous trucks makes it possible for TuSimple and Navistar to develop a centralized architecture optimized for the power and performance of the NVIDIA DRIVE AGX platform. The platform is also automotive grade, meaning it is built to withstand the wear and tear of years driving on interstate highways.

Invaluable Infrastructure

In addition to a customized architecture, developing an autonomous truck in partnership with a manufacturer opens up valuable infrastructure.

Truck makers like Navistar provide nationwide support for their fleets, with local service centers and vehicle tracking. This network is crucial for deploying self-driving trucks that will criss-cross the country on long-haul routes, providing seamless and convenient service to maintain efficiency.

TuSimple is also building out an HD map network of the nation’s highways for the routes its vehicles will travel. Combined with the widespread fleet management network, this infrastructure makes its autonomous trucks appealing to a wide variety of partners — UPS, U.S. Xpress, Penske Truck Leasing and food service supply chain company McLane Inc., a Berkshire Hathaway company, have all signed on to this autonomous freight network.

And backed by the performance of NVIDIA DRIVE AGX, these vehicles will continue to improve, delivering safer, more efficient logistics across the country.

“We’re really excited as we move into production to have a partner like NVIDIA with us the whole way,” Price said.


Stop the Bleeding: AI Startup Deep01 Helps Physicians Evaluate Brain Hemorrhage

During a stroke, a patient loses an estimated 1.9 million brain cells every minute, so interpreting their CT scan even one second quicker is vital to maintaining their health.

To save precious time, Taiwan-based medical imaging startup Deep01 has created AI-based medical imaging software, called DeepCT, to evaluate acute intracerebral hemorrhage (ICH), a type of stroke. The system works with 95 percent accuracy in just 30 seconds per case — about 10 times faster than competing methods.

Founded in 2016, Deep01 is the first AI company in Asia to have FDA clearances in both the U.S. and Taiwan. It’s a member of NVIDIA Inception, a program that helps startups develop, prototype and deploy their AI or data science technology and get to market faster.

The startup recently raised around $3 million for DeepCT, which detects suspected areas of bleeding around the brain and annotates where they’re located on CT scans, notifying physicians of the results.

The software was trained using 60,000 medical images that displayed all types of acute ICH. Deep01 uses a self-developed deep learning framework that runs images and trains the model on NVIDIA GPUs.

“Working with NVIDIA’s robust AI computing hardware, in addition to software frameworks like TensorFlow and PyTorch, allows us to deliver excellent AI inference performance,” said David Chou, founder and CEO of the company.

Making Quick Diagnosis Accessible and Affordable

Strokes are the world’s second-most common cause of death. When stroke patients are ushered into the emergency room, doctors must quickly determine whether the brain is bleeding and what the next steps for treatment should be.

However, many hospitals lack the manpower to perform such timely diagnoses, since only some emergency room doctors specialize in reading CT scans. That gap, according to Chou, is why Deep01 was founded: to offer affordable AI-based solutions to medical institutions.

The 30-second speed with which DeepCT completes an interpretation can help medical practitioners prioritize the patients in most urgent need of treatment.

Helpful for Facilities of All Types and Sizes

DeepCT has helped doctors evaluate more than 5,000 brain scans and is being used in nine medical institutions in Taiwan, ranging from small hospitals to large-scale medical centers.

“The lack of radiologists is a big issue even in large-scale medical centers like the one I work at, especially during late-night shifts when fewer staff are on duty,” said Tseng-Lung Yang, senior radiologist at Kaohsiung Veterans General Hospital in Taiwan.

Geng-Wang Liaw, an emergency physician at Yeezen General Hospital — a smaller facility in Taiwan — agreed that Deep01’s technology helps relieve physical and mental burdens for doctors.

“Doctors in the emergency room may misdiagnose a CT scan at times,” he said. “Deep01’s solution stands by as an assistant 24/7, to give doctors confidence and reduce the possibility for medical error.”

Beyond ICH, Deep01 is working to expand its technology to identify midline shift, a pathological finding that occurs when increased pressure pushes the brain out of its normal position and that is associated with higher mortality.


AI Explains AI: Fiddler Develops Model Explainability for Transparency

Your online loan application just got declined without explanation. Welcome to the AI black box.

Businesses of all stripes turn to AI for computerized decisions driven by data. Yet consumers using applications with AI are left in the dark about how the automated decisions work. And many people working within companies have no idea how to explain the inner workings of AI to customers.

Fiddler Labs wants to change that.

The San Francisco-based startup offers an explainable AI platform that enables companies to explain, monitor and analyze their AI products.

Explainable AI is a growing area of interest for enterprises because those outside of engineering often need to understand how their AI models work.

Using explainable AI, banks can provide reasons to customers for a loan’s rejection, based on data points fed to models, such as maxed credit cards or high debt-to-income ratios. Internally, marketers can strategize about customers and products by knowing more about the data points that drive them.

“This is bridging the gap between hardcore data scientists who are building the models and the business teams using these models to make decisions,” said Anusha Sethuraman, head of product marketing at Fiddler Labs.

Fiddler Labs is a member of NVIDIA Inception, a program that provides companies working in AI and data science with fundamental tools, expertise and marketing support, and helps them get to market faster.

What Is Explainable AI?

Explainable AI is a set of tools and techniques that help explore the math inside an AI model. It can map out the data inputs and their weighted values that were used to arrive at the data output of the model.

All of this, essentially, enables a layperson to study the sausage factory at work inside an otherwise opaque process. The result: explainable AI can help deliver insights into how and why a particular decision was made by a model.
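
Fiddler’s platform is proprietary, but the flavor of the technique can be shown with the open-source shap library, which assigns each input feature a contribution to a single prediction. The dataset and model below are stand-ins, not Fiddler’s API.

```python
import shap
import xgboost

# Generic illustration of per-feature attribution (not Fiddler's API).
# Train a gradient-boosted classifier on a stand-in tabular dataset,
# then explain a single prediction as per-feature contributions.
X, y = shap.datasets.adult()            # census-income data as a stand-in
model = xgboost.XGBClassifier().fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:1])  # explain one applicant

# Rank the features that pushed this prediction up or down the most.
contributions = sorted(zip(X.columns, shap_values[0]),
                       key=lambda kv: abs(kv[1]), reverse=True)
print(contributions[:5])
```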

“There’s often a hurdle to get AI into production. Explainability is one of the things that we think can address this hurdle,” Sethuraman said.

With an ensemble of models often in use, creating this is no easy job.

But Fiddler Labs CEO and co-founder Krishna Gade is up to the task. He previously led the team at Facebook that built the “Why am I seeing this post?” feature to help consumers and internal teams understand how its AI works in the Facebook news feed.

He and Amit Paka — a University of Minnesota classmate — joined forces and quit their jobs to start Fiddler Labs. Paka, the company’s chief product officer, was motivated by his experience at Samsung with shopping recommendation apps and the lack of understanding into how these AI recommendation models work.

Explainability for Transparency

Founded in 2018, Fiddler Labs offers explainability for greater transparency in businesses. It helps companies make better informed business decisions through a combination of data, explainable AI and human oversight, according to Sethuraman.

Fiddler’s tech is used by Hired, a talent and job matchmaking site driven by AI. Fiddler provides real-time reporting on how Hired’s AI models are working. It can generate explanations on candidate assessments and provide bias monitoring feedback, allowing Hired to assess its AI.

Explainable AI needs to be quickly available for consumer fintech applications. That enables customer service representatives to explain automated financial decisions — like loan rejections and robo rates — and build trust with transparency about the process.

The algorithms used for explanations require hefty processing. Sethuraman said Fiddler Labs taps into NVIDIA GPUs in the cloud to make this possible; CPUs aren’t up to the task.

“You can’t wait 30 seconds for the explanations — you want explanations within milliseconds on a lot of different things depending on the use cases,” Sethuraman said.

Visit NVIDIA’s financial services industry page to learn more.

Image credit: Emily Morter, via the Unsplash Photo Community. 


Keeping a Watchful AI: NASA Project Aims to Predict Space Weather Events

While a thunderstorm could knock out your neighborhood’s power for a few hours, a solar storm could knock out electricity grids across all of Earth, possibly taking weeks to recover from.

To try to predict solar storms — which are disturbances on the sun — and their potential effects on Earth, NASA’s Frontier Development Lab (FDL) is running what it calls a geoeffectiveness challenge.

It uses datasets of tracked changes in the magnetosphere — where the Earth’s magnetic field interacts with solar wind — to train AI-powered models that can detect patterns of space weather events and predict their Earth-related impacts.

The training of the models is optimized on NVIDIA GPUs available on Google Cloud, and data exploration is done on RAPIDS, NVIDIA’s open-source suite of software libraries built to execute data science and analytics pipelines entirely on GPUs.

Siddha Ganju, a solutions architect at NVIDIA who was named to Forbes’ 30 under 30 list in 2018, is advising NASA on the AI-related aspects of the challenge.

A deep learning expert, Ganju grew up going to hackathons. She says she’s always been fascinated by how an algorithm can read in between the lines of code.

Now, she’s applying her knowledge to NVIDIA’s automotive and healthcare businesses, as well as to NASA’s AI technical steering committee. She’s also written a book on practical uses of deep learning, published last October.

Modeling Space Weather Impacts with AI

Ganju’s work with the FDL began in 2017, when its founder, James Parr, asked her to start advising the organization. Her current task, advising the geoeffectiveness challenge, seeks to use machine learning to characterize magnetic field perturbations and model the impact of space weather events.

In addition to solar storms, space weather events can include such activities as solar flares, which are sudden flashes of increased brightness on the sun, and solar wind, a stream of charged particles released from it.

Not all space weather events impact the Earth, said Ganju, but we need to be prepared in case one does. For example, a single powerful solar storm could knock out our planet’s telephone networks.

“Even if we’re able to predict the impact of an event just 15 minutes in advance, that gives us enough time to sound the alarm and prepare for potential connectivity loss,” said Ganju. “This data can also be useful for satellites to communicate in a better way.”

Exploring Spatial and Temporal Patterns

Solar events can impact parts of the Earth differently due to a variety of factors, Ganju said. With the help of machine learning, the FDL is trying to find spatial and temporal patterns of the effects.

“The datasets we’re working with are huge, since magnetometers collect data on the changes of a magnetic field at a particular location every second,” said Ganju. “Parallel processing using RAPIDS really accelerates our exploration.”
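
As a taste of what that exploration might look like, here is a sketch using cuDF, the RAPIDS GPU DataFrame library; the file layout, column names and threshold are hypothetical.

```python
import cudf

# Load one-second magnetometer samples onto the GPU and look for large
# second-to-second perturbations. File and column names are hypothetical.
df = cudf.read_csv("magnetometer_station.csv",
                   parse_dates=["timestamp"])

for axis in ("bx", "by", "bz"):
    df[f"d_{axis}"] = df[axis].diff()   # per-second change in each component

# Flag samples where any component jumps more than a chosen threshold.
spikes = df[(df.d_bx.abs() > 5.0) |
            (df.d_by.abs() > 5.0) |
            (df.d_bz.abs() > 5.0)]
print(spikes.head())
```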

In addition to Ganju, researchers Asti Bhatt, Mark Cheung and Ryan McGranaghan, as well as NASA’s Lika Guhathakurta, are advising the geoeffectiveness challenge team. Its members include Téo Bloch, Banafsheh Ferdousi, Panos Tigas and Vishal Upendran.

The researchers use RAPIDS to explore the data quickly. Then, using the PyTorch and TensorFlow software libraries, they train the models for experiments to identify how the latitude of a location, the atmosphere above it, or the way sun rays hit it affect the consequences of a space weather event.

They’re also studying whether an earthly impact happens immediately as the space event occurs, or if it has a delayed effect, as an impact could depend on time-related factors, such as the Earth’s revolutions around the sun or its rotation about its own axis.

To detect such patterns, the team will continue to train the model and analyze data throughout the duration of FDL’s eight-week research sprint, which concludes later this month.

Other FDL projects participating in the sprint, according to Ganju, include the moon for good challenge, which aims to discover the best landing position on the moon. Another is the astronaut health challenge, which is investigating how high-radiation environments can affect an astronaut’s well-being.

The FDL is holding a virtual U.S. Space Science & AI showcase on August 14, where the 2020 challenges will be presented. Register for the event here.

Feature image courtesy of NASA.
