Rise and Sunshine: NASA Uses Deep Learning to Map Flows on Sun’s Surface, Predict Solar Flares

Looking directly at the sun isn’t recommended — unless you’re doing it with AI, which is what NASA is working on.

The surface of the sun, the layer visible to the naked eye, is actually bubbly: intense heat creates a boiling motion, much like water at high temperature. So when NASA researchers magnify images of the sun through a telescope, they can see tiny blobs, called granules, moving across the surface.

Studying the movement and flows of the granules helps the researchers better understand what’s happening underneath that outer layer of the sun.

The computations for tracking the motion of granules require advanced imaging techniques. Using data science and GPU computing with NVIDIA Quadro RTX-powered HP Z8 workstations, NASA researchers have developed deep learning techniques to more easily track the flows on the sun’s surface.

RTX Flares Up Deep Learning Performance

When studying how storms and hurricanes form, meteorologists analyze the flows of winds in Earth’s atmosphere. In much the same way, it’s important to measure the flows of plasma in the sun’s atmosphere to learn more about the short- and long-term evolution of our nearest star.

This helps NASA understand and anticipate events like solar flares, which can disrupt power grids and communication systems like GPS and radio, and even put space travel at risk because of the intense radiation and charged particles associated with space weather.

“It’s like predicting earthquakes,” said Michael Kirk, research astrophysicist at NASA. “Since we can’t see very well beneath the surface of the sun, we have to take measurements from the flows on the exterior to infer what is happening subsurface.”

Granules are transported by the motion of plasma, the hot ionized gas beneath the surface. To capture these motions, NASA developed customized algorithms tailored to its solar observations, built around a deep learning neural network that observes the granules in images from the Solar Dynamics Observatory and learns how to reconstruct their motions.

“Neural networks can generate estimates of plasma motions at resolutions beyond what traditional flow tracking methods can achieve,” said Benoit Tremblay from the National Solar Observatory. “Flow estimates are no longer limited to the surface — deep learning can look for a relationship between what we see on the surface and the plasma motions at different altitudes in the solar atmosphere.”

“We’re training neural networks using synthetic images of these granules to learn the flow fields, so it helps us understand precursor environments that surround the active magnetic regions that can become the source of solar flares,” said Raphael Attie, solar astronomer at NASA’s Goddard Space Flight Center.

NVIDIA GPUs were essential for training the neural networks: NASA needed to run several training sessions with data preprocessed in multiple ways to develop robust deep learning models, and CPUs alone couldn’t keep up with the computations.

When using TensorFlow on a compute node with 72 CPU cores, it took an hour to complete just one pass through the training data. Even in a CPU-based cloud environment, it would still take weeks to train all the models the scientists needed for a single project.

With an NVIDIA Quadro RTX 8000 GPU, the researchers can complete one training in about three minutes — a 20x speedup. This allows them to start testing the trained models after a day instead of having to wait weeks.
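
For readers who want a feel for the workflow, here’s a minimal sketch, assuming TensorFlow and synthetic stand-in data, of the kind of training pass being timed: a small convolutional network that maps pairs of granule images to a per-pixel flow field, run after confirming TensorFlow can see the GPU. The model shape, image size and data are illustrative, not NASA’s actual pipeline.

```python
import time
import numpy as np
import tensorflow as tf

# Confirm TensorFlow can see the GPU before training.
print("GPUs visible:", tf.config.list_physical_devices("GPU"))

# Hypothetical stand-in for the real data: pairs of consecutive granule
# images (2 channels) and the corresponding 2-component flow field.
x = np.random.rand(256, 128, 128, 2).astype("float32")
y = np.random.rand(256, 128, 128, 2).astype("float32")

# A small fully convolutional network that regresses a flow vector per pixel.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(128, 128, 2)),
    tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu"),
    tf.keras.layers.Conv2D(64, 3, padding="same", activation="relu"),
    tf.keras.layers.Conv2D(2, 3, padding="same"),  # (vx, vy) per pixel
])
model.compile(optimizer="adam", loss="mse")

start = time.time()
model.fit(x, y, batch_size=16, epochs=1, verbose=0)  # one pass over the data
print(f"One training pass took {time.time() - start:.1f} s")
```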

“This incredible speedup enables us to try out different ways to train the models and make ‘stress tests,’ like preprocessing images at different resolutions or introducing synthetic errors to better emulate imperfections in the telescopes,” said Attie. “That kind of accelerated workflow completely changed the scope of what we can afford to explore, and it allows us to be much more daring and creative.”

With NVIDIA Quadro RTX GPUs, the NASA researchers can accelerate workflows for their solar physics projects, and they have more time to conduct thorough research with simulations to gain deeper understandings of the sun’s dynamics.

Learn more about NVIDIA and HP data science workstations, and listen to the AI Podcast with NASA.

Pixel Perfect: V7 Labs Automates Image Annotation for Deep Learning Models

Cells under a microscope, grapes on a vine and species in a forest are just a few of the things that AI can identify using the image annotation platform created by startup V7 Labs.

Whether a user wants AI to detect and label images showing equipment in an operating room or livestock on a farm, the London-based company offers V7 Darwin, an AI-powered web platform with a trained model that already knows what almost any object looks like, according to Alberto Rizzoli, co-founder of V7 Labs.

It’s a boon for small businesses and other users that are new to AI or want to reduce the costs of training deep learning models with custom data. Users can load their data onto the platform, which then segments objects and annotates them. It also allows for training and deploying models.

V7 Darwin is trained on several million images and optimized on NVIDIA GPUs. The startup is also exploring the use of NVIDIA Clara Guardian, which includes the NVIDIA DeepStream SDK intelligent video analytics framework on edge AI embedded systems. So far, it’s piloted laboratory perception, quality inspection and livestock monitoring projects, using NVIDIA Jetson AGX Xavier and Jetson TX2 modules for the edge deployment of trained models.

V7 Labs is a member of NVIDIA Inception, a program that provides AI startups with go-to-market support, expertise and technology assistance.

Pixel-Perfect Object Classification

“For AI to learn to see something, you need to give it examples,” said Rizzoli. “And to have it accurately identify an object based on an image, you need to make sure the training sample captures 100 percent of the object’s pixels.”

Annotating and labeling an object based on such a level of “pixel-perfect” granular detail takes just two-and-a-half seconds for V7 Darwin — up to 50x faster than a human, depending on the complexity of the image, said Rizzoli.
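
To make “pixel-perfect” concrete, here’s a minimal sketch, assuming NumPy and toy masks, of how an annotation can be scored against ground truth at the pixel level using intersection-over-union. The function and data are illustrative, not part of the V7 Darwin API.

```python
import numpy as np

def pixel_iou(pred_mask: np.ndarray, true_mask: np.ndarray) -> float:
    """Intersection-over-union of two boolean masks of the same shape."""
    pred = pred_mask.astype(bool)
    true = true_mask.astype(bool)
    union = np.logical_or(pred, true).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as a perfect match
    return np.logical_and(pred, true).sum() / union

# Toy example: a 4x4 image where the prediction misses one object pixel.
true_mask = np.array([[0, 1, 1, 0],
                      [0, 1, 1, 0],
                      [0, 0, 0, 0],
                      [0, 0, 0, 0]])
pred_mask = np.array([[0, 1, 1, 0],
                      [0, 1, 0, 0],
                      [0, 0, 0, 0],
                      [0, 0, 0, 0]])
print(f"IoU: {pixel_iou(pred_mask, true_mask):.2f}")  # 0.75
```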

Saving time and costs on image annotation is especially important in healthcare, Rizzoli said. Medical professionals must look at hundreds of thousands of X-ray or CT scans and annotate abnormalities, work that can be automated.

For example, during the COVID-19 pandemic, V7 Labs worked with the U.K.’s National Health Service and Italy’s San Matteo Hospital to develop a model that detects the severity of pneumonia in a chest X-ray and predicts whether a patient will need to enter an intensive care unit.

The company also published an open dataset with over 6,500 X-ray images showing pneumonia, 500 cases of which were caused by COVID-19.

V7 Darwin can be used in a laboratory setting, helping to detect protocol errors and automatically log experiments.

Application Across Industries

Companies in a wide variety of industries beyond healthcare can benefit from V7’s technology.

“Our goal is to capture all of computer vision and make it remarkably easy to use,” said Rizzoli. “We believe that if we can identify a cell under a microscope, we can also identify, say, a house from a satellite. And if we can identify a doctor performing an operation or a lab technician performing an experiment, we can also identify a sculptor or a person preparing a cake.”

Global uses of the platform include assessing the damage of natural disasters, observing the growth of human and animal embryos, detecting caries in dental X-rays, creating autonomous machines to evaluate safety protocols in manufacturing, and allowing farming robots to count their harvests.

Stay up to date with the latest healthcare news from NVIDIA, and explore how AI, accelerated computing, and GPU technology contribute to the worldwide battle against the novel coronavirus on our COVID-19 research hub.

More Than a Wheeling: Boston Band of Roboticists Aim to Rock Sidewalks With Personal Bots

With Lime and Bird scooters covering just about every major U.S. city, you’d think all bets were off for walking. Think again.

Piaggio Fast Forward is staking its future on the idea that people will skip e-scooters or ride-hailing once they take a stroll with its gita robot. A Boston-based subsidiary of the iconic Vespa scooter maker, the company says the recent focus on getting fresh air and walking during the COVID-19 pandemic bodes well for its new robotics concept.

The fashionable gita robot — looking like a curvaceous vintage scooter — can carry up to 40 pounds and automatically keeps stride so you don’t have to lug groceries, picnic goodies or other items on walks. Another mark in gita’s favor: you get your exercise walking to meals and stores, the way people do in Milan and Paris. “Gita” means short trip in Italian.

The robot may turn some heads on the street. That’s because Piaggio Fast Forward parent Piaggio Group, which also makes Moto Guzzi motorcycles, expects sleek, flashy designs under its brand.

The first idea from Piaggio Fast Forward was to automate something like a scooter to autonomously deliver pizzas. “The investors and leadership came from Italy, and we pitched this idea, and they were just horrified,” quipped CEO and founder Greg Lynn.

If the company gets it right, walking could even become fashionable in the U.S. Early adopters have been picking up gita robots since the November debut. The stylish personal gita robot, enabled by the NVIDIA Jetson TX2 supercomputer on a module, comes in signal red, twilight blue or thunder gray.

Gita as Companion

The robot was designed to follow a person. That means the company didn’t have to create a completely autonomous robot that uses simultaneous localization and mapping, or SLAM, to get around fully on its own, said Lynn. And it doesn’t use GPS.

Instead, a gita user taps a button, and the robot’s cameras and sensors immediately capture images that pair it with its leader so it can follow that person.

Using neural networks and the Jetson’s GPU to perform complex image processing tasks, the gita can avoid collisions with people by understanding how people move in sidewalk traffic, according to the company. “We have a pretty deep library of what we call ‘pedestrian etiquette,’ which we use to make decisions about how we navigate,” said Lynn.

Pose-estimation networks with 3D point cloud processing allow it to see the gestures of people to anticipate movements, for example. The company recorded thousands of hours of walking data to study human behavior and tune gita’s networks. It used simulation training much the way the auto industry does, using virtual environments. Piaggio Fast Forward also created environments in its labs for training with actual gitas.

“So we know that if a person’s shoulders rotate at a certain degree relative to their pelvis, they are going to make a turn,” Lynn said. “We also know how close to get to people and how close to follow.”
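
As a rough illustration of that idea, the sketch below, with hypothetical joint names and a made-up threshold, measures how far the shoulder line has twisted relative to the hip line from 2D pose keypoints and flags a likely turn. It isn’t Piaggio Fast Forward’s code; it only shows the geometric signal Lynn describes.

```python
import math

def segment_angle(p1, p2):
    """Angle of the line through p1 -> p2, in degrees, in image coordinates."""
    return math.degrees(math.atan2(p2[1] - p1[1], p2[0] - p1[0]))

def turn_likely(keypoints, threshold_deg=20.0):
    """Flag a probable turn when the shoulder line twists relative to the hips.

    `keypoints` maps joint names to (x, y) pixel coordinates, e.g. the output
    of a pose-estimation network (joint names here are illustrative).
    """
    shoulders = segment_angle(keypoints["left_shoulder"], keypoints["right_shoulder"])
    hips = segment_angle(keypoints["left_hip"], keypoints["right_hip"])
    twist = abs((shoulders - hips + 180.0) % 360.0 - 180.0)  # wrap to [0, 180]
    return twist > threshold_deg, twist

# Toy frame: the shoulders have rotated roughly 27 degrees relative to the pelvis.
frame = {
    "left_shoulder": (100, 200), "right_shoulder": (180, 240),
    "left_hip": (110, 320), "right_hip": (190, 320),
}
likely, twist = turn_likely(frame)
print(f"twist = {twist:.1f} deg, turn likely: {likely}")
```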

‘Impossible’ Without Jetson 

The robot has a stereo depth camera to understand the speed and distance of moving people, plus three other cameras that watch for pedestrians to help with path planning. The ability to do split-second inference to make sidewalk navigation decisions was important.

“We switched over and started to take advantage of CUDA for all the parallel processing we could do on the Jetson TX2,” said Lynn.

Piaggio Fast Forward used lidar on its early prototype robots, which were tethered to a bulky desktop computer, a setup that cost tens of thousands of dollars in all. It needed a compact, energy-efficient and affordable embedded AI processor to sell its robot at a reasonable price.

“We have hundreds of machines out in the world, and nobody is joy-sticking them out of trouble. It would have been impossible to produce a robot for $3,250 if we didn’t rely on the Jetson platform,” he said.

Enterprise Gita Rollouts

Gita robots have been off to a good start in U.S. sales with early technology adopters, according to the company, which declined to disclose unit sales. They have also begun to roll out in enterprise customer pilot tests, said Lynn.   

Cincinnati-Northern Kentucky International Airport is running gita pilots for delivery of merchandise purchased in airports as well as food and beverage orders from mobile devices at the gates.

Piaggio Fast Forward is also working with retailers that are experimenting with gita robots for curbside deliveries, which have grown in popularity as shoppers avoid going inside stores.

The company is also in discussions with residential communities exploring the use of gita robots to replace golf carts and encourage walking in new developments.

Piaggio Fast Forward plans to launch several variations in the gita line of robots by next year.

“Rather than do autonomous vehicles to move people around, we started to think about a way to unlock the walkability of people’s neighborhoods and of businesses,” said Lynn.

 

Piaggio Fast Forward is a member of NVIDIA Inception, a virtual accelerator program that helps startups in AI and data science get to market faster.

Sterling Support: SHIELD TV’s 25th Software Upgrade Now Available

With NVIDIA SHIELD TV, there’s always more to love.

Today’s software update — SHIELD Software Experience Upgrade 8.2 — is the 25th for owners of the original SHIELD TV. It’s a remarkable run, spanning more than 5 years since the first SHIELD TVs launched in May 2015.

The latest upgrade brings a host of new features and improvements for daily streamers and media enthusiasts.

Stream On

One of the fan-favorite features for the newest SHIELD TVs is the AI upscaler. It works by training a neural network model on countless images. Deployed on 2019 SHIELD TVs, the AI model can then take low-resolution video and produce incredible sharpness and enhanced details no traditional scaler can recreate. Edges look sharper. Hair looks scruffier. Landscapes pop with striking clarity.

To see the difference between “basic upscaling” and “AI-enhanced upscaling” on SHIELD, click the image below and move the slider left and right.

Today’s upgrade extends UHD 4K upscaling to more content, from 360p up to 1440p. And on the 2019 SHIELD TV Pro, we added support for 60fps content. Now SHIELD can upscale live sports on HD TV and HD video from YouTube to 4K with AI. In the weeks ahead, following an update to the NVIDIA Games app in September, we’ll add 4K 60fps upscaling to GeForce NOW.

The customizable menu button on the new SHIELD remote is another popular addition to the family. It’s getting two more actions to customize.

In addition to an action assigned to a single press, users can now configure a custom action for double press and long press. With over 25 actions available, the SHIELD remote is now the most customizable remote for streamers. This powerful feature works with all SHIELD TVs and the SHIELD TV app, available on the Google Play Store and iOS App Store.

More to Be Enthusiastic About

We take pride in SHIELD being a streaming media player enthusiasts can be, well, enthusiastic about. With our latest software upgrade, we’re improving our IR and CEC volume control support.

These upgrades include support for digital projectors and keep volume control working even when SHIELD isn’t active. The update also adds IR volume control when using the SHIELD TV app and when you’ve paired your Google Home with SHIELD. And the 2019 SHIELD remote gains IR control to change the input source on TVs, AVRs and soundbars.

Additionally, earlier SHIELD generations — both 2015 and 2017 models — now have an option to match the frame rate of displayed content.

We’ve added native SMBv3 support as well, providing faster and more secure connections between PC and SHIELD. SMBv3 now works without requiring a PLEX media server.

With SHIELD, there’s always more to love. Download the latest software upgrade today, and check out the release notes for a complete list of all the new features and improvements.

Safe Travels: Voyage Intros Ambulance-Grade, Self-Cleaning Driverless Vehicle Powered by NVIDIA DRIVE

Self-driving cars continue to amaze passengers as a truly transformative technology. However, in the time of COVID-19, a self-cleaning car may be even more appealing.

Robotaxi startup Voyage introduced its third-generation vehicle, the G3, this week. The autonomous vehicle, a Chrysler Pacifica Hybrid minivan retrofitted with self-driving technology, is the company’s first designed to operate without a driver and is equipped with an ambulance-grade ultraviolet light disinfectant system to keep passengers healthy.

The new vehicles use the NVIDIA DRIVE AGX Pegasus compute platform to enable the startup’s self-driving AI for robust perception and planning. The automotive-grade platform delivers safety to the core of Voyage’s autonomous fleet.

Given the enclosed space and the proximity of the driver and passengers, ride-hailing currently poses a major risk in a COVID-19 world. By implementing a disinfecting system alongside driverless technology, Voyage is ensuring self-driving cars will continue to develop as a safer, more efficient alternative to everyday mobility.

The G3 vehicle uses an ultraviolet-C system from automotive supplier GHSP to destroy pathogens in the vehicle between rides. UV-C works by inactivating a pathogen’s DNA, blocking its reproductive cycle. It’s been proven to be up to 99.9 percent effective and is commonly used to sterilize ambulances and hospital rooms.

The G3 is production-ready and currently testing on public roads in San Jose, Calif., with production vehicles planned to come out next year.

G3 Compute Horsepower Takes Off with DRIVE AGX Pegasus

Voyage has been using the NVIDIA DRIVE AGX platform in its previous-generation vehicles to power its Shield automatic emergency braking system.

With the G3, the startup is unleashing the 320 TOPS of performance from NVIDIA DRIVE AGX Pegasus to process sensor data and run diverse and redundant deep neural networks simultaneously for driverless operation. Voyage’s onboard computers are automotive grade and safety certified, built to handle the harsh vehicle environment for safe daily operation.

NVIDIA DRIVE AGX Pegasus delivers the compute necessary for level 4 and level 5 autonomous driving.

DRIVE AGX Pegasus is built on two NVIDIA Xavier systems-on-a-chip. Xavier is the first SoC built for autonomous machines and was recently determined by global safety agency TÜV SÜD to meet all applicable requirements of ISO 26262. This stringent assessment means it meets the strictest standard for functional safety.

Xavier’s safety architecture combined with the AI compute horsepower of the DRIVE AGX Pegasus platform delivers the robustness and performance necessary for the G3’s fully autonomous capabilities.

Moving Forward as the World Shelters in Place

As the COVID-19 pandemic continues to limit the way people live and work, transportation must adapt to keep the world moving.

In addition to the UV-C lights, Voyage has also equipped the car with HEPA-certified air filters to ensure safe airflow inside the car. The startup uses its own employees to manage and operate the fleet, enacting strict contact tracing and temperature checks to help minimize virus spread.

The Voyage G3 is equipped with a UV-C light system to disinfect the vehicle between rides.

While these measures are in place to specifically protect against the COVID-19 virus, they demonstrate the importance of an autonomous vehicle as a place where passengers can feel safe. No matter the condition of the world, autonomous transportation translates to a worry-free voyage, every time.

Real-Time Ray Tracing Realized: RTX Brings the Future of Graphics to Millions

Only a dream just a few years ago, real-time ray tracing has become the new reality in graphics because of NVIDIA RTX — and it’s just getting started.

The world’s top gaming franchises, the most popular gaming engines and scores of creative applications across industries are all onboard for real-time ray tracing.

Leading studios, design firms and industry luminaries are using real-time ray tracing to advance content creation and drive new possibilities in graphics, including virtual productions for television, interactive virtual reality experiences, and realistic digital humans and animations.

The Future Group and Riot Games used NVIDIA RTX to deliver the world’s first ray-traced broadcast. Rob Legato, the VFX supervisor for Disney’s recent remake of The Lion King, pointed to real-time rendering on GPUs as the future of creativity. And developers have adopted real-time techniques to create cinematic video game graphics, like ray-traced reflections in Battlefield V, ray-traced shadows in Shadow of the Tomb Raider and path-traced lighting in Minecraft.

These are just a few of many examples.

In early 2018, ILMxLAB, Epic Games and NVIDIA released a cinematic called Star Wars: Reflections. We revealed that the demo was rendered in real time using ray-traced reflections, area light shadows and ambient occlusion — all on a $70,000 NVIDIA DGX workstation packed with four NVIDIA Volta GPUs. This major advancement captured global attention, as real-time ray tracing with that level of fidelity had previously been possible only offline on gigantic server farms.

Fast forward to August 2018, when we announced the GeForce RTX 2080 Ti at Gamescom and showed Reflections running on just one $1,200 GeForce RTX GPU, with the NVIDIA Turing architecture’s RT Cores accelerating ray tracing performance in real time.

Today, over 50 content creation and design applications, including 20 of the leading commercial renderers, have added support for NVIDIA RTX. Real-time ray tracing is more widely available, allowing professionals to have more time for iterating designs and capturing accurate lighting, shadows, reflections, translucence, scattering and ambient occlusion in their images.

RTX Ray Tracing Continues to Change the Game

From product and building designs to visual effects and animation, real-time ray tracing is revolutionizing content creation. RTX allows creative decisions to be made sooner, as designers no longer need to play the waiting game for renders to complete.

Image courtesy of The Future Group.

What was considered impossible just two years ago is now a reality for anyone with an RTX GPU, thanks to the new capabilities of NVIDIA’s Turing architecture that made real-time ray tracing achievable. Its RT Cores accelerate two of the most computationally intensive tasks: bounding volume hierarchy traversal and ray-triangle intersection testing. This frees the streaming multiprocessors, which handle the rest of the work, to focus on programmable shading instead of spending thousands of instruction slots on each ray cast.
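
To show what one of those tasks looks like in software, here’s a minimal CPU-side sketch of the classic Möller-Trumbore ray-triangle intersection test in Python. It is illustrative only; RT Cores evaluate this kind of test in fixed-function hardware rather than in code like this.

```python
import numpy as np

def ray_triangle_intersect(origin, direction, v0, v1, v2, eps=1e-8):
    """Moller-Trumbore ray/triangle test.

    Returns the hit distance t along the ray, or None if there is no hit.
    This is the kind of test RT Cores evaluate in dedicated hardware.
    """
    edge1, edge2 = v1 - v0, v2 - v0
    pvec = np.cross(direction, edge2)
    det = np.dot(edge1, pvec)
    if abs(det) < eps:          # ray parallel to the triangle plane
        return None
    inv_det = 1.0 / det
    tvec = origin - v0
    u = np.dot(tvec, pvec) * inv_det
    if u < 0.0 or u > 1.0:
        return None
    qvec = np.cross(tvec, edge1)
    v = np.dot(direction, qvec) * inv_det
    if v < 0.0 or u + v > 1.0:
        return None
    t = np.dot(edge2, qvec) * inv_det
    return t if t > eps else None

# A ray shot down the z-axis toward a unit triangle in the z=0 plane.
hit = ray_triangle_intersect(
    origin=np.array([0.25, 0.25, 1.0]), direction=np.array([0.0, 0.0, -1.0]),
    v0=np.array([0.0, 0.0, 0.0]), v1=np.array([1.0, 0.0, 0.0]), v2=np.array([0.0, 1.0, 0.0]))
print("hit distance:", hit)  # 1.0
```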

Turing’s Tensor Cores let users apply AI denoising to generate clean images quickly. Together, these features make real-time ray tracing possible. Creative teams can render images faster, complete more iterations and finish projects with cinematic, photorealistic graphics.

“Ray tracing, especially real-time ray tracing, brings the ground truth to an image and allows the viewer to make immediate, sometimes subconscious decisions about the information,” said Jon Peddie, president of Jon Peddie Research. “If it’s entertainment, the viewer is not distracted and taken out of the story by artifacts and nagging suspension of belief. If it’s engineering, the user knows the results are accurate and can move closer and more quickly to a solution.”

Artists can now use a single GPU for real-time ray tracing to create high-quality imagery, and they can tap into RTX in numerous ways. Popular game engines Unity and Unreal Engine are leveraging RTX. GPU renderers like V-Ray, Redshift and Octane are adopting OptiX for RTX acceleration. And workstation vendors like BOXX, Dell, HP, Lenovo and Supermicro offer systems capable of real-time ray tracing, allowing users to harness the power of RTX in a single, flexible desktop or mobile workstation.

RTX GPUs also provide the memory required for handling massive datasets, whether it’s complex geometry or large numbers of high-resolution textures. The NVIDIA Quadro RTX 8000 GPU provides a 48GB frame buffer, and with NVLink high-speed interconnect technology doubling that capacity, users can easily manipulate massive, complex scenes without spending time constantly decimating or optimizing their datasets.

“DNEG’s virtual production department has taken on an ever increasing amount of work, particularly over recent months where practical shoots have become more difficult,” said Stephen Willey, head of technology at DNEG. “NVIDIA’s RTX and Quadro Sync solutions, coupled with Epic’s Unreal Engine, have allowed us to create far larger and more realistic real-time scenes and assets. These advances help us offer exciting new possibilities to our clients.”

More recently, NVIDIA introduced techniques to further improve ray tracing and rendering. With Deep Learning Super Sampling, users can enhance real-time rendering through AI-based super resolution. NVIDIA DLSS allows them to render fewer pixels and use AI to construct sharp, higher-resolution images.

At SIGGRAPH this month, one of our research papers dives deep into how to render dynamic direct lighting and shadows from millions of area lights in real time using a new technique called reservoir-based spatiotemporal importance resampling, or ReSTIR.
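
The core primitive behind ReSTIR is weighted reservoir sampling, which picks one sample from a stream of weighted candidates without storing them all. Below is a simplified, CPU-side sketch with made-up light data; the full technique adds spatial and temporal reuse across pixels and frames.

```python
import random

class Reservoir:
    """Single-sample weighted reservoir, a simplified illustration of the
    building block that ReSTIR-style resampling is built on."""

    def __init__(self):
        self.sample = None      # the light sample currently kept
        self.w_sum = 0.0        # running sum of candidate weights
        self.count = 0          # number of candidates seen

    def update(self, candidate, weight):
        self.w_sum += weight
        self.count += 1
        # Keep the new candidate with probability weight / w_sum.
        if self.w_sum > 0 and random.random() < weight / self.w_sum:
            self.sample = candidate

# Stream 1,000 hypothetical light samples; brighter lights get higher weight.
random.seed(0)
lights = [{"id": i, "intensity": random.uniform(0.0, 10.0)} for i in range(1000)]
res = Reservoir()
for light in lights:
    res.update(light, weight=light["intensity"])
print("kept light:", res.sample["id"], "after", res.count, "candidates")
```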

Image courtesy of Digital Domain.

Real-Time Ray Tracing Opens New Possibilities for Graphics

RTX ray tracing is transforming design across industries today.

In gaming, the quality of RTX ray tracing creates new dynamics and environments in gameplay, allowing players to use reflective surfaces strategically. For virtual reality, RTX ray tracing brings new levels of realism and immersiveness for professionals in healthcare, AEC and automotive design. And in animation, ray tracing is changing the pipeline completely, enabling artists to easily manage and manipulate light geometry in real time.

Real-time ray tracing is also paving the way for virtual productions and believable digital humans in film, television and immersive experiences like VR and AR.

And with NVIDIA Omniverse — the first real-time ray tracer that can scale to any number of GPUs — creatives can simplify collaborative studio workflows with their favorite applications like Unreal Engine, Autodesk Maya and 3ds Max, Substance Painter by Adobe, Unity, SideFX Houdini, and many others. Omniverse is pushing ray tracing forward, enabling users to create visual effects, architectural visualizations and manufacturing designs with dynamic lighting and physically based materials.

Explore the Latest in Ray Tracing and Graphics

Join us at the SIGGRAPH virtual conference to learn more about the latest advances in graphics, and get an exclusive look at some of our most exciting work.

Be part of the NVIDIA community and show us what you can create by participating in our real-time ray tracing contest. The selected winner will receive the latest Quadro RTX graphics card and a free pass to discover what’s new in graphics at NVIDIA GTC, October 5-9.

AI in Action: NVIDIA Showcases New Research, Enhanced Tools for Creators at SIGGRAPH

The future of graphics is here, and AI is leading the way.

At the SIGGRAPH 2020 virtual conference, NVIDIA is showcasing advanced AI technologies that allow artists to elevate storytelling and create stunning, photorealistic environments like never before.

NVIDIA tools and software are behind the many AI-enhanced features being added to creative tools and applications, powering denoising capabilities, accelerating 8K editing workflows, enhancing material creation and more.

Get an exclusive look at some of our most exciting work, including the new NanoVDB library that boosts workflows for visual effects. And check out our groundbreaking research, AI-powered demos and speaking sessions to explore the newest possibilities in real-time ray tracing and AI.

NVIDIA Extends OpenVDB with New NanoVDB

OpenVDB is the industry-standard library used by VFX studios for simulating water, fire, smoke, clouds and other effects. As part of its collaborative effort to advance open source software in the motion picture and media industries, the Academy Software Foundation (ASWF) recently announced GPU acceleration in OpenVDB with the new NanoVDB, bringing faster performance and easier development.

OpenVDB provides a hierarchical data structure and related functions to help with calculating volumetric effects in graphic applications. NanoVDB adds GPU support for the native VDB data structure, which is the foundation for OpenVDB.

With NanoVDB, users can leverage GPUs to accelerate workflows such as ray tracing, filtering and collision detection while maintaining compatibility with OpenVDB. NanoVDB serves as a bridge between an existing OpenVDB workflow and GPU-accelerated rendering or simulation involving static sparse volumes.
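
For intuition about the workload NanoVDB targets, here’s a deliberately simple sketch of ray marching through a volume grid in Python with NumPy. It uses a dense array and none of the NanoVDB API; the point of NanoVDB’s sparse tree is to skip empty space and run this kind of traversal efficiently on the GPU.

```python
import numpy as np

def ray_march(density, origin, direction, step=0.5, n_steps=200):
    """Accumulate opacity along a ray through a dense 3D density grid.

    A CPU-side illustration of the ray-marching workload NanoVDB accelerates;
    it does not use the NanoVDB API or its sparse tree structure.
    """
    direction = direction / np.linalg.norm(direction)
    transmittance = 1.0
    for i in range(n_steps):
        p = origin + direction * (i * step)
        idx = np.floor(p).astype(int)
        if np.any(idx < 0) or np.any(idx >= np.array(density.shape)):
            continue  # outside the grid; a sparse structure would skip this fast
        sigma = density[tuple(idx)]
        transmittance *= np.exp(-sigma * step)
    return 1.0 - transmittance  # accumulated opacity seen along the ray

# A 64^3 grid with a dense blob of "smoke" in the middle.
grid = np.zeros((64, 64, 64), dtype=np.float32)
grid[24:40, 24:40, 24:40] = 0.1
opacity = ray_march(grid, origin=np.array([0.0, 32.0, 32.0]),
                    direction=np.array([1.0, 0.0, 0.0]))
print(f"opacity along ray: {opacity:.3f}")
```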

Hear what some partners have been saying about NanoVDB.

“With NanoVDB being added to the upcoming Houdini 18.5 release, we’ve moved the static collisions of our Vellum Solver and the sourcing of our Pyro Solver over to the GPU, giving artists the performance and more fluid experience they crave,” said Jeff Lait, senior mathematician at SideFX.

“ILM has been an early adopter of GPU technology in simulating and rendering dense volumes,” said Dan Bailey, senior software engineer at ILM. “We are excited that the ASWF is going to be the custodian of NanoVDB, now that it offers an efficient sparse volume implementation on the GPU. We can’t wait to try this out in production.”

“After spending just a few days integrating NanoVDB into an unoptimized ray marching prototype of our next generation renderer, it still delivered an order of magnitude improvement on the GPU versus our current CPU-based RenderMan/RIS OpenVDB reference,” said Julian Fong, principal software engineer at Pixar. “We anticipate that NanoVDB will be part of the GPU-acceleration pipeline in our next generation multi-device renderer, RenderMan XPU.”

Learn more about NanoVDB.

Research Takes the Spotlight

During the SIGGRAPH conference, NVIDIA Research and collaborators will share advanced techniques in real-time ray tracing, along with other breakthroughs in graphics and design.

Learn about a new algorithm that allows artists to efficiently render direct lighting from millions of dynamic light sources. Explore a new world of color through nonlinear color triads, which are an extension of gradients that enable artists to enhance image editing and compression.

And hear from leading experts across the industry as they share insights about the future of design.

Check out all the groundbreaking research and presentations from NVIDIA.

Eye-Catching Demos You Can’t Miss

This year at SIGGRAPH, NVIDIA demos will showcase how AI-enhanced tools and GPU-powered simulations are leading a new era of content creation:

  • Synthesized high-resolution images with StyleGAN2: Developed by NVIDIA Research, StyleGAN uses transfer learning to produce portraits in a variety of painting styles.
  • Mars lander simulation: A high-resolution simulation of retropropulsion is used by NASA scientists to plan how to control the speed and orientation of vehicles under different landing conditions.
  • AI denoising in Blender: RTX AI features like the OptiX Denoiser enhance rendering to deliver an interactive ray-tracing experience.
  • 8K video editing on RTX Studio laptops: GPU acceleration for advanced video editing and visual effects, including AI-based features in DaVinci Resolve, helping editors produce high-quality video and iterate faster.

Check out all the NVIDIA demos and sessions at SIGGRAPH.

More Exciting Graphics News to Come at GTC

The breakthroughs and innovation don’t stop here. Register now to explore more of the latest NVIDIA tools and technologies at GTC, October 5-9.

Need Healthcare? AI Startup Curai Has an App for That

As a child, Neal Khosla became engrossed by the Oakland Athletics baseball team’s “Moneyball” approach of using data analytics to uncover the value and potential of the sport’s players. A few years ago, the young engineer began pursuing similar techniques to improve medical decision-making.

It wasn’t long after Khosla met Xavier Amatriain, who was looking to apply his engineering skills to a higher mission, that the pair founded Curai. The three-year-old startup, based in Palo Alto, Calif., is using AI to improve the entire process of providing healthcare.

The scope of their challenge — transforming how medical care is accessed and delivered — is daunting. But even modest success could bring huge gains to people’s well-being when one considers that more than half of the world’s population has no access to essential health services, and nearly half of the 400,000 deaths a year attributed to incorrect diagnoses are considered preventable.

“When we think about a world where 8 billion people will need access to high-quality primary care, it’s clear to us that our current system won’t work,” said Khosla, Curai’s CEO. “The accessibility of Google is the level of accessibility we need.”

Curai’s efforts to lower the barrier to entry for healthcare for billions of people center on applying GPU-powered AI to connect patients, providers and health coaches via a chat-based application. Behind the scenes, the app is designed to effectively connect all of the healthcare dots, from understanding symptoms to making diagnoses to determining treatments.

“Healthcare as it is now does not scale. There are not enough doctors in the world, and the situation is not going to get better,” Khosla said. “Our hypothesis is that we can not only scale, but also improve the quality of medicine by automating many parts of the process.”

COVID-19 Fans the Flames

The COVID-19 pandemic has only made Curai’s mission more urgent. With healthcare in the spotlight, there is more momentum than ever to bring more efficiency, accessibility and scale to the industry.

Curai’s platform uses AI and machine learning to automate every part of the process. It’s fronted by the company’s chat-based application, which delivers whatever the user needs.

Patients can use it to input information about their conditions, access their medical profiles, chat with providers 24/7, and see where the process stands.

For providers, it puts a next-generation electronic health record system at their fingertips, where they can access all relevant information about a patient’s care. The app also supports providers by offering diagnostic and treatment suggestions based on Curai’s ever improving algorithms.

“Our approach is to meticulously and carefully log and record data about what the practitioners are doing so we can train models that learn from them,” said Amatriain, chief technology officer at Curai. “We make sure that everything we implement in our platform is designed to improve our ‘learning loop’ – our ability to generate training data that improves our algorithms over time.”

Curai’s main areas of AI focus have been natural language processing (for extracting data from medical conversations), medical reasoning (for providing diagnosis and treatment recommendations) and image processing and classification (largely for dermatology images uploaded by patients).

Across all of these areas, Curai is tapping state-of-the-art techniques like using synthetic data in combination with natural data to train its deep neural networks.

Curai online assessment tool.

Most of Curai’s experimentation, and much of its model training, occurs on two custom Supermicro workstations each running two NVIDIA TITAN XP GPUs. For its dermatology image classification, Curai initialized a 50-layer convolutional neural network with 23,000 images. For its diagnostic models, the company trained a model on 400,000 simulated medical cases using a CNN. Finally, it trained a class of neural network known as a multilayer perceptron using electronic health records from nearly 80,000 patients.
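
As a rough idea of what training such a classifier involves, here’s a minimal transfer-learning sketch that starts from an ImageNet-pretrained 50-layer ResNet and attaches a new classification head. The framework, class count and stand-in data are assumptions for illustration; the post doesn’t describe Curai’s actual architecture or training setup beyond the figures above.

```python
import tensorflow as tf

NUM_CLASSES = 8  # assumed number of dermatology categories; the post doesn't say

# Start from an ImageNet-pretrained 50-layer ResNet and replace the classifier head.
base = tf.keras.applications.ResNet50(include_top=False, weights="imagenet",
                                      input_shape=(224, 224, 3), pooling="avg")
base.trainable = False  # fine-tune only the new head at first

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# In practice this would be a dataset of labeled dermatology images;
# random stand-in data is used here so the sketch runs end to end.
images = tf.random.uniform((32, 224, 224, 3))
labels = tf.random.uniform((32,), maxval=NUM_CLASSES, dtype=tf.int32)
model.fit(images, labels, batch_size=8, epochs=1)
```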

Curai has occasionally turned to a combination of the Google Cloud Platform and Amazon Web Services to access larger compute capabilities, such as using a doubly fine-tuned BERT model for working out medical question similarities. This used 363,000 text training examples from its own service, with training occurring on two NVIDIA V100 Tensor Core GPUs.

Ready to Scale

There’s still much work to be done on the platform, but Amatriain believes Curai is ready to scale. The company is a premier member of NVIDIA Inception, a program that provides companies working in AI and data science with fundamental tools, expertise and marketing support to help them get to market faster.

Curai plans to finalize its go-to-market strategy over the coming months, and is currently focused on continued training of text- and image-based models, which are good fits for a chat setting. But Amatriain also made it clear that Curai has every intention of bringing sensors, wearable technology and other sources of data into its loop.

In Curai’s view, more data will yield a better solution, and a better solution is the best outcome for patients and providers alike.

“In five years, we see ourselves serving millions of people around the world, and providing them with great-quality, affordable healthcare,” said Amatriain. “We feel that we not only have the opportunity, but also the responsibility, to make this work.”

On Becoming Green: 800+ Interns Enliven Our First-Ever Virtual Internship Program

More than 800 students from over 100 universities around the world joined NVIDIA as the first class of our virtual internship program — I’m one of them, working on the corporate communications team this summer.

Shortly after the pandemic’s onset, NVIDIA decided to reinvent its internship program as a virtual one. We’ve been gaining valuable experience and having a lot of fun — all through our screens.

Fellow interns have contributed ideas to teams ranging from robotics to financial reporting. I’ve been writing stories on how cutting-edge tech improves industries from healthcare to animation, learning to work the backend of the company newsroom, and fostering close relationships with some fabulous colleagues.

And did I mention fun? Game show and cook-along events, a well-being panel series and gatherings such as book clubs were part of the programming. We also had several swag bags sent to our doorsteps, which included a customized intern company sweatshirt and an NVIDIA SHIELD TV.

Meet a few other interns who joined the NVIDIA family this year:

Amevor Aids Artists by Using Deep Learning

Christoph Amevor just graduated with a bachelor’s in computational sciences and engineering from ETH Zurich in Switzerland.

At NVIDIA, he’s working on a variety of deep learning projects including one to simplify the workflow of artists and creators using NVIDIA Omniverse, a real-time simulation platform for 3D production pipelines.

“Machine learning is such a powerful tool, and I’ve been interested in seeing how it can help us solve problems that are simply too complex to tackle with analytic math,” Amevor said.

He lives with another NVIDIA intern, which he said has made working from home feel like a mini company location.

Santos Shows Robots the Ropes

Beatriz Santos is an undergrad at California State University, East Bay, studying computer science. She’s a software intern working on the Isaac platform for robotics.

Though the pandemic has forced her to social distance from other humans, Santos has been spending a lot of time with the robot Kaya, in simulation, training it to do various tasks.

Her favorite virtual event this summer was the women’s community panel featuring female leaders at NVIDIA.

“I loved their inputs on working in a historically male-dominated field, and how they said we don’t have to change because of that,” she said. “We can just be ourselves, be girls.”

Sindelar Sharpens Websites

When researching potential summer internships, Justin Sindelar — a marketing major at San Jose State University — was immediately drawn to NVIDIA’s.

“The NVIDIA I once knew as a consumer graphics card company has grown into a multifaceted powerhouse that serves several high-tech industries and has contributed to the proliferation of AI,” he said.

Using the skills he’s learned at school and as a web designer, Sindelar has been performing UX analyses to help improve NVIDIA websites and their accessibility features.

His favorite intern activity was the game show event where he teamed up with his manager and mentors in the digital marketing group to answer trivia questions and fill in movie quotes.

Zhang Zaps Apps Into Shape

Maggie Zhang is a third-year biomedical engineering student at the University of Waterloo in Ontario. She works on the hardware infrastructure team to make software applications that improve workflow for hardware engineers.

When not coding or testing a program, she’s enjoyed online coffee chats, where she formed an especially tight bond with other Canadian interns.

She also highlighted how thankful she is for her team lead and mentor, who set up frequent one-on-one check-ins and taught her new concepts to improve code and make programs more manageable.

“They’ve taught me to be brave, experiment and learn as I go,” she said. “It’s more about what you learn than what you already know.”

For many interns, this fulfilling and challenging summer will lead to future roles at NVIDIA.

Learn more about NVIDIA’s internship program.

Starry, Starry Night: AI-Based Camera System Discovers Two New Meteor Showers

Spotting a meteor flash across the sky is a rare event for most people, unless you operate the CAMS meteor shower surveillance project, which frequently spots more than a thousand meteors in a single night and recently discovered two new showers.

CAMS, which stands for Cameras for Allsky Meteor Surveillance, was founded in 2010. Since 2017, it’s been improved by researchers using AI at the Frontier Development Lab, in partnership with NASA and the SETI Institute.

The project uses AI to identify whether a point of light moving across the night sky is a bird, plane, satellite or, in fact, a meteor. The CAMS network consists of cameras that photograph the sky at a rate of 60 frames per second.

The AI pipeline also verifies the findings to confirm the direction from which meteoroids, small pieces of comets that cause meteors, approach the Earth. The project’s AI model training is optimized on NVIDIA TITAN GPUs housed at the SETI Institute.

Each night’s meteor sightings are then mapped onto the NASA meteor shower portal, a visualization tool available to the public. All meteor showers identified since 2010 are available on the portal.

CAMS detected two new meteor showers in mid-May, called the gamma Piscis Austrinids and the sigma Phoenicids. They were added to the International Astronomical Union’s meteor data center, which has recorded 1,041 unique meteor showers to date.

Analysis found both showers to be caused by meteoroids from long-period comets, which take more than 200 years to complete an orbit around the sun.

Improving the Meteor Classification Process

Peter Jenniskens, principal investigator for CAMS, has been classifying meteors since he founded the project in 2010. Before having access to NVIDIA’s GPUs, Jenniskens would look at the images these cameras collected and judge by eye if a light curve from a surveyed object fit the categorization for a meteor.
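
The judgment Jenniskens once made by eye is the kind of call a small classifier can learn. Below is a toy sketch: a 1D convolutional network that takes a fixed-length brightness-over-time curve and outputs the probability that it came from a meteor. The sequence length, labels and random stand-in data are assumptions for illustration, not the CAMS pipeline.

```python
import numpy as np
import tensorflow as tf

# Hypothetical training data: fixed-length brightness-vs-time light curves
# (one channel) labeled 1 for meteor, 0 for bird/plane/satellite.
SEQ_LEN = 64
x = np.random.rand(512, SEQ_LEN, 1).astype("float32")
y = np.random.randint(0, 2, size=(512,))

# A small 1D convolutional classifier over the light curve.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(SEQ_LEN, 1)),
    tf.keras.layers.Conv1D(16, 5, activation="relu"),
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.Conv1D(32, 5, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # probability it is a meteor
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x, y, batch_size=32, epochs=1, verbose=0)

print("meteor probability:", float(model.predict(x[:1], verbose=0)[0, 0]))
```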

Now, the CAMS pipeline is entirely automated, from the transferring of data from an observatory to the SETI Institute’s server, to analyzing the findings and displaying them on the online portal on a nightly basis.

With the help of AI, researchers have been able to expand the project and focus on its real-world impact, said Siddha Ganju, a solutions architect at NVIDIA and member of FDL’s AI technical steering committee who worked on the CAMS project.

“The goal of studying space is to figure out the unknowns of the unknowns,” said Ganju. “We want to know what we aren’t yet able to know. Access to data, instruments and computational power is the holy trifecta available today to make discoveries that would’ve been impossible 50 years ago.”

Public excitement around the CAMS network has spurred a fourfold expansion in the number of cameras since the project began incorporating AI in 2017. With stations all over the world, from Namibia to the Netherlands, the project now hunts for hour-long meteor showers, which are visible only in a small part of the world at a given time.

Applying the Information Gathered

The AI model, upon identifying a meteor, calculates the direction it’s coming from. According to Jenniskens, meteors come in groups, called meteoroid streams, which are mostly caused by comets. A comet can approach from as far as Jupiter or Saturn, he said, and when it’s that far away, it’s impossible to see until it comes closer to Earth.

The project’s goal is to enable astronomers to look along the path of an approaching comet and provide enough time to figure out the potential impact it may have on Earth.

Mapping out all discoverable meteor showers brings us a step closer to figuring out what the entire solar system looks like, said Ganju, which is crucial to identifying the potential dangers of comets.

But this map, NASA’s meteor shower portal, isn’t just for professional use. The visualization tool was made available online with the goal of “democratizing science for citizens and fostering interest in the project,” according to Ganju. Anyone can use it to find out what meteor showers are visible each night.

Check out a timeline of notable CAMS discoveries.
