Sterling Support: SHIELD TV’s 25th Software Upgrade Now Available

With NVIDIA SHIELD TV, there’s always more to love.

Today’s software update — SHIELD Software Experience Upgrade 8.2 — is the 25th for owners of the original SHIELD TV. It’s a remarkable run, spanning more than 5 years since the first SHIELD TVs launched in May 2015.

The latest upgrade brings a host of new features and improvements for daily streamers and media enthusiasts.

Stream On

One of the fan-favorite features for the newest SHIELD TVs is the AI upscaler. It works by training a neural network model on countless images. Deployed on 2019 SHIELD TVs, the AI model can then take low-resolution video and produce incredible sharpness and enhanced details no traditional scaler can recreate. Edges look sharper. Hair looks scruffier. Landscapes pop with striking clarity.
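NVIDIA hasn't published the SHIELD upscaler itself, but the general shape of such a network is well known. Below is a minimal, hypothetical sketch of a learned upscaler in PyTorch, using the sub-pixel convolution (ESPCN-style) approach common in video super-resolution; it is an illustration of the technique, not the SHIELD model.

```python
# Minimal sketch of a learned 4x upscaler -- NOT NVIDIA's SHIELD model,
# just an illustrative ESPCN-style sub-pixel network.
import torch
import torch.nn as nn

class TinyUpscaler(nn.Module):
    def __init__(self, scale=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv2d(64, 32, kernel_size=3, padding=1), nn.ReLU(),
            # Predict scale^2 * 3 channels, then rearrange into a larger image.
            nn.Conv2d(32, 3 * scale * scale, kernel_size=3, padding=1),
        )
        self.shuffle = nn.PixelShuffle(scale)  # (B, 3*s^2, H, W) -> (B, 3, s*H, s*W)

    def forward(self, x):
        return self.shuffle(self.features(x))

frame = torch.rand(1, 3, 360, 640)       # one 360p video frame
upscaled = TinyUpscaler(scale=4)(frame)  # -> (1, 3, 1440, 2560)
print(upscaled.shape)
```

A production model would be trained on pairs of low- and high-resolution frames so the network learns to hallucinate plausible detail rather than merely interpolate.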

To see the difference between “basic upscaling” and “AI-enhanced upscaling” on SHIELD, click the image below and move the slider left and right.

Today’s upgrade extends UHD 4K upscaling to content from 360p up to 1440p. And on the 2019 SHIELD TV Pro, we’ve added support for 60fps content, so SHIELD can now use AI to upscale live sports on HDTV and HD video from YouTube to 4K. In the weeks ahead, following an update to the NVIDIA Games app in September, we’ll add 4K 60fps upscaling to GeForce NOW.

The customizable menu button on the new SHIELD remote is another popular addition to the family. It’s getting two more actions to customize.

In addition to an action assigned to a single press, users can now configure a custom action for double press and long press. With over 25 actions available, the SHIELD remote is now the most customizable remote for streamers. This powerful feature works with all SHIELD TVs and the SHIELD TV app, available on the Google Play Store and iOS App Store.
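NVIDIA hasn't documented the remote's internals, but the press-type dispatch is easy to picture. Here's a purely hypothetical sketch in Python; the action names are invented for illustration and the real SHIELD action list differs.

```python
# Hypothetical sketch of mapping remote press types to user-configured actions.
from enum import Enum, auto

class Press(Enum):
    SINGLE = auto()
    DOUBLE = auto()
    LONG = auto()

# User-configurable bindings: one action per press type.
bindings = {
    Press.SINGLE: lambda: print("Open settings"),
    Press.DOUBLE: lambda: print("Switch to the last-used app"),
    Press.LONG:   lambda: print("Put SHIELD to sleep"),
}

def on_menu_button(press: Press) -> None:
    action = bindings.get(press)
    if action:
        action()

on_menu_button(Press.DOUBLE)  # -> "Switch to the last-used app"
```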

More to Be Enthusiastic About

We take pride in SHIELD being a streaming media player enthusiasts can be, well, enthusiastic about. With our latest software upgrade, we’re improving our IR and CEC volume control support.

These upgrades include support for digital projectors and allow volume control even when SHIELD isn’t active. The update also adds IR volume control when using the SHIELD TV app, and when you’ve paired your Google Home with SHIELD. The 2019 SHIELD remote gains IR control for changing the input source on TVs, AVRs and soundbars.

Additionally, earlier SHIELD generations — both 2015 and 2017 models — now have an option to match the frame rate of displayed content.

We’ve added native SMBv3 support as well, providing faster and more secure connections between PC and SHIELD. SMBv3 now works without requiring a Plex media server.
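For readers curious what SMBv3 access looks like in practice, here's a small sketch using the third-party smbprotocol package in Python. The host name, share and credentials are placeholders.

```python
# Sketch of reading files over SMBv3 from Python, assuming the third-party
# "smbprotocol" package (pip install smbprotocol). Host, share and
# credentials below are placeholders.
import smbclient

# Negotiates the highest mutually supported dialect, SMB 3.1.1 where available.
smbclient.register_session("mediapc.local", username="user", password="secret")

for name in smbclient.listdir(r"\\mediapc.local\videos"):
    print(name)

with smbclient.open_file(r"\\mediapc.local\videos\clip.mkv", mode="rb") as f:
    header = f.read(16)
```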

With SHIELD, there’s always more to love. Download the latest software upgrade today, and check out the release notes for a complete list of all the new features and improvements.

Safe Travels: Voyage Intros Ambulance-Grade, Self-Cleaning Driverless Vehicle Powered by NVIDIA DRIVE

Self-driving cars continue to amaze passengers as a truly transformative technology. However, in the time of COVID-19, a self-cleaning car may be even more appealing.

Robotaxi startup Voyage introduced its third-generation vehicle, the G3, this week. The autonomous vehicle, a Chrysler Pacifica Hybrid minivan retrofitted with self-driving technology, is the company’s first designed to operate without a driver. It’s equipped with an ambulance-grade ultraviolet light disinfection system to keep passengers healthy.

The new vehicles use the NVIDIA DRIVE AGX Pegasus compute platform to enable the startup’s self-driving AI for robust perception and planning. The automotive-grade platform delivers safety to the core of Voyage’s autonomous fleet.

Given the enclosed space and the proximity of the driver and passengers, ride-hailing currently poses a major risk in a COVID-19 world. By pairing a disinfection system with driverless technology, Voyage is ensuring self-driving cars continue to develop as a safer, more efficient option for everyday mobility.

The G3 vehicle uses an ultraviolet-C system from automotive supplier GHSP to destroy pathogens in the vehicle between rides. UV-C works by inactivating a pathogen’s DNA, blocking its reproductive cycle. It’s been proven to be up to 99.9 percent effective and is commonly used to sterilize ambulances and hospital rooms.

The G3 is production-ready and currently in testing on public roads in San Jose, Calif., with production vehicles planned for next year.

G3 Compute Horsepower Takes Off with DRIVE AGX Pegasus

Voyage has been using the NVIDIA DRIVE AGX platform in its previous-generation vehicles to power its Shield automatic emergency braking system.

With the G3, the startup is unleashing the 320 TOPS of performance from NVIDIA DRIVE AGX Pegasus to process sensor data and run diverse and redundant deep neural networks simultaneously for driverless operation. Voyage’s onboard computers are automotive grade and safety certified, built to handle the harsh vehicle environment for safe daily operation.

NVIDIA DRIVE AGX Pegasus delivers the compute necessary for level 4 and level 5 autonomous driving.

DRIVE AGX Pegasus is built on two NVIDIA Xavier systems-on-a-chip. Xavier is the first SoC built for autonomous machines and was recently determined by global safety agency TÜV SÜD to meet all applicable requirements of ISO 26262, the industry’s strictest standard for functional safety.

Xavier’s safety architecture combined with the AI compute horsepower of the DRIVE AGX Pegasus platform delivers the robustness and performance necessary for the G3’s fully autonomous capabilities.

Moving Forward as the World Shelters in Place

As the COVID-19 pandemic continues to limit the way people live and work, transportation must adapt to keep the world moving.

In addition to the UV-C lights, Voyage has also equipped the car with HEPA-certified air filters to ensure safe airflow inside the car. The startup uses its own employees to manage and operate the fleet, enacting strict contact tracing and temperature checks to help minimize virus spread.

The Voyage G3 is equipped with a UV-C light system to disinfect the vehicle between rides.

While these measures are in place to specifically protect against the COVID-19 virus, they demonstrate the importance of an autonomous vehicle as a place where passengers can feel safe. No matter the condition of the world, autonomous transportation translates to a worry-free voyage, every time.

Real-Time Ray Tracing Realized: RTX Brings the Future of Graphics to Millions

Only a dream just a few years ago, real-time ray tracing has become the new reality in graphics because of NVIDIA RTX — and it’s just getting started.

The world’s top gaming franchises, the most popular gaming engines and scores of creative applications across industries are all onboard for real-time ray tracing.

Leading studios, design firms and industry luminaries are using real-time ray tracing to advance content creation and drive new possibilities in graphics, including virtual productions for television, interactive virtual reality experiences, and realistic digital humans and animations.

The Future Group and Riot Games used NVIDIA RTX to deliver the world’s first ray-traced broadcast. Rob Legato, the VFX supervisor for Disney’s recent remake of The Lion King, called real-time rendering on GPUs the future of creativity. And developers have adopted real-time techniques to create cinematic video game graphics, like ray-traced reflections in Battlefield V, ray-traced shadows in Shadow of the Tomb Raider and path-traced lighting in Minecraft.

These are just a few of many examples.

In early 2018, ILMxLAB, Epic Games and NVIDIA released a cinematic called Star Wars: Reflections. We revealed that the demo was rendered in real time using ray-traced reflections, area light shadows and ambient occlusion — all on a $70,000 NVIDIA DGX workstation packed with four NVIDIA Volta GPUs. This major advancement captured global attention, as real-time ray tracing at this level of fidelity had previously been possible only offline on gigantic server farms.

Fast forward to August 2018, when we announced the GeForce RTX 2080 Ti at Gamescom and showed Reflections running on just one $1,200 GeForce RTX GPU, with the NVIDIA Turing architecture’s RT Cores accelerating ray tracing performance in real time.

Today, over 50 content creation and design applications, including 20 of the leading commercial renderers, have added support for NVIDIA RTX. Real-time ray tracing is more widely available than ever, giving professionals more time to iterate on designs while capturing accurate lighting, shadows, reflections, translucence, scattering and ambient occlusion in their images.

RTX Ray Tracing Continues to Change the Game

From product and building designs to visual effects and animation, real-time ray tracing is revolutionizing content creation. RTX allows creative decisions to be made sooner, as designers no longer need to play the waiting game for renders to complete.

Image courtesy of The Future Group.

What was considered impossible just two years ago has now become a reality for anyone with an RTX GPU — NVIDIA’s Turing architecture delivers new capabilities that made real-time ray tracing achievable. Its RT Cores accelerate two of the most computationally intensive tasks: bounding volume hierarchy traversal and ray-triangle intersection testing. This frees the streaming multiprocessors to focus on programmable shading instead of spending thousands of instruction slots on each ray cast.
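For a concrete sense of what the RT Cores offload, here is the classic Möller-Trumbore ray-triangle intersection test, sketched in Python with NumPy. This is a software illustration of the math, not how the hardware implements it.

```python
# Software sketch of ray-triangle intersection (Moller-Trumbore), the test
# RT Cores accelerate in hardware. Returns hit distance t, or None on a miss.
import numpy as np

def ray_triangle(origin, direction, v0, v1, v2, eps=1e-8):
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = np.dot(e1, p)
    if abs(det) < eps:            # ray parallel to the triangle's plane
        return None
    inv_det = 1.0 / det
    s = origin - v0
    u = np.dot(s, p) * inv_det    # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(s, e1)
    v = np.dot(direction, q) * inv_det  # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = np.dot(e2, q) * inv_det   # distance along the ray to the hit point
    return t if t > eps else None

t = ray_triangle(np.array([0., 0., -1.]), np.array([0., 0., 1.]),
                 np.array([-1., -1., 0.]), np.array([1., -1., 0.]),
                 np.array([0., 1., 0.]))
print(t)  # 1.0: the ray hits the triangle one unit away
```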

Turing’s Tensor Cores enable users to leverage and enhance AI denoising for generating clean images quickly. All of these new features combined are what make real-time ray tracing possible. Creative teams can render images faster, complete more iterations and finish projects with cinematic, photorealistic graphics.

“Ray tracing, especially real-time ray tracing, brings the ground truth to an image and allows the viewer to make immediate, sometimes subconscious decisions about the information,” said Jon Peddie, president of Jon Peddie Research. “If it’s entertainment, the viewer is not distracted and taken out of the story by artifacts and nagging suspension of belief. If it’s engineering, the user knows the results are accurate and can move closer and more quickly to a solution.”

Artists can now use a single GPU for real-time ray tracing to create high-quality imagery, and they can harness the power of RTX in numerous ways. Popular game engines Unity and Unreal Engine are leveraging RTX. GPU renderers like V-Ray, Redshift and Octane are adopting OptiX for RTX acceleration. And workstation vendors like BOXX, Dell, HP, Lenovo and Supermicro offer systems capable of real-time ray tracing, putting that power in a single, flexible desktop or mobile workstation.

RTX GPUs also provide the memory required for handling massive datasets, whether it’s complex geometry or large numbers of high-resolution textures. The NVIDIA Quadro RTX 8000 GPU provides a 48GB frame buffer, and with NVLink high-speed interconnect technology doubling that capacity, users can easily manipulate massive, complex scenes without spending time constantly decimating or optimizing their datasets.

“DNEG’s virtual production department has taken on an ever increasing amount of work, particularly over recent months where practical shoots have become more difficult,” said Stephen Willey, head of technology at DNEG. “NVIDIA’s RTX and Quadro Sync solutions, coupled with Epic’s Unreal Engine, have allowed us to create far larger and more realistic real-time scenes and assets. These advances help us offer exciting new possibilities to our clients.”

More recently, NVIDIA introduced techniques to further improve ray tracing and rendering. With Deep Learning Super Sampling, users can enhance real-time rendering through AI-based super resolution. NVIDIA DLSS allows them to render fewer pixels and use AI to construct sharp, higher-resolution images.

At SIGGRAPH this month, one of our research papers dives deep into how to render dynamic direct lighting and shadows from millions of area lights in real time using a new technique called reservoir-based spatiotemporal importance resampling, or ReSTIR.
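The "reservoir" in ReSTIR refers to streaming weighted reservoir sampling: each pixel keeps just one candidate light while examining many. Here is a toy sketch of that core idea in Python; it omits the spatiotemporal reservoir reuse and careful reweighting that make the full algorithm efficient and unbiased.

```python
# Toy sketch of the weighted reservoir sampling at the core of ReSTIR:
# pick one light out of many with probability proportional to its weight,
# in a single streaming pass and constant memory.
import random

class Reservoir:
    def __init__(self):
        self.sample = None     # the one light we keep
        self.w_sum = 0.0       # running sum of weights seen so far

    def update(self, light, weight):
        self.w_sum += weight
        # Replace the kept sample with probability weight / w_sum; over the
        # whole stream each light is kept with probability weight / total.
        if random.random() < weight / self.w_sum:
            self.sample = light

r = Reservoir()
for light_id in range(1_000_000):
    importance = 1.0 / (1 + light_id % 100)   # stand-in for a distance/intensity term
    r.update(light_id, importance)
print("chosen light:", r.sample)
```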

Image courtesy of Digital Domain.

Real-Time Ray Tracing Opens New Possibilities for Graphics

RTX ray tracing is transforming design across industries today.

In gaming, RTX ray tracing creates new dynamics and environments in gameplay, allowing players to use reflective surfaces strategically. For virtual reality, RTX ray tracing brings new levels of realism and immersion to professionals in healthcare, AEC and automotive design. And in animation, ray tracing is changing the pipeline completely, enabling artists to easily manage and manipulate light geometry in real time.

Real-time ray tracing is also paving the way for virtual productions and believable digital humans in film, television and immersive experiences like VR and AR.

And with NVIDIA Omniverse — the first real-time ray tracer that can scale to any number of GPUs — creatives can simplify collaborative studio workflows with their favorite applications like Unreal Engine, Autodesk Maya and 3ds Max, Substance Painter by Adobe, Unity, SideFX Houdini, and many others. Omniverse is pushing ray tracing forward, enabling users to create visual effects, architectural visualizations and manufacturing designs with dynamic lighting and physically based materials.

Explore the Latest in Ray Tracing and Graphics

Join us at the SIGGRAPH virtual conference to learn more about the latest advances in graphics, and get an exclusive look at some of our most exciting work.

Be part of the NVIDIA community and show us what you can create by participating in our real-time ray tracing contest. The selected winner will receive the latest Quadro RTX graphics card and a free pass to discover what’s new in graphics at NVIDIA GTC, October 5-9.

AI in Action: NVIDIA Showcases New Research, Enhanced Tools for Creators at SIGGRAPH

The future of graphics is here, and AI is leading the way.

At the SIGGRAPH 2020 virtual conference, NVIDIA is showcasing advanced AI technologies that allow artists to elevate storytelling and create stunning, photorealistic environments like never before.

NVIDIA tools and software are behind the many AI-enhanced features being added to creative tools and applications, powering denoising capabilities, accelerating 8K editing workflows, enhancing material creation and more.

Get an exclusive look at some of our most exciting work, including the new NanoVDB library that boosts workflows for visual effects. And check out our groundbreaking research, AI-powered demos, and speaking sessions to explore the newest possibilities in real-time ray tracing and AI.

NVIDIA Extends OpenVDB with New NanoVDB

OpenVDB is the industry-standard library used by VFX studios for simulating water, fire, smoke, clouds and other effects. As part of its collaborative effort to advance open source software in the motion picture and media industries, the Academy Software Foundation (ASWF) recently announced GPU acceleration in OpenVDB with the new NanoVDB for faster performance and easier development.

OpenVDB provides a hierarchical data structure and related functions to help with calculating volumetric effects in graphics applications. NanoVDB adds GPU support for the native VDB data structure, which is the foundation of OpenVDB.

With NanoVDB, users can leverage GPUs to accelerate workflows such as ray tracing, filtering and collision detection while maintaining compatibility with OpenVDB. NanoVDB serves as a bridge between an existing OpenVDB workflow and GPU-accelerated rendering or simulation involving static sparse volumes.
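NanoVDB itself is a C++/CUDA library, but the sparse grid it mirrors is easy to see from OpenVDB's Python binding. Below is a small sketch, assuming the pyopenvdb module that ships with OpenVDB builds; the exact function signatures are from that binding and may vary between releases.

```python
# Sketch of the sparse VDB data structure NanoVDB mirrors on the GPU, using
# OpenVDB's Python binding (assumes the pyopenvdb module is installed).
import pyopenvdb as vdb

# A narrow-band level-set sphere: values are stored only near the surface,
# which is what makes VDB grids "sparse".
grid = vdb.createLevelSetSphere(radius=50.0, center=(0, 0, 0), voxelSize=1.0)

acc = grid.getAccessor()
print(acc.getValue((0, 0, 50)))   # ~0.0: a voxel on the sphere's surface
print(grid.activeVoxelCount())    # only narrow-band voxels are active

# NanoVDB converts such a grid to a flat, pointer-free layout that a GPU
# kernel can traverse directly for ray tracing or collision queries.
```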

Hear what some partners have been saying about NanoVDB.

“With NanoVDB being added to the upcoming Houdini 18.5 release, we’ve moved the static collisions of our Vellum Solver and the sourcing of our Pyro Solver over to the GPU, giving artists the performance and more fluid experience they crave,” said Jeff Lait, senior mathematician at SideFX.

“ILM has been an early adopter of GPU technology in simulating and rendering dense volumes,” said Dan Bailey, senior software engineer at ILM. “We are excited that the ASWF is going to be the custodian of NanoVDB, and now that it offers an efficient sparse volume implementation on the GPU, we can’t wait to try this out in production.”

“After spending just a few days integrating NanoVDB into an unoptimized ray marching prototype of our next generation renderer, it still delivered an order of magnitude improvement on the GPU versus our current CPU-based RenderMan/RIS OpenVDB reference,” said Julian Fong, principal software engineer at Pixar. “We anticipate that NanoVDB will be part of the GPU-acceleration pipeline in our next generation multi-device renderer, RenderMan XPU.”

Learn more about NanoVDB.

Research Takes the Spotlight

During the SIGGRAPH conference, NVIDIA Research and collaborators will share advanced techniques in real-time ray tracing, along with other breakthroughs in graphics and design.

Learn about a new algorithm that allows artists to efficiently render direct lighting from millions of dynamic light sources. Explore a new world of color through nonlinear color triads, which are an extension of gradients that enable artists to enhance image editing and compression.

And hear from leading experts across the industry as they share insights about the future of design:

Check out all the groundbreaking research and presentations from NVIDIA.

Eye-Catching Demos You Can’t Miss

This year at SIGGRAPH, NVIDIA demos will showcase how AI-enhanced tools and GPU-powered simulations are leading a new era of content creation:

  • Synthesized high-resolution images with StyleGAN2: Developed by NVIDIA Research, StyleGAN2 uses transfer learning to produce portraits in a variety of painting styles.
  • Mars lander simulation: A high-resolution simulation of retropropulsion is used by NASA scientists to plan how to control the speed and orientation of vehicles under different landing conditions.
  • AI denoising in Blender: RTX AI features like the OptiX denoiser enhance rendering to deliver an interactive ray-tracing experience.
  • 8K video editing on RTX Studio laptops: GPU acceleration powers advanced video editing and visual effects, including AI-based features in DaVinci Resolve, helping editors produce high-quality video and iterate faster.

Check out all the NVIDIA demos and sessions at SIGGRAPH.

More Exciting Graphics News to Come at GTC

The breakthroughs and innovation don’t stop here. Register now to explore more of the latest NVIDIA tools and technologies at GTC, October 5-9.

Need Healthcare? AI Startup Curai Has an App for That

As a child, Neal Khosla became engrossed by the Oakland Athletics baseball team’s “Moneyball” approach of using data analytics to uncover the value and potential of the sport’s players. A few years ago, the young engineer began pursuing similar techniques to improve medical decision-making.

It wasn’t long after Khosla met Xavier Amatriain, who was looking to apply his engineering skills to a higher mission, that the pair founded Curai. The three-year-old startup, based in Palo Alto, Calif., is using AI to improve the entire process of providing healthcare.

The scope of their challenge — transforming how medical care is accessed and delivered — is daunting. But even modest success could bring huge gains to people’s well-being when one considers that more than half of the world’s population has no access to essential health services, and nearly half of the 400,000 deaths a year attributed to incorrect diagnoses are considered preventable.

“When we think about a world where 8 billion people will need access to high-quality primary care, it’s clear to us that our current system won’t work,” said Khosla, Curai’s CEO. “The accessibility of Google is the level of accessibility we need.”

Curai’s efforts to lower the barrier to entry for healthcare for billions of people center on applying GPU-powered AI to connect patients, providers and health coaches via a chat-based application. Behind the scenes, the app is designed to effectively connect all of the healthcare dots, from understanding symptoms to making diagnoses to determining treatments.

“Healthcare as it is now does not scale. There are not enough doctors in the world, and the situation is not going to get better,” Khosla said. “Our hypothesis is that we can not only scale, but also improve the quality of medicine by automating many parts of the process.”

COVID-19 Fans the Flames

The COVID-19 pandemic has only made Curai’s mission more urgent. With healthcare in the spotlight, there is more momentum than ever to bring more efficiency, accessibility and scale to the industry.

Curai’s platform uses AI and machine learning to automate every part of the process. It’s fronted by the company’s chat-based application, which delivers whatever the user needs.

Patients can use it to input information about their conditions, access their medical profiles, chat with providers 24/7, and see where the process stands.

For providers, it puts a next-generation electronic health record system at their fingertips, where they can access all relevant information about a patient’s care. The app also supports providers by offering diagnostic and treatment suggestions based on Curai’s ever-improving algorithms.

“Our approach is to meticulously and carefully log and record data about what the practitioners are doing so we can train models that learn from them,” said Amatriain, chief technology officer at Curai. “We make sure that everything we implement in our platform is designed to improve our ‘learning loop’ – our ability to generate training data that improves our algorithms over time.”

Curai’s main areas of AI focus have been natural language processing (for extracting data from medical conversations), medical reasoning (for providing diagnosis and treatment recommendations) and image processing and classification (largely for dermatology images uploaded by patients).

Across all of these areas, Curai is tapping state-of-the-art techniques like using synthetic data in combination with natural data to train its deep neural networks.

Curai online assessment tool.

Most of Curai’s experimentation, and much of its model training, occurs on two custom Supermicro workstations, each running two NVIDIA TITAN Xp GPUs. For dermatology image classification, Curai trained a 50-layer convolutional neural network on 23,000 images. For its diagnostic models, the company trained a CNN on 400,000 simulated medical cases. Finally, it trained a class of neural network known as a multilayer perceptron using electronic health records from nearly 80,000 patients.
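Curai hasn't published its training code; as a rough illustration of the 50-layer CNN setup described above, here is a minimal PyTorch sketch that fine-tunes a ResNet-50 on an image-folder dataset. The data path and hyperparameters are invented for the example.

```python
# Minimal sketch of a 50-layer CNN for dermatology image classification,
# in the spirit of the setup described (not Curai's actual pipeline).
# Assumes images arranged in one folder per class under data/derm/.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("data/derm", transform=tfm)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet50(pretrained=True)                  # 50-layer CNN
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))

opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()
model.train()
for images, labels in loader:
    opt.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    opt.step()
```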

Curai has occasionally turned to a combination of Google Cloud Platform and Amazon Web Services for larger compute jobs, such as a doubly fine-tuned BERT model for assessing medical question similarity. That model used 363,000 text training examples from its own service, with training occurring on two NVIDIA V100 Tensor Core GPUs.
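To make the question-similarity task concrete, here is a sketch using the open-source transformers library with a generic BERT checkpoint. Curai's doubly fine-tuned model isn't public, so the model name and the mean-pooling choice here are assumptions.

```python
# Sketch of scoring question similarity with a BERT encoder; uses the
# generic bert-base-uncased checkpoint, not Curai's fine-tuned model.
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
enc = AutoModel.from_pretrained("bert-base-uncased")

def embed(text):
    batch = tok(text, return_tensors="pt")
    with torch.no_grad():
        hidden = enc(**batch).last_hidden_state     # (1, seq_len, 768)
    return hidden.mean(dim=1).squeeze(0)            # mean-pooled sentence vector

a = embed("Is a fever of 101F dangerous for a toddler?")
b = embed("Should I worry about my 2-year-old's 101-degree temperature?")
print(torch.cosine_similarity(a, b, dim=0).item())  # higher = more similar
```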

Ready to Scale

There’s still much work to be done on the platform, but Amatriain believes Curai is ready to scale. The company is a premier member of NVIDIA Inception, a program that provides companies working in AI and data science with fundamental tools, expertise and marketing support to help them get to market faster.

Curai plans to finalize its go-to-market strategy over the coming months, and is currently focused on continued training of text- and image-based models, which are good fits for a chat setting. But Amatriain also made it clear that Curai has every intention of bringing sensors, wearable technology and other sources of data into its loop.

In Curai’s view, more data will yield a better solution, and a better solution is the best outcome for patients and providers alike.

“In five years, we see ourselves serving millions of people around the world, and providing them with great-quality, affordable healthcare,” said Amatriain. “We feel that we not only have the opportunity, but also the responsibility, to make this work.”

On Becoming Green: 800+ Interns Enliven Our First-Ever Virtual Internship Program

More than 800 students from over 100 universities around the world joined NVIDIA as the first class of our virtual internship program — I’m one of them, working on the corporate communications team this summer.

Shortly after the pandemic’s onset, NVIDIA decided to reinvent its internship program as a virtual one. We’ve been gaining valuable experience and having a lot of fun — all through our screens.

Fellow interns have contributed ideas to teams ranging from robotics to financial reporting. I’ve been writing stories on how cutting-edge tech improves industries from healthcare to animation, learning to work the backend of the company newsroom, and fostering close relationships with some fabulous colleagues.

And did I mention fun? Game show and cook-along events, a well-being panel series and gatherings such as book clubs were part of the programming. We also had several swag bags sent to our doorsteps, which included a customized intern company sweatshirt and an NVIDIA SHIELD TV.

Meet a few other interns who joined the NVIDIA family this year:

Amevor Aids Artists by Using Deep Learning

Christoph Amevor just graduated with a bachelor’s in computational sciences and engineering from ETH Zurich in Switzerland.

At NVIDIA, he’s working on a variety of deep learning projects including one to simplify the workflow of artists and creators using NVIDIA Omniverse, a real-time simulation platform for 3D production pipelines.

“Machine learning is such a powerful tool, and I’ve been interested in seeing how it can help us solve problems that are simply too complex to tackle with analytic math,” Amevor said.

He lives with another NVIDIA intern, which he said has made working from home feel like a mini company location.

Santos Shows Robots the Ropes

Beatriz Santos is an undergrad at California State University, East Bay, studying computer science. She’s a software intern working on the Isaac platform for robotics.

Though the pandemic has forced her to social distance from other humans, Santos has been spending a lot of time with the robot Kaya, in simulation, training it to do various tasks.

Her favorite virtual event this summer was the women’s community panel featuring female leaders at NVIDIA.

“I loved their input on working in a historically male-dominated field, and how they said we don’t have to change because of that,” she said. “We can just be ourselves, be girls.”

Sindelar Sharpens Websites

When researching potential summer internships, Justin Sindelar — a marketing major at San Jose State University — was immediately drawn to NVIDIA’s.

“The NVIDIA I once knew as a consumer graphics card company has grown into a multifaceted powerhouse that serves several high-tech industries and has contributed to the proliferation of AI,” he said.

Using the skills he’s learned at school and as a web designer, Sindelar has been performing UX analyses to help improve NVIDIA websites and their accessibility features.

His favorite intern activity was the game show event where he teamed up with his manager and mentors in the digital marketing group to answer trivia questions and fill in movie quotes.

Zhang Zaps Apps Into Shape

Maggie Zhang is a third-year biomedical engineering student at the University of Waterloo in Ontario. She works on the hardware infrastructure team to make software applications that improve workflow for hardware engineers.

When not coding or testing a program, she’s enjoyed online coffee chats, where she formed an especially tight bond with other Canadian interns.

She also highlighted how thankful she is for her team lead and mentor, who set up frequent one-on-one check-ins and taught her new concepts to improve code and make programs more manageable.

“They’ve taught me to be brave, experiment and learn as I go,” she said. “It’s more about what you learn than what you already know.”

For many interns, this fulfilling and challenging summer will lead to future roles at NVIDIA.

Learn more about NVIDIA’s internship program.

Starry, Starry Night: AI-Based Camera System Discovers Two New Meteor Showers

Spotting a meteor flash across the sky is a rare event for most people. Not so for the operators of the CAMS meteor shower surveillance project, who frequently spot more than a thousand in a single night and recently discovered two new showers.

CAMS, which stands for Cameras for Allsky Meteor Surveillance, was founded in 2010. Since 2017, it’s been improved by researchers using AI at the Frontier Development Lab, in partnership with NASA and the SETI Institute.

The project uses AI to identify whether a point of light moving in the night sky is a bird, plane, satellite or, in fact, a meteor. The CAMS network consists of cameras that photograph the sky at a rate of 60 frames per second.

The AI pipeline also verifies the findings to confirm the direction from which meteoroids, small pieces of comets that cause meteors, approach the Earth. The project’s AI model training is optimized on NVIDIA TITAN GPUs housed at the SETI Institute.

Each night’s meteor sightings are then mapped onto the NASA meteor shower portal, a visualization tool available to the public. All meteor showers identified since 2010 are available on the portal.

CAMS detected two new meteor showers in mid-May, called the gamma Piscis Austrinids and the sigma Phoenicids. They were added to the International Astronomical Union’s meteor data center, which has recorded 1,041 unique meteor showers to date.

Analysis found both showers to be caused by meteoroids from long-period comets, which take more than 200 years to complete an orbit around the sun.

Improving the Meteor Classification Process

Peter Jenniskens, principal investigator for CAMS, has been classifying meteors since he founded the project in 2010. Before having access to NVIDIA GPUs, Jenniskens would look at the images these cameras collected and judge by eye whether a light curve from a surveyed object fit the profile of a meteor.

Now, the CAMS pipeline is entirely automated, from the transferring of data from an observatory to the SETI Institute’s server, to analyzing the findings and displaying them on the online portal on a nightly basis.
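As a toy illustration of this kind of light-curve classification (not the CAMS production model; the architecture and class list are invented), here is a small 1D convolutional classifier in PyTorch over a two-second brightness curve sampled at 60 fps:

```python
# Toy sketch of classifying tracked sky objects from their light curves
# (brightness over time), the kind of signal CAMS feeds its classifier.
import torch
import torch.nn as nn

CLASSES = ["meteor", "plane", "satellite", "bird"]

model = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
    nn.MaxPool1d(2),
    nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1),   # pool over time so any curve length works
    nn.Flatten(),
    nn.Linear(32, len(CLASSES)),
)

curve = torch.rand(1, 1, 120)     # 2 seconds of brightness at 60 fps
logits = model(curve)
print(CLASSES[logits.argmax(dim=1).item()])  # untrained, so a random guess
```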

With the help of AI, researchers have been able to expand the project and focus on its real-world impact, said Siddha Ganju, a solutions architect at NVIDIA and member of FDL’s AI technical steering committee who worked on the CAMS project.

“The goal of studying space is to figure out the unknowns of the unknowns,” said Ganju. “We want to know what we aren’t yet able to know. Access to data, instruments and computational power is the holy trifecta available today to make discoveries that would’ve been impossible 50 years ago.”

Public excitement around the CAMS network has spurred it to expand its camera count fourfold since the project began incorporating AI in 2017. With stations all over the world, from Namibia to the Netherlands, the project now hunts for hour-long meteor showers, which are visible only in a small part of the world at a given time.

Applying the Information Gathered

The AI model, upon identifying a meteor, calculates the direction it’s coming from. According to Jenniskens, meteors come in groups, called meteoroid streams, which are mostly caused by comets. A comet can approach from as far as Jupiter or Saturn, he said, and when it’s that far away, it’s impossible to see until it comes closer to Earth.

The project’s goal is to enable astronomers to look along the path of an approaching comet and provide enough time to figure out the potential impact it may have on Earth.

Mapping out all discoverable meteor showers brings us a step closer to figuring out what the entire solar system looks like, said Ganju, which is crucial to identifying the potential dangers of comets.

But this map, NASA’s meteor shower portal, isn’t just for professional use. The visualization tool was made available online with the goal of “democratizing science for citizens and fostering interest in the project,” according to Ganju. Anyone can use it to find out what meteor showers are visible each night.

Check out a timeline of notable CAMS discoveries.

There’s a Code for That: Hugging Face’s Sam Shleifer Talks Natural Language Processing

Hugging Face is more than just an adorable emoji — it’s a company that’s demystifying AI by transforming the latest developments in deep learning into usable code for businesses and researchers.

Research engineer Sam Shleifer spoke with AI Podcast host Noah Kravitz about Hugging Face NLP technology, which is in use at over 1,000 companies, including Apple, Bing and Grammarly, across fields ranging from finance to medical technology.

Hugging Face’s models serve a variety of purposes for its customers, including autocompletion, customer service automation and translation. Its popular web application, Write with Transformer, can even take half-formed thoughts and suggest options for completion.
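This kind of autocompletion is easy to try with Hugging Face's open-source transformers library. A minimal sketch using the GPT-2 checkpoint:

```python
# Completing a half-formed thought with the transformers library, the same
# style of autocompletion Write with Transformer demonstrates.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
suggestions = generator(
    "The best way to learn natural language processing is",
    max_length=30,
    num_return_sequences=3,   # offer several possible completions
)
for s in suggestions:
    print(s["generated_text"])
```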

Shleifer is currently developing models that are accessible to everyone, whether they’re proficient coders or not.

In the next few years, Shleifer envisions the continued growth of smaller NLP models that power a wave of chat apps with state-of-the-art translation capabilities.

Key Points From This Episode:

  • Hugging Face first launched an original chatbot app, before moving into natural language processing models. The move was well-received, and last year the company announced a $15 million funding round.
  • The company is a member of NVIDIA Inception, a virtual accelerator that Shleifer credits with significantly speeding up its experiments.
  • Hugging Face has released over 1,000 models trained with unsupervised learning and the Open Parallel Corpus project, pioneered by the University of Helsinki. These models are capable of machine translation in a huge variety of languages, even for low-resource languages with minimal training data.
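These translation checkpoints are published on the Hugging Face model hub under the Helsinki-NLP organization. A minimal sketch of invoking one of them (English to Finnish; the model name is one of the publicly listed OPUS-MT checkpoints):

```python
# Sketch of machine translation with one of the Helsinki-NLP/OPUS-MT
# checkpoints hosted on the Hugging Face model hub.
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-fi")
print(translator("The weather is lovely today.")[0]["translation_text"])
```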

Tweetables:

“We’re trying to make state-of-the-art NLP accessible to everyone who wants to use it, whether they can code or not code.” — Sam Shleifer [1:44]

“Our research is targeted at this NLP accessibility mission — and NLP isn’t really accessible when models can’t fit on a single GPU.” — Sam Shleifer [10:38]

You Might Also Like

Sarcasm Detector Uses AI to Understand People at Their Funniest, Meanest

Dr. Pushpak Bhattacharyya’s work is giving computers the ability to understand one of humanity’s most challenging, and amusing, modes of communication. Bhattacharyya, director of IIT Patna, and a professor at the Computer Science and Engineering Department at IIT Bombay, has spent the past few years using GPU-powered deep learning to detect sarcasm.

Speaking the Same Language: How Oracle’s Conversational AI Serves Customers

At Oracle, customer service chatbots use conversational AI to respond to users with more speed and complexity. Suhas Uliyar, vice president of bots, AI and mobile product management at Oracle, talks about how the newest wave of conversational AI can keep up with the nuances of human conversation.

How Syed Ahmed Taught AI to Translate Sign Language

Syed Ahmed, a research assistant at the National Technical Institute for the Deaf, is directing the power of AI toward another form of communication: American Sign Language. Ahmed has set up a deep learning model that translates ASL into English.

Tune in to the AI Podcast

Get the AI Podcast through iTunes, Google Podcasts, Google Play, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn. If your favorite isn’t listed here, drop us a note.

Make the AI Podcast Better

Have a few minutes to spare? Fill out this listener survey. Your answers will help us make a better podcast.

2 Million Registered Developers, Countless Breakthroughs

Everyone has problems.

Whether they’re tackling challenges at the cutting edge of physics, trying to tame a worldwide pandemic, or sorting their child’s Lego collection, innovators join NVIDIA’s developer program to help them solve their most challenging problems.

With the number of registered NVIDIA developers having just hit 2 million, NVIDIA developers are pursuing more breakthroughs than ever.

Their ranks are growing faster every year: it took 13 years to reach 1 million registered developers, and less than two more to reach 2 million.

Most recently, teams at the U.S. National Institutes of Health, Scripps Research Institute and Oak Ridge National Laboratory have been among the NVIDIA developers at the forefront of efforts to combat COVID-19.

Every Country, Every Field

No surprise. Whether they’re software programmers, data scientists or devops engineers, developers are problem solvers.

They write, debug and optimize code, often taking a set of software building blocks — frameworks, application programming interfaces and other tools — and putting them to work to do something new.

These developers include business and academic leaders from every region in the world.

In China, Alibaba and Baidu are among the most active GPU developers. In North America, those names include Microsoft, Amazon and Google. In Japan, it’s Sony, Hitachi and Panasonic. In Europe, they include Bosch, Daimler and Siemens.

All the top technical universities are represented, including Caltech, MIT, Oxford, Cambridge, Stanford, Tsinghua University, the University of Tokyo, and IIT campuses throughout India.

Look beyond the big names — there are too many to drop here — and you’ll find tens of thousands of entrepreneurs, hobbyists and enthusiasts.

Developers are signing up for our developer program to put NVIDIA accelerated computing tools to work across fields such as scientific and high performance computing, graphics and professional visualization, robotics, AI and data science, networking, and autonomous vehicles.

Developers are trained and equipped for success through our GTC conferences, online and in-person tutorials, our Deep Learning Institute training, and technical blogs. We provide them with software development kits such as CUDA, cuDNN, TensorRT and OptiX.

Registered developers account for 100,000 downloads a month, thousands participate each month in DLI training sessions, and thousands more engage in our online forums or attend conferences and webinars.

NVIDIA’s developer program, however, is just a piece of a much bigger developer story. There are now more than a billion CUDA GPUs in the world — each capable of running CUDA-accelerated software — giving developers, hackers and makers a vast installed base to work with.
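Any of those GPUs can run a CUDA kernel like the following minimal sketch, written here with the third-party Numba compiler for CUDA Python:

```python
# Minimal CUDA kernel from Python, using the third-party Numba compiler
# (pip install numba); runs on any CUDA-capable NVIDIA GPU.
import numpy as np
from numba import cuda

@cuda.jit
def vector_add(a, b, out):
    i = cuda.grid(1)              # this thread's global index
    if i < out.size:              # guard threads past the end of the array
        out[i] = a[i] + b[i]

n = 1_000_000
a = np.random.rand(n).astype(np.float32)
b = np.random.rand(n).astype(np.float32)
out = np.zeros_like(a)

threads = 256
blocks = (n + threads - 1) // threads
vector_add[blocks, threads](a, b, out)   # Numba copies the arrays to the GPU
print(out[:3], (a + b)[:3])              # results match the CPU computation
```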

As a result, CUDA downloads, which are free and require no registration, far outnumber registered developers. On average, 39,000 developers sign up for memberships each month, and 438,000 download CUDA each month.

That’s an awful lot of problem solvers.

Breakthroughs in Science and Research

The ranks of those who depend on such problem solvers include the team who won the 2017 Nobel Prize in Chemistry — Jacques Dubochet, Joachim Frank and Richard Henderson — for their contribution to cryogenic electron microscopy.

They also include the team that won the 2017 Nobel Prize in Physics — Rainer Weiss, Barry Barish and Kip Thorne — for their work detecting gravitational waves.

More scientific breakthroughs are coming, as developers attack new HPC problems and, increasingly, deep learning.

William Tang, principal research physicist at the Princeton Plasma Physics Laboratory — one of the world’s foremost experts on fusion energy — leads a team using deep learning and HPC to advance the quest for cheap, clean energy.

Michael Kirk and Raphael Attie, scientists at NASA’s Goddard Space Flight Center, are among the many active GPU developers at NASA, relying on Quadro RTX data science workstations to analyze the vast quantities of data streaming in from satellites monitoring the sun.

And at UC Berkeley, astrophysics Ph.D. student Gerry Zhang uses GPU-accelerated deep learning to analyze signals from space for signs of intelligent extraterrestrial civilizations.

Top Companies

Outside of research and academia, developers at the world’s top companies are tackling problems faced by every one of the world’s industries.

At Intuit, Chief Data Officer Ashok Srivastava leads a team using GPU-accelerated machine learning to help consumers with taxes and help small businesses through the financial effects of COVID-19.

At health insurer Anthem, Chief Digital Officer Rajeev Ronanki uses GPU-accelerated AI to help patients personalize and better understand their healthcare information.

Arne Stoschek, head of autonomous systems at Acubed, the Silicon Valley-based advanced products and partnerships outpost of Airbus Group, is developing self-piloted air taxis powered by GPU-accelerated AI.

New Problems, New Businesses: Entrepreneurs Swell Developer Ranks

Other developers — many supported by the NVIDIA Inception program — work at startups building businesses that solve new kinds of problems.

Looking to invest in a genuine pair of vintage Air Jordans? Michael Hall, director of data at GOAT Group, uses GPU-accelerated AI to help the startup connect sneaker enthusiasts with Air Jordans, Yeezys and a variety of old-school kicks that they can be confident are authentic.

Don’t know what to wear? Brad Klingenberg, chief algorithms officer at fashion ecommerce startup Stitch Fix, leads a team that uses GPU-accelerated AI to help us all dress better.

And Benjamin Schmidt, at RoadBotics, offers what might be the ultimate case study in how developers are solving concrete problems: his startup helps cities find and fix potholes.

Entrepreneurs are also supported by NVIDIA’s Inception program, which includes more than 6,000 startups in industries ranging from agriculture to healthcare to logistics to manufacturing.

Of course, just because something’s a problem, doesn’t mean you can’t love solving it.

Love beer? Eric Boucher, a home brewing enthusiast, is using AI to invent new kinds of suds.

Love a critter-free lawn? Robert Bond has trained a system that can detect cats and gently shoo them from his grass by turning on his sprinklers, to the amazement and delight of his grandchildren.

Francisco “Paco” Garcia has even trained an AI to help sort out his children’s Lego pile.

Most telling: stories from developers working at the cutting edge of the arts.

Pierre Barreau has created an AI, named AIVA, which uses mathematical models based on the work of great composers to create new music.

And Raiders of the Lost Art — a collaboration between Anthony Bourached and George Cann, a pair of Ph.D. candidates at University College London — has used neural style transfer techniques to tease out hidden artwork in a Leonardo da Vinci painting.

Wherever you go, follow the computing power and you’ll find developers delivering breakthroughs.

How big is the opportunity for problem solvers like these? However many problems there are in the world.

Want more stories like these? No problem. Over the months to come, we’ll be bringing as many to you as we can. 

Wherever You Go with Chromebook, GeForce NOW Lets You Bring Your Games with You 

Chromebooks, like GeForce NOW, are ready when you are.

With today’s beta launch on ChromeOS, Chromebooks now wield the power to play PC games using GeForce NOW.

Chromebook users join the millions on PC, Mac, SHIELD and Android mobile devices already playing their favorite games on our cloud gaming service with GeForce performance.

Getting started is simple. Head to play.geforcenow.com and log in with your GeForce NOW account. Signing up is easy: just choose either a paid Founders membership or a free account.

Right now is a great time to join. We just launched a six-month Founders membership that includes a Hyper Scape Season One Battle Pass token and exclusive Hyper Scape in-game content for $24.95. That’s a $64.94 value.

Once logged in, you’re only a couple of clicks away from streaming a massive catalog of games. For the best experience, you’ll want to make those clicks with a USB mouse.

Distance Learning by Day, Distance Gaming by Night

Some students are heading back to school. Others are distance learning from home. However they’re learning, more students than ever rely on Chromebooks.

That’s because Chromebooks are great computers for studying. They’re fast, simple and secure devices that help you stay productive and connected.

Now, those same Chromebooks transform, instantly, into GeForce-powered distance gaming rigs, thanks to GeForce NOW.

Your Games on All Your Devices

Millions of GeForce NOW members play with and against their friends — no matter which platform they’re streaming on, whether that’s PC, Mac, Android or, now, Chromebooks.

That’s because when you stream games using GeForce NOW, you’re playing the PC version from digital stores like Steam, Epic Games Store and Ubisoft Uplay.

This is great for developers, who can bring their games to the cloud at launch, without adding development cycles.

And it’s great for the millions of GeForce NOW members. They’re tapping into an existing ecosystem anytime they stream one of more than 650 games instantly. That includes over 70 of the most-played free-to-play games.

When games like CD Projekt Red’s Cyberpunk 2077 come out later this year, members will be able to play them on their Chromebooks the same day, streamed from GeForce NOW servers.

Anywhere You Go

Chromebooks, of course, are lightweight devices that go where you do. From home to work to school. Or from your bedroom to the living room.

GeForce NOW is the perfect Chromebook companion. Simply plug in a mouse and go. Our beta release gives Chromebook owners the power to play their favorite PC games.

New to GeForce NOW? Check out our GeForce NOW Quick Start Guide to get gaming instantly.

Take game progress or character level-ups from a desktop to a phone and then onto Chromebook. You’re playing the games you own from your digital game store accounts. So your progress goes with you.

More PC Gaming Features Heading to the Cloud 

The heart of GeForce NOW is PC gaming. We continue to tap into the PC ecosystem to bring more PC features to the cloud.

PC gamers are intimately familiar with Steam. Many have massive libraries from the popular PC game store. To support them, we just launched Steam Game Sync so they can sync games from their Steam library with their library in GeForce NOW. It’s quickly become one of our most popular features for members playing on PC and Mac.

Soon, Chromebook owners will be able to take advantage of the feature, too.

Over the past few months, we’ve added two GeForce Experience features: Highlights delivers automatic video capture so you can share your best moments, and Freestyle lets gamers customize a game’s look. In the weeks ahead, we’ll add support for Ansel — a powerful in-game camera that lets gamers capture professional-grade screenshots. These features are currently only available on PC and Mac. Look for them to come to Chromebooks in future updates.

More games. More platforms. Legendary GeForce performance. And now on Chromebooks. That’s the power to play that only GeForce NOW can deliver.
