AI, Computational Advances Ring In New Era for Healthcare

We’re at a pivotal moment to unlock a new, AI-accelerated era of discovery and medicine, says Kimberly Powell, NVIDIA’s vice president of healthcare.

Speaking today at the J.P. Morgan Healthcare conference, held virtually, Powell outlined how AI and accelerated computing are enabling scientists to take advantage of the boom in biomedical data to power faster research breakthroughs and better patient care.

Understanding disease and discovering therapies is our greatest human endeavor, she said — and the trillion-dollar drug discovery industry illustrates just how complex a challenge it is.

How AI Can Drive Down Drug Discovery Costs

The typical drug discovery process takes about a decade, costs $2 billion and suffers a 90 percent failure rate during clinical development. But the rise of digital data in healthcare in recent years presents an opportunity to improve those statistics with AI.

“We can produce today more biomedical data in about three months than the entire 300-year history of healthcare,” she said. “And so this is now becoming a problem that no human really can synthesize that level of data, and we need to call upon artificial intelligence.”  

Powell called AI “the most powerful technology force of our time. It’s software that writes software that no humans can.”

But AI works best when it’s domain specific, combining data and algorithms tailored to a particular field like radiology, pathology or patient monitoring. The NVIDIA Clara application framework bridges this gap by giving researchers and clinicians the tools for GPU-accelerated AI in medical imaging, genomics, drug discovery and smart hospitals.

Downloads of NVIDIA Clara grew 5x last year, Powell shared, with developers taking up our new platforms for conversational AI and federated learning.

Healthcare Ecosystem Rallies Around AI

She noted that amid the COVID-19 pandemic, momentum around AI for healthcare has accelerated, with startups estimated to have raised well over $5 billion in 2020. More than 1,000 healthcare startups are in the NVIDIA Inception accelerator program, up 4x since 2017. And more than 20,000 AI healthcare papers were added to PubMed last year, reflecting exponential growth over the past decade.

Leading research institutions like the University of California, San Francisco, are using NVIDIA GPUs to power their work in cryo-electron microscopy, a technique used to study the structure of molecules — such as the spike proteins on the COVID-19 virus — and accelerate drug and vaccine discovery.

And pharmaceutical companies, including GlaxoSmithKline, and major healthcare systems, like the U.K.’s National Health Service, will harness the Cambridge-1 supercomputer — an NVIDIA DGX SuperPOD system and the U.K.’s fastest AI supercomputer — to solve large-scale problems and improve patient care, diagnosis and delivery of critical medicines and vaccines.

Software-Defined Instruments Link AI Innovation and Medical Practice

Powell sees software-defined instruments — devices that can be regularly updated to reflect the latest scientific understanding and AI algorithms — as key to connecting the latest research breakthroughs with the practice of medicine.

“Artificial intelligence, like the practice of medicine, is constantly learning. We want to learn from the data, we want to learn from the changing environment,” Powell said.

Making medical instruments software-defined, she said, not only makes tools like smart cameras for patient monitoring and AI-guided ultrasound systems possible in the first place, but also lets them retain their value and improve over time.

U.K.-based sequencing company Oxford Nanopore Technologies is a leader in software-defined instruments, deploying a new generation of DNA sequencing technology across an electronics-based platform. Its nanopore sequencing devices have been used in more than 50 countries to sequence and track new variants of the virus that causes COVID-19, as well as for large-scale genomic analyses to study the biology of cancer.

The company uses NVIDIA GPUs to power several of its instruments, from the handheld MinION Mk1C device to its ultra-high throughput PromethION, which can produce more than three human genomes’ worth of sequence data in a single run. To power the next generation of PromethION, Oxford Nanopore is adopting NVIDIA DGX Station, enabling its real-time sequencing technology to pair with rapid and highly accurate genomic analyses.

For years, the company has been using AI to improve the accuracy of basecalling, the process of determining the order of a molecule’s DNA bases from tiny electrical signals that pass through a nanoscale hole, or nanopore.

This technology “truly touches on the entire practice of medicine,” Powell said, whether in COVID epidemiology or in human genetics and long-read sequencing. “Through deep learning, their base calling model is able to reach an overall accuracy of 98.3 percent, and AI-driven single nucleotide variant calling gets them to 99.9 percent accuracy.”
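
The post doesn’t detail Oxford Nanopore’s production basecaller architecture, and real basecallers are trained on enormous signal datasets. Purely to make the idea concrete, here is a minimal sketch of the general pattern: a small convolutional network maps chunks of raw pore current to per-timestep base probabilities and trains with CTC loss, which lets the unsegmented signal align itself to the base sequence. All layer sizes and lengths below are hypothetical.

```python
# Minimal, illustrative sketch of neural basecalling: raw nanopore current
# -> per-timestep base probabilities, trained with CTC loss. This is NOT
# Oxford Nanopore's model; architecture and sizes are hypothetical.
import torch
import torch.nn as nn

BASES = 5  # blank + A, C, G, T (CTC needs a blank symbol at index 0)

class TinyBasecaller(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 64, kernel_size=9, stride=3, padding=4),  # downsample signal
            nn.ReLU(),
            nn.Conv1d(64, 128, kernel_size=9, stride=2, padding=4),
            nn.ReLU(),
        )
        self.head = nn.Linear(128, BASES)

    def forward(self, signal):                 # signal: (batch, 1, time)
        feats = self.encoder(signal)           # (batch, 128, time')
        feats = feats.transpose(1, 2)          # (batch, time', 128)
        return self.head(feats).log_softmax(-1)  # CTC expects log-probabilities

model = TinyBasecaller()
ctc = nn.CTCLoss(blank=0)

signal = torch.randn(8, 1, 600)                 # fake raw current chunks
log_probs = model(signal).transpose(0, 1)       # CTC wants (time, batch, classes)
targets = torch.randint(1, BASES, (8, 40))      # fake base sequences (1..4 = A,C,G,T)
input_lens = torch.full((8,), log_probs.size(0), dtype=torch.long)
target_lens = torch.full((8,), 40, dtype=torch.long)
loss = ctc(log_probs, targets, input_lens, target_lens)
loss.backward()  # gradients for one training step
```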

Path Forward for AI-Powered Healthcare

AI-powered breakthroughs like these have grown in significance amid the pandemic, said Powell.

“The tremendous focus of AI on a single problem in 2020, like COVID-19, really showed us that with that tremendous focus, we can see every piece and part that can benefit from artificial intelligence,” she said. “What we’ve discovered over the last 12 months is only going to propel us further in the future. Everything we’ve learned is applicable for every future drug discovery program there is.”

Across fields as diverse as genome analysis, computational drug discovery and clinical diagnostics, healthcare heavyweights are making strides with GPU-accelerated AI. Hear more about it on Jan. 13 at 11 a.m. Pacific, when Powell joins a Washington Post Live conversation on AI in healthcare.

Subscribe to NVIDIA healthcare news here.


Freeze the Day: How UCSF Researchers Clear Up Cryo-EM Images with GPUs

When photographers take long-exposure photos, they maximize the amount of light their camera sensors receive. The technique helps capture scenes like the night sky, but any motion during the exposure introduces blurring in the final image.

It’s not too different from cryo-electron microscopy, or cryo-EM, which scientists use to study the structure of tiny molecules frozen in vitreous ice. But while motion-induced blur in photography can create beautiful images, in structural biology it’s an unwanted side effect.

Protein samples for cryo-EM are frozen at -196 degrees Celsius to protect the biological structures, which would otherwise be destroyed by the microscope’s high-energy electron beam. But even when frozen, samples are disturbed by the powerful electron dose, causing the same kind of motion that blurs a long-exposure photo.

To get around it, UCSF researchers use specialized cameras to instead capture videos of the biological molecules, so they appear nearly stationary in each frame of the video. Correcting the motion across frames is a computationally demanding task — but can be done in seconds on NVIDIA GPUs.

“If the motion was left uncorrected, we’d lose the high-resolution picture of a molecule’s 3D structures,” said Shawn Zheng, scientific software developer at the University of California, San Francisco and Howard Hughes Medical Institute. “And knowing the structure of a molecule is critical to understanding its function.”

Zheng and his colleagues run MotionCor2, the world’s most widely used motion-correction application, on NVIDIA GPUs to align each molecule in the video from frame to frame — creating a clean image researchers can turn into a 3D model.

These 3D models are essential for scientists to understand the complex chains of interactions taking place in an individual protein, such as spike proteins on the COVID-19 virus, speeding drug and vaccine discovery.
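
MotionCor2’s actual algorithm is iterative and patch-based, correcting local as well as global motion on GPUs. As a toy illustration of the underlying idea only, the NumPy sketch below estimates each frame’s drift relative to the first frame by phase correlation, undoes it, and averages the aligned frames.

```python
# Toy version of the core idea behind motion correction: estimate each frame's
# drift relative to frame 0 by phase correlation, undo it, then average.
# MotionCor2 is far more sophisticated (iterative, patch-based local motion,
# GPU-accelerated); this sketch handles only global integer-pixel shifts.
import numpy as np

def estimate_shift(ref, frame):
    """Return the integer (dy, dx) to roll `frame` by so it aligns with `ref`."""
    cross_power = np.fft.fft2(ref) * np.conj(np.fft.fft2(frame))
    cross_power /= np.abs(cross_power) + 1e-12   # keep phase only
    corr = np.fft.ifft2(cross_power).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    if dy > ref.shape[0] // 2:                   # unwrap to signed shifts
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return dy, dx

def motion_correct(movie):
    """movie: (n_frames, H, W) array -> motion-corrected average image."""
    ref = movie[0]
    aligned = [ref]
    for frame in movie[1:]:
        dy, dx = estimate_shift(ref, frame)
        aligned.append(np.roll(frame, (dy, dx), axis=(0, 1)))
    return np.mean(aligned, axis=0)

# Fake 5-frame movie of a bright square drifting 3 pixels per frame.
movie = np.zeros((5, 64, 64))
for i in range(5):
    movie[i, 20 + 3 * i:30 + 3 * i, 20 + 3 * i:30 + 3 * i] = 1.0
print(motion_correct(movie).max())  # ~1.0; a plain average would smear the square
```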

Solving the Bottleneck

UCSF, a leader in cryo-EM research, has been the source of groundbreaking work to improve the resolution of microscopy images. The technology enables scientists to visualize proteins at an atomic scale — something considered impossible just a decade ago.

But the pipeline is lengthy, involving freezing samples, capturing them on multimillion dollar cryo-EM microscopes, correcting their motion and then reconstructing detailed 3D models of the molecules. To keep things running smoothly, it’s critical that the motion-correction process runs fast enough to keep pace with the new data being collected.

“Cryo-EM microscopes are very expensive instruments. You don’t want it just sitting there idle. But if we have a backlog of movies piled up in the machine’s data storage, nobody else can collect more,” said Zheng. “It’d be a waste of this expensive instrument, and slow down the research of others.”

To achieve rapid motion correction, UCSF’s Center of Advanced Electron Microscopy uses workstations with eight NVIDIA GPUs for each microscope. These workstations are needed to keep up with the cryo-EM data collection, which acquires four movies per microscope per minute.

The GPU setup can run eight jobs concurrently, taking on the iterative process of motion correction for videos with as many as 400 frames, each with nearly 100 million pixels.

To speed the development of new applications, Zheng, who’s used NVIDIA GPUs for his research for a decade, uses a workstation powered by two NVIDIA Tensor Core GPUs. The system can analyze a 70GB microscope movie in under a minute.

Accelerating COVID Research

Zheng and his colleagues also use GPUs to run alignment software for cryo-electron tomography, or cryo-ET. This technique is better suited to study slightly heterogeneous specimens like macromolecules and cells. Samples are tilted at different angles, collecting a series of images that can be aligned and reconstructed into a detailed 3D model.

NVIDIA GPUs can fully automate the reconstruction process, taking a half hour on a single GPU, he says.

In a recent paper in Science, Zheng collaborated with lead researchers from the Netherlands’ Leiden University Medical Center to use cryo-ET to study molecular pores involved in COVID-19 virus replication in cells. A better understanding of this pore structure could help scientists develop a drug that targets it, blocking the virus from replicating in an infected patient.

To learn more about Zheng’s work, watch this on-demand talk from the GPU Technology Conference.

Main image shows a cryo-EM density map for the enzyme beta-galactosidase, illustrating the gradual increase in quality of cryo-EM structures from low to high resolution. Image by Veronica Falconieri and Sriram Subramaniam, National Cancer Institute; public domain.


Out of This World Graphics: ‘Gods of Mars’ Come Alive with NVIDIA RTX Real-Time Rendering

The journey to making the upcoming film Gods of Mars changed course dramatically once real-time rendering entered the picture.

The movie, currently in production, mixes cinematic visual effects with live-action elements. The film crew had planned to make the movie primarily using physical miniature models, but switched gears once they experienced the power of real-time NVIDIA RTX graphics and Unreal Engine.

Director Peter Hyoguchi and producer Joan Webb used an Epic MegaGrant from Epic Games to bring together VFX professionals and game developers to create the film. The virtual production started with scanning the miniature models and animating them in Unreal Engine.

“I’ve been working as a CGI and VFX supervisor for 20 years, and I never wanna go back to older workflows,” said Hyoguchi. “This is a total pivot point for the next 100 years of cinema — everyone is going to use this technology for their effects.”

Hyoguchi and team produced photorealistic worlds in 4K, creating rich intergalactic scenes using a combination of NVIDIA Quadro RTX 6000 GPU-powered Lenovo ThinkStation P920 workstations, ASUS ProArt Display PA32UCX-P monitors, Blackmagic Design cameras and DaVinci Resolve, and the Wacom Cintiq Pro 24.

Stepping Outside the Ozone: Technology Makes Way for More Creativity

Gods of Mars tells the tale of a fighter pilot who leads a team against rebels in a battle on Mars. The live-action elements of the film are supported by LED walls with real-time rendered graphics created from Unreal Engine. Actors are filmed on-set, with a virtual background projected behind them.

To keep the set minimal, the team only builds what actors will physically interact with, and then uses the projected environment from Unreal Engine for the rest of the scenes.

One big advantage of working with digital environments and assets is real-time lighting. In his previous CGI workflows, Hyoguchi and his team would previsualize everything in a grayscale environment, then wait hours for a single frame to render before seeing a preview of what an image or scene would look like.

With Unreal Engine, Hyoguchi sees scenes ray-trace rendered immediately, complete with lights, shadows and colors. He can move around the environment and see how everything would look in the scene, saving weeks of pre-planning.

Real-time rendering also saves money and resources. Hyoguchi doesn’t need to spend thousands of dollars for render farms, or wait weeks for one shot to complete rendering. The RTX-powered ThinkStation P920 renders everything in real time, which leads to more iterations, making way for a much more efficient, flexible and faster creative workflow.

“Ray tracing is what makes this movie possible,” said Hyoguchi. “With NVIDIA RTX and the ability to do real-time ray tracing, we can make a movie with low cost and less people, and yet I still have the flexibility to make more creative choices than I’ve ever had in my life.”

Hyoguchi and his team are shooting the film with Blackmagic Design’s new URSA Mini Pro 12K camera. Capturing such high-resolution footage provides more options in post-production. They can crop images or zoom in for a close-up shot of an actor without worrying about losing resolution.

They can also color and edit scenes in real time using Blackmagic DaVinci Resolve Studio, which uses NVIDIA GPUs to accelerate editing workflows. With the 32-inch ASUS ProArt Display PA32UCX-P monitors, the team calibrated their screens so all the artists can see the same rendered color and details, even while working in different locations across the country.

The Wacom Cintiq Pro 24 pen displays speed up the 3D artist’s workflow, and provide a natural connection between the artist and the Unreal editor, both when moving scene elements around to create the 3D environment and when keyframing actors for animation.

Learn more about Gods of Mars and NVIDIA RTX.


New Year, New Energy: Leading EV Makers Kick Off 2021 with NVIDIA DRIVE

Electric vehicle upstarts have gained a foothold in the industry and are using NVIDIA DRIVE to keep that momentum going.

Nowhere is the trend of electric vehicles more apparent than in China, the world’s largest automotive market, where electric vehicle startups have exploded in popularity. NIO, Li Auto and Xpeng are bolstering the initial growth in new energy vehicles with models that push the limits of everyday driving with extended battery range and AI-powered features.

All three companies doubled their sales in 2020, with a combined volume of more than 103,000 vehicles.

Along with more efficient powertrains, these fleets are also introducing new and intelligent features to daily commutes with NVIDIA DRIVE.

NIO Unveils a Supercharged Compute Platform

Last week, NIO announced a supercomputer to power its automated and autonomous driving features, with NVIDIA DRIVE Orin at its core.

The computer, known as Adam, achieves over 1,000 trillion operations per second (TOPS) of performance with the redundancy and diversity necessary for safe autonomous driving. It also enables personalization in the vehicle, learning from individual driving habits and preferences while continuously improving from fleet data.

The Orin-powered supercomputer will debut in the flagship ET7 sedan, scheduled for production in 2022, and will be in every NIO model to follow.

The NIO ET7, powered by NVIDIA DRIVE Orin.

The ET7 leapfrogs current model capabilities, with more than 600 miles of battery range and advanced autonomous driving. As the first vehicle equipped with Adam, the EV can perform point-to-point autonomy, leveraging 33 sensors and high-performance compute to continuously expand the domains in which it operates  — from urban to highway driving to battery swap stations.

With this centralized, software-defined computing architecture, NIO’s future fleet of EVs will feature the latest AI-enabled capabilities designed to make its vehicles perpetually upgradable.

Li Auto Powers Ahead

In September, standout EV maker Li Auto said it would develop its next generation of electric vehicles using NVIDIA DRIVE AGX Orin.

These new vehicles, developed in collaboration with tier 1 supplier Desay SV, will offer advanced autonomous driving capabilities as well as extended battery range for truly intelligent mobility.

This high-performance platform will enable Li Auto to deploy an independent, advanced autonomous driving system with its next-generation fleet.

The automaker began rolling out its first vehicle, the Li Auto One SUV, in November 2019. Since then, sales have skyrocketed, with a 530 percent increase in volume in December, year-over-year, and a total of 32,624 vehicles in 2020.

The Li Auto One

Li Auto plans to continue this momentum with its upcoming models, packed with even more intelligent features enabled by NVIDIA DRIVE.

Cruising on Xpeng XPilot

Xpeng has been developing on NVIDIA DRIVE since 2018, building a level 3 autopilot system in collaboration with Desay SV.

The technology debuted last April with the Xpeng P7, an all-electric sports sedan developed from the ground up for an intelligent driving future.

The Xpeng P7

The XPilot 3.0 level 3 autonomous driving system leverages NVIDIA DRIVE AGX Xavier as well as a redundant and diverse halo of sensors for automated highway driving and valet parking. XPilot was born in the data center, with NVIDIA’s AI infrastructure for training and testing self-driving deep neural networks.

With high-performance data center GPUs and advanced AI learning tools, this scalable infrastructure allows developers to manage massive amounts of data and train autonomous driving DNNs.

The burgeoning EV market is driving the next decade of personal transportation. And with NVIDIA DRIVE at the core, these vehicles have the intelligence and performance to go the distance.


Adam and EV: NIO Selects NVIDIA for Intelligent, Electric Vehicles

Chinese electric automaker NIO will use NVIDIA DRIVE for advanced automated driving technology in its future fleets, marking the genesis of truly intelligent and personalized NIO vehicles.

During a global reveal event, the EV maker took the wraps off its latest ET7 sedan, which starts shipping in 2022 and features a new NVIDIA-powered supercomputer, called Adam, that uses NVIDIA DRIVE Orin to deploy advanced automated driving technology.

“The cooperation between NIO and NVIDIA will accelerate the development of autonomous driving on smart vehicles,” said NIO CEO William Li. “NIO’s in-house developed autonomous driving algorithms will be running on four industry-leading NVIDIA Orin processors, delivering an unprecedented 1,000+ trillion operations per second in production cars.”

The announcement marks a major step toward the widespread adoption of intelligent, high-performance electric vehicles, improving standards for both the environment and road users.

NIO has been a pioneer in China’s premium smart electric vehicle market. Since 2014, the automaker has been leveraging NVIDIA for its seamless infotainment experience. And now, with NVIDIA DRIVE powering automated driving features in its future vehicles, NIO is set to redefine mobility with continuous improvement and personalization.

“Autonomy and electrification are the key forces transforming the automotive industry,” said Jensen Huang, NVIDIA founder and CEO. “We are delighted to partner with NIO, a leader in the new energy vehicle revolution—leveraging the power of AI to create the software-defined EV fleets of the future.”

An Intelligent Creation

Software-defined and intelligent vehicles require a centralized, high-performance compute architecture to power AI features and continuously receive upgrades over the air.

The new NIO Adam supercomputer is one of the most powerful platforms to run in a vehicle. With four NVIDIA DRIVE Orin processors, Adam achieves more than 1,000 TOPS of performance.

Orin is the world’s highest-performance, most-advanced AV and robotics processor. This supercomputer on a chip is capable of delivering up to 254 TOPS to handle the large number of applications and deep neural networks that run simultaneously in autonomous vehicles and robots, while achieving systematic safety standards such as ISO 26262 ASIL-D.

By using multiple SoCs, Adam integrates the redundancy and diversity necessary for safe autonomous operation. The first two SoCs process the 8 gigabytes of data produced by the vehicle’s sensor set every second. The third Orin serves as a backup to ensure the system can still operate safely in any situation, while the fourth enables local training, improving the vehicle with fleet learning as well as personalizing the driving experience based on individual user preferences.
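
NIO hasn’t published Adam’s software stack; based only on the role split described above, here is a hypothetical sketch of how a four-SoC platform might assign work and fail over to its spare. Names and logic are illustrative, not NIO’s implementation.

```python
# Hypothetical sketch of a four-SoC role split with a hot spare, modeled on
# the public description of NIO's Adam. Not NIO's software.
from dataclasses import dataclass

@dataclass
class Soc:
    name: str
    role: str          # "perception", "backup" or "learning"
    healthy: bool = True

class RedundantStack:
    def __init__(self):
        self.socs = [
            Soc("orin-0", "perception"),  # primary sensor processing
            Soc("orin-1", "perception"),  # together handling ~8 GB/s of sensor data
            Soc("orin-2", "backup"),      # hot spare for safe fallback
            Soc("orin-3", "learning"),    # local fleet learning and personalization
        ]

    def perception_units(self):
        """Active perception SoCs; promote the backup if a primary fails."""
        active = [s for s in self.socs if s.role == "perception" and s.healthy]
        if len(active) < 2:
            spare = next((s for s in self.socs if s.role == "backup" and s.healthy), None)
            if spare:
                active.append(spare)
        return active

stack = RedundantStack()
stack.socs[0].healthy = False  # simulate a primary SoC fault
print([s.name for s in stack.perception_units()])  # ['orin-1', 'orin-2']
```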

With high-performance compute at its core, Adam is a major achievement in the creation of automotive intelligence and autonomous driving.

Meet the ET7

NIO took the wraps off its much-anticipated ET7 sedan — the production version of its original EVE concept first shown in 2017.

The flagship vehicle leapfrogs current model capabilities, with more than 600 miles of battery range and advanced autonomous driving. As the first vehicle equipped with Adam, the ET7 can perform point-to-point autonomy, leveraging 33 sensors and high-performance compute to continuously expand the domains in which it operates  — from urban to highway driving to battery swap stations.

The intelligent sedan ensures a seamless experience from the moment the driver approaches the car. With a highly accurate digital key and soft-closing doors, users can open the car with a gentle touch. Enhanced driver monitoring and voice recognition enable easy interaction with the vehicle. And sensors on the bottom of the ET7 detect the road surface so the vehicle can automatically adjust the suspension for a smoother ride.

With AI now at the center of the NIO driving experience, the ET7 and upcoming NVIDIA-powered models are heralding the new generation of intelligent transportation.


Mercedes-Benz Transforms Vehicle Cockpit with NVIDIA-Powered AI

The AI cockpit has reached galactic proportions with the new Mercedes-Benz MBUX Hyperscreen.

During a digital event, the luxury automaker unveiled the newest iteration of its intelligent infotainment system — a single surface extending from the cockpit to the passenger seat displaying all necessary functions at once. Dubbed the MBUX Hyperscreen, the system is powered by NVIDIA technology and shows how AI can create a truly intuitive and personalized experience for both the driver and passengers.

“The MBUX Hyperscreen reinvents how we interact with the car,” said Sajjad Khan, executive vice president at Mercedes-Benz. “It’s the nerve center that connects everyone in the car with the world.”

Like the MBUX system recently unveiled with the new Mercedes-Benz S-Class, this extended-screen system runs on high-performance, energy-efficient NVIDIA GPUs for instantaneous AI processing and sharp graphics.

A vehicle’s cockpit typically requires a collection of electronic control units and switches to perform basic functions, such as powering entertainment or adjusting the temperature. Using NVIDIA technology, Mercedes-Benz consolidated these components into one AI platform — with three separate screens under one glass surface — to simplify the architecture while creating more space to add new features.

“Zero Layer” User Interface

The driving principle behind the MBUX Hyperscreen is that of the “zero layer” — every necessary driving feature is delivered with a single touch.

However, developing the largest screen ever mounted in a series-built Mercedes-Benz was not enough to achieve this groundbreaking capability. The automaker also leveraged AI to promote commonly used features at relevant times while pushing those not needed to the background.

The deep neural networks powering the system process datasets such as vehicle position, cabin temperature and time of day to prioritize certain features — like entertainment or points of interest recommendations — while always keeping navigation at the center of the display.
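
Mercedes-Benz hasn’t detailed these networks, but the described behavior maps onto a familiar pattern: score candidate features from a context vector and surface the top few while navigation stays pinned. The sketch below is a minimal, hypothetical version of that pattern, with made-up feature names and context dimensions.

```python
# Illustrative "zero layer" ranking sketch: a context vector (e.g., position,
# cabin temperature, time of day) is scored against a set of UI features, and
# the top suggestions surface next to the always-on navigation view.
# Purely hypothetical; not Mercedes-Benz's actual model.
import torch
import torch.nn as nn

FEATURES = ["media", "phone", "climate", "massage", "points_of_interest"]

class ZeroLayerRanker(nn.Module):
    def __init__(self, context_dim=8, n_features=len(FEATURES)):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(context_dim, 32), nn.ReLU(),
            nn.Linear(32, n_features),
        )

    def forward(self, context):
        return self.net(context).softmax(-1)  # relevance score per feature

ranker = ZeroLayerRanker()
context = torch.randn(1, 8)        # made-up normalized context signals
scores = ranker(context).squeeze(0)
top = scores.topk(2).indices.tolist()
print("pinned: navigation | suggested:", [FEATURES[i] for i in top])
```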

“The system always knows what you want and need based on emotional intelligence,” Khan explained.

And these features aren’t just for the driver. Front-seat passengers get a dedicated screen for entertainment and ride information that doesn’t interfere with the driver’s display. It also enables the front seat passenger to share content with others in the car.

Experience Intelligence

This revolutionary AI cockpit experience isn’t a mere concept — it’s real technology that will be available in production vehicles this year.

The MBUX Hyperscreen will debut with the all-electric Mercedes-Benz EQS, pairing electric propulsion with artificial intelligence. With the first-generation MBUX now in 1.8 million cars, the next iteration coming in the redesigned S-Class, and now the MBUX Hyperscreen in the EQS, customers will have a range of AI cockpit options.

And with the entire MBUX family powered by NVIDIA, these systems will constantly deliver new, surprising and intelligent features with high performance and a seamless experience.


In a Quarantine Slump? How One High School Student Used AI to Stay on Track

Canadian high schooler Ana DuCristea has a clever solution for the quarantine slump.

Using AI and natural language processing, she programmed an app capable of setting customizable reminders so you won’t miss any important activities, like baking banana bread or whipping up Dalgona coffee.

The project’s emblematic of how a new generation — with access to powerful technology and training — approaches the once exotic domain of AI.

A decade ago, deep learning was the stuff of elite research labs with big budgets.

Now it’s the kind of thing a smart, motivated high school student can knock out to solve a tangible problem.

DuCristea has been interested in coding since childhood, and spends her spare time teaching herself new skills and taking online AI courses. After winning a Jetson Nano Developer Kit this summer at AI4ALL, an AI camp, she set to work remedying one of her pet peeves: the limited functionality of reminder applications.

She’d long envisioned a more useful app that could snooze for specific lengths of time and set reminders for specific tasks, dates and times. Using the Nano and her background in Python, DuCristea spent her after-school hours creating an app that does just that.

With the app, users can message a bot on Discord requesting a reminder for a specific task, date and time. DuCristea has shared the app’s code on GitHub and plans to continue training it to improve its accuracy and capabilities.

Key Points From This Episode:

  • Her first hands-on experience with the Jetson Nano has only strengthened her intent to pursue software or computer engineering at college, where she’ll continue to learn more about what area of STEM she’d like to focus on.

  • DuCristea’s interest in programming and electronics started at age nine, when her father gifted her a book on Python and she found it so interesting that she worked through it in a week. Since then, she’s taken courses on coding and shares her most recent projects on GitHub.
  • Programming the app took some creativity, as DuCristea didn’t have a large dataset to train on. After trying neural networks and vectorization, she eventually found that template searches worked best for her limited list of examples (a simplified version of that approach is sketched below).
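
Her actual code is on GitHub; as a simplified, hypothetical sketch of the template-search approach, the snippet below matches a message against a couple of hand-written reminder patterns. With little training data, a few templates can beat a small neural model.

```python
# Simplified sketch of template-based reminder parsing, in the spirit of the
# approach described above. Patterns and wording are hypothetical, not
# DuCristea's actual code.
import re
from datetime import datetime

TEMPLATES = [
    # "remind me to water the plants on 2021-01-20 at 18:30"
    re.compile(r"remind me to (?P<task>.+) on (?P<date>\d{4}-\d{2}-\d{2}) at (?P<time>\d{1,2}:\d{2})"),
    # "remind me to stretch in 20 minutes"
    re.compile(r"remind me to (?P<task>.+) in (?P<minutes>\d+) minutes?"),
]

def parse_reminder(message):
    """Return (task, when) for an absolute reminder or (task, minutes) for a relative one."""
    for template in TEMPLATES:
        m = template.search(message.lower())
        if not m:
            continue
        fields = m.groupdict()
        if "date" in fields:
            when = datetime.strptime(f"{fields['date']} {fields['time']}", "%Y-%m-%d %H:%M")
            return fields["task"], when
        return fields["task"], int(fields["minutes"])
    return None

print(parse_reminder("Remind me to bake banana bread on 2021-01-20 at 18:30"))
print(parse_reminder("remind me to stretch in 20 minutes"))
```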

Tweetables:

“There’s so many programs, even exclusively for girls now in STEM — I would say go for them.” — Ana DuCristea [14:55]

“The Jetson Nano is a lot more accessible than most things in AI right now.” — Ana DuCristea [18:51]

You Might Also Like:

AI4Good: Canadian Lab Empowers Women in Computer Science

Doina Precup, associate professor at McGill University and research team lead at AI startup DeepMind, speaks about her personal experiences, along with the AI4Good Lab she co-founded to give women more access to machine learning training.

Jetson Interns Assemble! Interns Discuss Amazing AI Robots They’re Building

NVIDIA’s Jetson interns, recruited at top robotics competitions, discuss what they’re building with NVIDIA Jetson, including a delivery robot, a trash-disposing robot and a remote control car to aid in rescue missions.

A Man, a GAN and a 1080 Ti: How Jason Antic Created ‘De-Oldify’

Jason Antic explains how he created his popular app, De-Oldify, with just an NVIDIA GeForce 1080 Ti and a generative adversarial network. The tool colors old black-and-white shots for a more modern look.

Tune in to the AI Podcast

Get the AI Podcast through iTunes, Google Podcasts, Google Play, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn. If your favorite isn’t listed here, drop us a note.


Make the AI Podcast Better

Have a few minutes to spare? Fill out this listener survey. Your answers will help us make a better podcast.


AI on the Aisles: Startup’s Jetson-powered Inventory Management Boosts Revenue


Penn State University pals Brad Bogolea and Mirza Shah were living in Silicon Valley when they pitched Jeff Gee on their robotics concepts. Fortunately for them, the star designer was working at the soon-to-shutter Willow Garage robotics lab.

So the three of them — Shah was also a software engineer at Willow — joined together and in 2014 founded Simbe Robotics.

The startup’s NVIDIA Jetson-powered bot, dubbed Tally, has since rolled into more than a dozen of the world’s largest retailers. The multitasking robot can navigate stores, scan barcodes and track as many as 30,000 items an hour.

Running on Jetson makes Tally more efficient — it can process data from several cameras and run deep computer vision algorithms onboard. This edge AI capability speeds up Tally’s data capture and processing, delivering inventory and shelf information to Simbe’s customers more quickly and seamlessly while minimizing costs.
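
Simbe hasn’t published Tally’s vision pipeline, but one small, well-defined step of any barcode-reading flow can be shown concretely: after the digits of an EAN-13 barcode are recognized, a checksum validates the read. A minimal sketch:

```python
# Validating an EAN-13 check digit, the last step after a vision model has
# recognized the 13 digits. Illustrative only; not Simbe's pipeline.
def ean13_is_valid(code: str) -> bool:
    """True if `code` is 13 digits whose checksum matches the final digit."""
    if len(code) != 13 or not code.isdigit():
        return False
    digits = [int(c) for c in code]
    # Weights alternate 1, 3, 1, 3, ... across the first 12 digits.
    weighted = sum(d * (3 if i % 2 else 1) for i, d in enumerate(digits[:12]))
    return (10 - weighted % 10) % 10 == digits[12]

print(ean13_is_valid("4006381333931"))  # True: a well-known valid code
print(ean13_is_valid("4006381333932"))  # False: corrupted check digit
```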

Tally makes rounds to scan store inventory up to three times a day, increasing product availability and boosting sales for retailers by reducing out-of-stocks, according to the company.

“We’re providing critical information on what products are not on the shelf, which products might be misplaced or mispriced and up-to-date location and availability,” said Bogolea, Simbe’s CEO.

Forecasting Magic

Using Tally, retail stores are able to better understand what’s happening on store shelves, helping them recognize missed sale opportunities and the benefits of improved inventory management, said Bogolea.

Tally’s inventory data enables its retail partners to offer better visibility to store employees and customers about what’s on store shelves — even before they enter a store.

At Schnuck Markets, for example, where Tally is deployed in 62 stores across the Midwest, the retailer integrates Tally’s product location and availability data into the store’s loyalty app. This lets customers and Instacart shoppers check a store’s product availability and find precise item locations while shopping.

This data has also helped address the surge in online shopping under COVID-19, enabling faster order picking and fulfillment through services like Instacart.

“Those that leverage technology and data in retail are really going to separate themselves from the rest of the pack,” said Bogolea.

There’s an added benefit for store employees, too: workers who were previously busy taking inventory can now focus on other tasks like improving customer service.

In addition to Schnucks, the startup has deployments with Carrefour, Decathlon Sporting Goods, Groupe Casino and Giant Eagle.

Cloud-to-Edge AI 

AI is the key technology enabling the Tally robots to navigate autonomously in a dynamic environment, analyze the vast amount of information collected by its sensors and report a wide range of metrics such as inventory levels, pricing errors and misplaced stock.

Simbe uses NVIDIA GPUs from the cloud to the edge, both to train and to run inference with a variety of AI models that detect the different products on shelves, read barcodes and price labels, and detect obstacles.

By analyzing the vast amount of 2D and 3D sensor data the robot collects, NVIDIA Jetson has enabled extreme optimization of the Tally data capture system and has also helped with localization, according to the company.

Running Jetson on Tally, Simbe is able to process data locally in real time from lidar as well as 2D and 3D cameras to aid in both product identification and navigation. And Jetson has reduced its reliance on processing in the cloud.

“We’re capturing at a far greater frequency and fidelity than has really ever been seen before,” said Bogolea.

“One of the benefits of leveraging NVIDIA Jetson is it gives us a lot of flexibility to start moving more to the edge, reducing our cloud costs.”

Learn more about NVIDIA Jetson, which is used by enterprise customers, developers and DIY enthusiasts for creating AI applications, as well as students and educators for learning and teaching AI.


Hey, Mr. DJ: Super Hi-Fi’s AI Applies Smarts to Sound


Brendon Cassidy, CTO and chief scientist at Super Hi-Fi, uses AI to give everyone the experience of a radio station tailored to their unique tastes.

Super Hi-Fi, an AI startup and member of the NVIDIA Inception program, develops technology that produces smooth transitions, intersperses content meaningfully and adjusts volume and crossfade. Started three years ago, Super Hi-Fi first partnered with iHeartRadio and is now also used by companies such as Peloton and Sonos.

Results show that users like this personalized approach. Cassidy notes that when the company tested MagicStitch, one of its tools that eliminates the gap between songs, customers listening with it turned on spent 10 percent more time streaming music.
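
Super Hi-Fi hasn’t disclosed how MagicStitch works; its AI presumably learns where and how to transition for each pair of songs. For contrast, here is the naive baseline such a system improves on: a fixed equal-power crossfade, sketched in NumPy with made-up parameters.

```python
# Baseline equal-power crossfade between two mono tracks. MagicStitch's AI
# presumably chooses the transition point and shape per song pair; this
# fixed 2-second fade is the naive version it improves on.
import numpy as np

def equal_power_crossfade(a, b, sample_rate=44100, fade_seconds=2.0):
    """Overlap the tail of `a` with the head of `b` using equal-power gains."""
    n = int(sample_rate * fade_seconds)
    t = np.linspace(0.0, np.pi / 2, n)
    fade_out, fade_in = np.cos(t), np.sin(t)   # gains satisfy out^2 + in^2 == 1
    overlap = a[-n:] * fade_out + b[:n] * fade_in
    return np.concatenate([a[:-n], overlap, b[n:]])

rate = 44100
t = np.arange(5 * rate) / rate
song_a = 0.5 * np.sin(2 * np.pi * 220 * t)     # 5 seconds of A3
song_b = 0.5 * np.sin(2 * np.pi * 330 * t)     # 5 seconds of E4
mix = equal_power_crossfade(song_a, song_b, rate)
print(mix.shape)  # 8 seconds total: 5 + 5 minus 2 seconds of overlap
```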

Cassidy’s a veteran of the music industry — from Virgin Digital to the Wilshire Media Group — and recognizes this music experience is finally possible due to GPU acceleration, accessible cloud resources and AI powerful enough to process and learn from music and audio content from around the world.

Key Points From This Episode:

  • Cassidy, a radio DJ during his undergraduate and graduate careers, notes how difficult it is to “hit the post” — or to stop speaking just as the singing of the next song begins. Super Hi-Fi’s AI technology is using deep learning to understand and achieve that timing.
  • Super Hi-Fi’s technology is integrated into the iHeartRadio app, as well as Sonos Radio stations. Cassidy especially recommends the “Encyclopedia of Brittany” station, which is curated by Alabama Shakes’ musician Brittany Howard and integrates commentary and music.

Tweetables:

“This AI is trying to create a form of art in the listening experience.” — Brendon Cassidy [14:28]

“I hope we’re improving the enjoyment that listeners are getting from all of the musical experiences that we have.” — Brendon Cassidy [28:55]

You Might Also Like:

How Yahoo Uses AI to Create Instant eSports Highlight Reels

Like any sports fan, eSports followers want highlight reels of their kills and thrills as soon as possible, whether it’s StarCraft II, League of Legends or Heroes of the Storm. Yale Song, senior research scientist at Yahoo! Research, explains how AI can make instant eSports highlight reels.

Pierre Barreau Explains How Aiva Uses Deep Learning to Make Music

AI systems have been trained to take photos and transform them into the style of great artists, but now they’re learning about music. Pierre Barreau, head of Luxembourg-based startup Aiva Technologies, talks about the soaring music composed by an AI system — and used as the theme song of the AI Podcast.

How Tattoodo Uses AI to Help You Find Your Next Tattoo

What do you do when you’re at a tattoo parlor but none of the images on the wall strike your fancy? Use Tattoodo, an app that uses deep learning to help create a personalized tattoo.

Tune in to the AI Podcast

Get the AI Podcast through iTunes, Google Podcasts, Google Play, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn. If your favorite isn’t listed here, drop us a note.


Make the AI Podcast Better

Have a few minutes to spare? Fill out this listener survey. Your answers will help us make a better podcast.


Sparkles in the Rough: NVIDIA’s Video Gems from a Hardscrabble 2020


Much of 2020 may look best in the rearview mirror, but the year also held many moments of outstanding work, gems worth hitting the rewind button to see again.

So, here’s a countdown — roughly in order of ascending popularity — of 10 favorite NVIDIA videos that hit YouTube in 2020. With two exceptions for videos that deserve a wide audience, all got at least 200,000 views and most, but not all, can be found on the NVIDIA YouTube channel.

#10 Coronavirus Gets a Close-Up

The pandemic was clearly the story of the year.

We celebrated the work of many healthcare providers and researchers pushing science forward to combat it, including the team that won a prestigious Gordon Bell award for using high performance computing and AI to see how the coronavirus works, something they explained in detail in their own video here.

In another one of the many responses to COVID-19, the Folding@Home project received donations of time on more than 200,000 NVIDIA GPUs to study the coronavirus. Using NVIDIA Omniverse, we created a visualization (described below) of data they amassed on their virtual exascale computer.

#9 Cruising into a Ray-Traced Future

Despite the challenging times, many companies continued to deliver top-notch work. For example, Autodesk VRED 2021 showed the shape of things to come in automotive design.

The demo below displays the power of ray tracing and AI to deliver realistic 3D visualizations in real time using RTX technology, snagging nearly a quarter million views. (Note: There’s no audio on this one, just amazing images.)

#8 A Test Drive in the Latest Mercedes

Just for fun — yes, even 2020 included fun — we look back at NVIDIA CEO Jensen Huang taking a spin in the latest Mercedes-Benz S-Class as part of the world premiere of the flagship sedan. He shared the honors with Grammy award-winning Alicia Keys and Formula One champ Lewis Hamilton.

The S-Class uses AI to deliver intelligent features like a voice assistant personalized for each driver. An engineer and a car enthusiast at heart, Huang gave kudos to the work of hundreds of engineers who delivered a vehicle that, with over-the-air software updates, will get better and better.

#7 Playing Marbles After Dark

The NVIDIA Omniverse team pointed the way to a future of photorealistic games and simulations rendered in real time. They showed how a distributed team of engineers and artists can integrate multiple tools to play more than a million polygons smoothly with ray-traced lighting at 1440p on a single GeForce RTX 3090.

The mesmerizing video captured the eyeballs of nearly half a million viewers.

#6 An AI Platform for the Rest of Us

Great things sometimes come in small packages. In October, we debuted the DGX Station A100, a supercomputer that plugs into a standard wall socket to let data scientists do world-class work in AI. More than 400,000 folks tuned in.

#5 Seeing Virtual Meetings Through a New AI

With online gatherings the new norm, NVIDIA Maxine attracted a lot of eyeballs. More than 800,000 viewers tuned into this demo of how we’re using generative adversarial networks to lower the bandwidth and turn up the quality of video conferencing.

#4 What’s Jensen Been Cooking?

Our most energy-efficient video of 2020 was a bit of a tease. It lasted less than 30 seconds, but Jensen Huang’s preview of the first NVIDIA Ampere architecture GPU drew nearly a million viewers.

#3 Voila, Jensen Whips Up the First Kitchen Keynote

In the days of the Great Depression, vacuum tubes flickered with fireside chats. The 2020 pandemic spawned a slew of digital events with GTC among the first of them.

In May, Jensen recorded the first kitchen keynote in his California home. In a playlist of nine virtual courses, he served a smorgasbord in which the NVIDIA A100 GPU was the entrée, surrounded by software side dishes that included frameworks for conversational AI (Jarvis) and recommendation systems (Merlin). The first chapter alone attracted more than 300,000 views.

And we did it all again in October when we featured the first DPU, its DOCA software and a framework to accelerate drug discovery.

#2 Delivering Enterprise AI in a Box

The DGX A100 emerged as one of the favorite dishes from our May kitchen keynote. The 5-petaflops system packs AI training, inference and analytics for any data center.

Some 1.3 million viewers clicked to get a virtual tour of the eight A100 GPUs and 200 Gbit/second InfiniBand links inside it.

#1 Enough of All This Hard Work, Let’s Have Fun!

By September it was high time to break away from a porcupine of a year. With the GeForce RTX 30 Series GPUs, we rolled out engines to create lush new worlds for those whose go-to escape is gaming.

The launch video, viewed more than 1.5 million times, begins with a brief tour of the history of computer games. Good days remembered, good days to come.

For Dessert: Two Bytes of Chocolate

We’ll end 2020, happily, with two special mentions.

Our most watched video of the year was a blistering five-minute clip of game play on DOOM Eternal running all out on a GeForce RTX 3080 in 4K.

And perhaps our sweetest feel good moment of 2020 was delivered by an NVIDIA engineer, Bryce Denney, who hacked a way to let choirs sing together safely in the pandemic. Play it again, Bryce!

 
