Drug Discovery Gets Jolt of AI via NVIDIA Collaborations with AstraZeneca, U of Florida Health

NVIDIA is collaborating with biopharmaceutical company AstraZeneca and the University of Florida’s academic health center, UF Health, on new AI research projects using breakthrough transformer neural networks.

Transformer-based neural network architectures — which have become available only in the last several years — allow researchers to leverage massive datasets using self-supervised training methods, avoiding the need for manually labeled examples during pre-training. These models, equally adept at learning the syntactic rules to describe chemistry as they are at learning the grammar of languages, are finding applications across research domains and modalities.

NVIDIA is collaborating with AstraZeneca on a transformer-based generative AI model for chemical structures used in drug discovery that will be among the very first projects to run on Cambridge-1, which is soon to go online as the UK’s largest supercomputer. The model will be open sourced, available to researchers and developers in the NVIDIA NGC software catalog, and deployable in the NVIDIA Clara Discovery platform for computational drug discovery.

Separately, UF Health is harnessing NVIDIA’s state-of-the-art Megatron framework and BioMegatron pre-trained model — available on NGC — to develop GatorTron, the largest clinical language model to date.

New NGC applications include AtacWorks, a deep learning model that identifies accessible regions of DNA, and MELD, a tool for inferring the structure of biomolecules from sparse, ambiguous or noisy data.

Megatron Model for Molecular Insights

The MegaMolBART drug discovery model being developed by NVIDIA and AstraZeneca is slated for use in reaction prediction, molecular optimization and de novo molecular generation. It’s based on AstraZeneca’s MolBART transformer model and is being trained on the ZINC chemical compound database — using NVIDIA’s Megatron framework to enable massively scaled-out training on supercomputing infrastructure.

The large ZINC database allows researchers to pretrain a model that understands chemical structure, bypassing the need for hand-labeled data. Armed with a statistical understanding of chemistry, the model will be specialized for a number of downstream tasks, including predicting how chemicals will react with each other and generating new molecular structures.
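The idea can be sketched in a few lines of Python. The character-level tokenizer and masking scheme below are simplified illustrations of self-supervised denoising pretraining in general, not the actual MegaMolBART data pipeline:

```python
import random

MASK = "<mask>"

def mask_smiles(smiles, mask_prob=0.15, seed=None):
    """Corrupt a SMILES string by masking random tokens.

    A denoising model is trained to reconstruct the original tokens
    from the corrupted input -- no hand labels needed, since the
    molecule itself is the supervision signal. Character-level
    tokenization is a simplification; real chemical tokenizers treat
    multi-character atoms like 'Cl' as single tokens.
    """
    rng = random.Random(seed)
    tokens = list(smiles)
    corrupted = [MASK if rng.random() < mask_prob else t for t in tokens]
    return corrupted, tokens

# Aspirin's SMILES string serves as an unlabeled training example.
corrupted, target = mask_smiles("CC(=O)Oc1ccccc1C(=O)O", seed=1)
```

Every molecule in a database like ZINC yields such (corrupted, target) pairs for free, which is what makes pretraining at this scale practical.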

“Just as AI language models can learn the relationships between words in a sentence, our aim is that neural networks trained on molecular structure data will be able to learn the relationships between atoms in real-world molecules,” said Ola Engkvist, head of molecular AI, discovery sciences, and R&D at AstraZeneca. “Once developed, this NLP model will be open source, giving the scientific community a powerful tool for faster drug discovery.”

The model, trained using NVIDIA DGX SuperPOD, gives researchers ideas for molecules that don’t exist in databases but could be potential drug candidates. Computational methods, known as in-silico techniques, allow drug developers to search through more of the vast chemical space and optimize pharmacological properties before shifting to expensive and time-consuming lab testing.

This collaboration will use the NVIDIA DGX A100-powered Cambridge-1 and Selene supercomputers to run large workloads at scale. Cambridge-1 is the largest supercomputer in the U.K., ranking No. 3 on the Green500 and No. 29 on the TOP500 list of the world’s most powerful systems. NVIDIA’s Selene supercomputer topped the most recent Green500 and ranks fifth on the TOP500.

Language Models Speed Up Medical Innovation

UF Health’s GatorTron model — trained on records from more than 50 million interactions with 2 million patients — is a breakthrough that can help identify patients for lifesaving clinical trials, predict and alert health teams about life-threatening conditions, and provide clinical decision support to doctors.

“GatorTron leveraged over a decade of electronic medical records to develop a state-of-the-art model,” said Joseph Glover, provost at the University of Florida, which recently boosted its supercomputing facilities with NVIDIA DGX SuperPOD. “A tool of this scale will enable healthcare researchers to unlock insights and reveal previously inaccessible trends from clinical notes.”

Beyond clinical medicine, the model also accelerates drug discovery by making it easier to rapidly create patient cohorts for clinical trials and for studying the effect of a certain drug, treatment or vaccine.

It was created using BioMegatron, the largest biomedical transformer model ever trained, developed by NVIDIA’s applied deep learning research team using data from the PubMed corpus. BioMegatron is available on NGC through Clara NLP, a collection of NVIDIA Clara Discovery models pretrained on biomedical and clinical text.

“The GatorTron project is an exceptional example of the discoveries that happen when experts in academia and industry collaborate using leading-edge artificial intelligence and world-class computing resources,” said David R. Nelson, M.D., senior vice president for health affairs at UF and president of UF Health. “Our partnership with NVIDIA is crucial to UF emerging as a destination for artificial intelligence expertise and development.”

Powering Drug Discovery Platforms

NVIDIA Clara Discovery libraries and NVIDIA DGX systems have been adopted by computational drug discovery platforms, too, boosting pharmaceutical research.

  • Schrödinger, a leader in chemical simulation software development, today announced a strategic partnership with NVIDIA that includes research in scientific computing and machine learning, optimization of Schrödinger applications on NVIDIA platforms, and a joint solution around NVIDIA DGX SuperPOD to evaluate billions of potential drug compounds within minutes.
  • Biotechnology company Recursion has installed BioHive-1, a supercomputer based on the NVIDIA DGX SuperPOD reference architecture that, as of January, is estimated to rank No. 58 on the TOP500 list of the world’s most powerful computer systems. BioHive-1 will allow Recursion to complete deep learning projects within a day that previously took a week on its existing cluster.
  • Insilico Medicine, a partner in the NVIDIA Inception accelerator program, recently announced the discovery of a novel preclinical candidate to treat idiopathic pulmonary fibrosis — the first example of an AI-designed molecule for a new disease target nominated for clinical trials. Compounds were generated on a system powered by NVIDIA Tensor Core GPUs, taking less than 18 months and under $2 million from target hypothesis to preclinical candidate selection.
  • Vyasa Analytics, a member of the NVIDIA Inception accelerator program, is using Clara NLP and NVIDIA DGX systems to give its users access to pretrained models for biomedical research. The company’s GPU-accelerated Vyasa Layar Data Fabric is powering solutions for multi-institutional cancer research, clinical trial analytics and biomedical data harmonization.

Learn more about NVIDIA’s work in healthcare at this week’s GPU Technology Conference, which kicks off with a keynote address by NVIDIA CEO Jensen Huang. Registration is free. The healthcare track includes 16 live webinars, 18 special events and over 100 recorded sessions.

Subscribe to NVIDIA healthcare news and follow NVIDIA Healthcare on Twitter.

The post Drug Discovery Gets Jolt of AI via NVIDIA Collaborations with AstraZeneca, U of Florida Health appeared first on The Official NVIDIA Blog.


An Engine of Innovation: Sony Levels Up for the AI Era

If you want to know what the next big thing will be, ask someone at a company that invents it time and again.

“AI is a key tool for the next era, so we are providing the computing resources our developers need to generate great AI results,” said Yuichi Kageyama, general manager of Tokyo Laboratory 16 in the R&D Center of Sony Group Corporation.

Called GAIA internally, the lab’s computing resources act as a digital engine serving all Sony Group companies. And it’s about to get a second fuel injection of accelerated computing for AI efforts across the corporation.

Sony’s engineers are packing machine learning smarts into products ranging from its Xperia smartphones and entertainment robot, aibo, to a portfolio of imaging components for everything from professional and consumer cameras to factory automation and satellites. It’s even using AI to build the next generation of advanced imaging chips.

More Zip, Fewer Tolls

To move efficiently into the AI era, Sony is installing a cluster of NVIDIA DGX A100 systems linked on an NVIDIA Mellanox InfiniBand network. It expands an existing system built on NVIDIA V100 Tensor Core GPUs, commissioned in October when the company brought AI training in-house, that is now running at near full utilization.

“When we were using cloud services, AI developers worried about the costs, but now they can focus on AI development on GAIA,” said Kageyama.

An in-house AI engine torques performance, too. One team designed a deep-learning model for delivering super-resolution images and trained it nearly 16x faster by adding more resources to the job, shortening a month’s workload to a day.

“With the computing power of the DGX A100, its expanded GPU memory and faster InfiniBand networking, we expect to see even greater performance on larger datasets,” said Yoshiki Tanaka, who oversees HPC and distributed deep learning technologies for Sony’s developers.

Powering an AI Pipeline

Sony posted fast speeds in deep learning back in 2018, accelerating its Neural Network Libraries on a system at Japan’s National Institute of Advanced Industrial Science and Technology. And it’s already rolling out products powered by machine learning, such as its Airpeak drone for professional filmmakers, shown at CES this year.

There’s plenty more to come.

“We will see good results in our fiscal 2021 because we have collaborations with many business teams who have started some good projects,” Kageyama said.

NVIDIA is putting its shoulder to the wheel with software and services to “build a culture of using GPUs,” he added.

For example, Sony developers use NGC, NVIDIA’s online container registry, for all the software components they need to get an AI app up and running.

Sony even created a container of its own, now available on NGC, sporting its Neural Network Libraries and other utilities. It supplements NVIDIA’s containers for work in popular environments like PyTorch and TensorFlow.

Drivers Give a Thumbs Up

Developers tell Kageyama’s team that having their code in one place helps simplify and speed their work.

Some researchers use the system for high performance computing, tapping into NVIDIA’s CUDA software that accelerates a diverse set of technical applications including AI.

To keep it all running smoothly, NVIDIA provided a job scheduler, as well as additions to its libraries that help Sony scale apps across multiple GPUs.

“Good management software is important for achieving fairness and high utilization on such a complex system,” said Masahiro Hara, who leads development of the GAIA system.

An Eye Toward Analytics

NVIDIA also helped Sony create training programs on how to use its software on GAIA.

Looking ahead, Sony is interested in expanding its work in data analytics and simulations. It’s evaluating RAPIDS, open-source software NVIDIA helped design to let Python programmers access the power of GPUs for data science.
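Part of RAPIDS’s appeal is that its cuDF library deliberately mirrors the pandas API, so much existing data science code needs little more than a changed import to run on a GPU. The sketch below uses pandas as a stand-in; with RAPIDS installed (and a supported GPU), the same code is intended to work with `import cudf as pd`:

```python
import pandas as pd  # with RAPIDS: `import cudf as pd`

# Aggregate readings by device -- the same dataframe code runs on
# CPU (pandas) or GPU (cuDF) because the two APIs mirror each other.
df = pd.DataFrame({
    "device": ["a", "a", "b", "b", "b"],
    "reading": [1.0, 3.0, 2.0, 4.0, 6.0],
})
means = df.groupby("device")["reading"].mean()
print(means["a"], means["b"])  # -> 2.0 4.0
```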

At the end of a work-from-home day keeping Sony ahead of the pack in AI, Kageyama enjoys playing with his kids who keep their dad on his digital toes. “I’m a beginner in Minecraft, and they’re much better than me,” he said.


Siege the Day as Stronghold Series Headlines GFN Thursday

It’s Thursday, which means it’s GFN Thursday — when GeForce NOW members can learn what new games and updates are streaming from the cloud.

This GFN Thursday, we’re checking in on one of our favorite gaming franchises, the Stronghold series from Firefly Studios. We’re also sharing some sales Firefly is running on the Stronghold franchise. And of course, we have more games joining the GeForce NOW library.

Fortify Your Castle

Build your castle, defend it, and expand your kingdom. That’s the Stronghold way.

The Stronghold series focuses on “castle sim” gameplay, challenging players to become the lord of their kingdom. Your goal is to build a stable economy and raise a strong military to defend against invaders, destroy enemy castles and accomplish your mission objectives.

Firefly’s latest entry in the series, Stronghold: Warlords, expands on the formula by granting options to recruit, upgrade and command AI-controlled warlords that increase your influence and give you more options in each battlefield.

As you answer threats from Great Khans, Imperial warlords and Shōgun commanders, you’ll lead your forces to victory and defend against massive sieges. It wouldn’t be a Stronghold game if you didn’t watch the great hordes of your enemies collapse as they crash against your defenses.

Stronghold: Warlords joined the GeForce NOW library at the game’s release on March 9, and members can muster their forces across nearly all of their devices, even low-powered rigs or Macs.

“Rather than porting to new platforms, we love that GeForce NOW can stream the real PC version to players regardless of their system,” said Nicholas Tannahill, marketing director at Firefly Studios. “We can focus on improving one build, whilst our players can take their kingdoms with them.”

A Kingdom for the Ages

GeForce NOW members can oversee each skirmish in Stronghold: Warlords across all their supported devices.

Firefly released Stronghold: Warlords only a month ago but already has robust plans for content updates.

Those plans include a free update on April 13 that adds a new AI character, Sun Tzu, plus AI invasions in Free Build mode and a new Free Build map. This update is just the beginning of how Firefly will keep giving gamers new challenges to master as they grow their kingdoms.

To celebrate our work with Firefly to bring the Stronghold franchise to GeForce NOW, the studio’s games are currently on sale on Steam. Members can find more info on the Firefly games streaming from the cloud, and their current Steam discounts, below.

Let’s Play Today

Of course, GFN Thursday has even more games in store for members. In addition to the rest of the Stronghold franchise, members can look for the following games to join our library:

  • Aron’s Adventure (day-and-date release on Steam, April 7)
  • The Legend of Heroes: Trails of Cold Steel IV (day-and-date release on Steam, April 9)
  • EARTH DEFENSE FORCE: IRON RAIN (Steam)
  • Spintires (Steam)
  • Stronghold Crusader HD (80 percent off on Steam for a limited time)
  • Stronghold 2: Steam Edition (60 percent off on Steam for a limited time)
  • Stronghold HD (70 percent off on Steam for a limited time)
  • Stronghold Crusader 2 (90 percent off on Steam for a limited time)
    • Stronghold Crusader 2 DLC (20-50 percent off on Steam for a limited time)
  • Stronghold 3 Gold (70 percent off on Steam for a limited time)
  • Stronghold Kingdoms (free-to-play on Steam)
    • Stronghold Kingdoms (Starter Pack) (70 percent off on Steam for a limited time)
  • Stronghold Legends: Steam Edition (60 percent off on Steam for a limited time)
  • UNDER NIGHT IN-BIRTH Exe:Late[cl-r] (Steam)

Will you accept the challenge and build your kingdom in a Stronghold game this weekend? Let us know on Twitter or in the comments below.


NVIDIA’s Shalini De Mello Talks Self-Supervised AI, NeurIPS Successes

Shalini De Mello, a principal research scientist at NVIDIA who’s made her mark inventing computer vision technology that contributes to driver safety, finished 2020 with a bang — presenting two posters at the prestigious NeurIPS conference in December.

A 10-year NVIDIA veteran, De Mello works on self-supervised and few-shot learning, 3D reconstruction, viewpoint estimation and human-computer interaction.

She told NVIDIA AI Podcast host Noah Kravitz about her NeurIPS submissions on reconstructing 3D meshes and self-learning transformations for improving head and gaze redirection — both significant challenges for computer vision.

De Mello’s first poster demonstrates how she and her team recreate 3D models in motion without requiring annotations of 3D meshes, 2D keypoints or camera pose — even for such kinetic figures as animals in the wild.

The second poster takes on the issue of datasets in which large portions are unlabeled — focusing specifically on datasets of human face images with many variables, including lighting, reflections, and head and gaze orientation. De Mello developed an architecture that can learn these variations on its own and control them.

De Mello intends to continue focusing on creating self-supervising AI systems that require less data to achieve the same quality output, which she envisions ultimately helping to reduce bias in AI algorithms.

Key Points From This Episode:

  • Early in her career at NVIDIA, De Mello noticed that technologies for looking inside the car cabin weren’t as mature as the algorithms for automotive vision outside the car. She focused her research on the former, leading to the creation of NVIDIA’s DRIVE IX product for AI-based automotive interfaces in cars.
  • While science has been a lifelong passion, De Mello discovered an appreciation for art and found the perfect blend of the two in signal and image processing. She could immediately see the effects of AI on visual content.

Tweetables:

“We as humans are able to learn effectively with less data — how can we make learning systems do the same? This is a fundamental question to answer for the viability of AI.” [29:29]

“Looking back at my career, the one thing I’ve learned is that it’s really important to follow your passion.” [32:37]

You Might Also Like:

Behind the Scenes at NeurIPS with NVIDIA and Caltech’s Anima Anandkumar

Anima Anandkumar, NVIDIA’s director of machine learning research and Bren professor in Caltech’s CMS department, joins AI Podcast host Noah Kravitz to talk about NeurIPS 2020 and to discuss what she sees as the future of AI.

MIT’s Jonathan Frankle on “The Lottery Ticket Hypothesis”

Jonathan Frankle, a Ph.D. student at MIT, discusses a paper he co-authored on “The Lottery Ticket Hypothesis,” which promises to advance our understanding of why neural networks, and deep learning as a whole, work so well.

NVIDIA’s Neda Cvijetic Explains the Science Behind Self-Driving Cars

Neda Cvijetic, senior manager of autonomous vehicles at NVIDIA, leads the NVIDIA DRIVE Labs series of videos and blogs that break down the science behind autonomous vehicles. She takes NVIDIA AI Podcast host Noah Kravitz behind the wheel of a (metaphorical) self-driving car.

Tune in to the AI Podcast

Get the AI Podcast through iTunes, Google Podcasts, Google Play, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn. If your favorite isn’t listed here, drop us a note.


Make the AI Podcast Better

Have a few minutes to spare? Fill out this listener survey. Your answers will help us make a better podcast.


What Will NVIDIA CEO Jensen Huang Cook Up This Time at NVIDIA GTC?

Don’t blink. Accelerated computing is moving innovation forward faster than ever.

And there’s no way to get smarter, quicker, about how it’s changing your world than to tune in to NVIDIA CEO Jensen Huang’s GTC keynote Monday, April 12, starting at 8:30 a.m. PT.

The keynote, delivered again from the kitchen in Huang’s home, will kick off a conference with more than 1,500 sessions covering just about every innovation — from quantum computing to AI — that benefits from moving faster.

Factories of the Future and More…

In his address, Huang will share the company’s vision for the future of computing from silicon to software to services, and from the edge to the data center to the cloud.

A highlight: Huang will detail NVIDIA’s vision for manufacturing and you’ll get a chance to meet “Dave,” who is exploring the Factory of the Future.

Be on the Hunt for Some Surprises

And, to have a little quick fun, we’ve added a few surprises – so be on the lookout. Watch the @NVIDIAGTC Twitter handle for clues and more details.

Stick Around

There’s no need to register for GTC to watch the keynote. But if you’re inspired, it’s a great way to explore all the trends Huang will touch on at GTC — and more.

For more than a decade, GTC has been the place to see innovations that have changed the world. More than 100,000 developers, researchers and IT professionals have already registered to join this year’s conference.

Registration is free and open to all.

Where to Watch

Mark the date — April 12 at 8:30 a.m. PT — on your calendar. Here’s where you can watch live:

U.S.:

Latin America:

Asia:

See you there.



NVIDIA-Powered Systems Ready to Bask in Ice Lake

Data-hungry workloads such as machine learning and data analytics have become commonplace. To cope with these compute-intensive tasks, enterprises need accelerated servers that are optimized for high performance.

Intel’s 3rd Gen Intel Xeon Scalable processors (code-named “Ice Lake”), launched today, are based on a new architecture that enables a major leap in performance and scalability. Enhanced with NVIDIA GPUs and networking, these new systems are an ideal platform for enterprise accelerated computing, with features well-suited to GPU-accelerated applications.

Ice Lake platform benefits for accelerated computing.

The move to PCIe Gen 4 doubles the data transfer rate of the prior generation, matching the native speed of NVIDIA Ampere architecture-based GPUs such as the NVIDIA A100 Tensor Core GPU. This speeds throughput to and from the GPU, which is especially important for machine learning workloads that involve vast amounts of training data. It also improves transfer speeds for data-intensive tasks like 3D design for NVIDIA RTX Virtual Workstations, accelerated by the NVIDIA A40 and other data center GPUs.

Faster PCIe performance also accelerates GPU direct memory access transfers. Faster I/O communication of video data between the GPU and GPUDirect for Video-enabled devices delivers a powerful solution for live broadcasts.

The higher data rate additionally enables networking speeds of 200Gb/s, such as in the NVIDIA ConnectX family of HDR 200Gb/s InfiniBand adapters and 200Gb/s Ethernet NICs, as well as the upcoming NDR 400Gb/s InfiniBand adapter technology.

The Ice Lake platform supports 64 PCIe lanes, so more hardware accelerators – including GPUs and networking cards – can be installed in the same server, enabling a greater density of acceleration per host. This also means that greater user density can be achieved for multimedia-rich VDI environments accelerated by the latest NVIDIA GPUs and NVIDIA Virtual PC software.

These enhancements allow for unprecedented scaling of GPU acceleration. Enterprises can tackle the biggest jobs by using more GPUs within a host, as well as more effectively connecting GPUs across multiple hosts.

Intel has also made Ice Lake’s memory subsystem more performant. The number of DDR4 memory channels has increased from six to eight, and the maximum memory data rate has risen to 3,200 MT/s. This allows greater bandwidth for data transfer from main memory to the GPU and network, which can increase throughput for data-intensive workloads.
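Some back-of-the-envelope arithmetic puts these numbers in perspective. The sketch below computes theoretical peak bandwidths from the published specs (PCIe Gen 4 signals at 16 GT/s per lane with 128b/130b encoding; each DDR4 channel is 64 bits wide); real-world throughput runs somewhat lower:

```python
# Theoretical peak bandwidth for a PCIe Gen 4 x16 link.
# 16 GT/s per lane, 128b/130b encoding -> ~1.97 GB/s per lane per direction.
lanes = 16
gen4_per_lane = 16 * (128 / 130) / 8     # GB/s per lane
pcie4_x16 = lanes * gen4_per_lane        # ~31.5 GB/s each way

# Peak bandwidth for eight channels of DDR4-3200.
# 3200 MT/s x 8 bytes per transfer (64-bit channel) x 8 channels.
channels = 8
ddr4_3200 = 3200e6 * 8 * channels / 1e9  # ~204.8 GB/s

print(f"PCIe Gen 4 x16: {pcie4_x16:.1f} GB/s")
print(f"8-ch DDR4-3200: {ddr4_3200:.1f} GB/s")
```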

Finally, the processor itself has improved in ways that will benefit accelerated computing workloads. The 10-15 percent increase in instructions per clock can lead to an overall performance improvement of up to 40 percent for the CPU portion of accelerated workloads. There are also more cores — up to 76 in the Platinum 9xxx variant. This will enable a greater density of virtual desktop sessions per host, so that GPU investments in a server can go further.

We’re excited to see partners already announcing new Ice Lake systems accelerated by NVIDIA GPUs, including Dell Technologies with the Dell EMC PowerEdge R750xa, purpose-built for GPU acceleration, and new Lenovo ThinkSystem Servers, built on 3rd Gen Intel Xeon Scalable processors and PCIe Gen4, with several models powered by NVIDIA GPUs.

Intel’s new Ice Lake platform, paired with accelerator hardware, is a great choice for enterprise customers planning to update their data centers. Its architectural enhancements enable enterprises to run accelerated applications with better performance at data center scale, and our mutual customers will be able to quickly experience those benefits.

Visit the NVIDIA Qualified Server Catalog to see a list of GPU-accelerated server models with Ice Lake CPUs, and be sure to check back as more systems are added.


World of Difference: GTC to Spotlight AI Developers in Emerging Markets

Startups don’t just come from Silicon Valley — they hail from Senegal, Saudi Arabia, Pakistan, and beyond. And hundreds will take the stage at the GPU Technology Conference.

GTC, running April 12-16, will spotlight developers and startups advancing AI in Africa, Latin America, Southeast Asia, and the Middle East. Registration is free, and provides access to 1,500+ talks, as well as dozens of hands-on training sessions, demos and networking events.

Several panels and talks will focus on supporting developer ecosystems in emerging markets and opening access for communities to solve pressing regional problems with AI.

NVIDIA Inception, an acceleration platform for AI and data science startups, will host an Emerging Markets Pavilion where attendees can catch on-demand lightning talks from startup founders in healthcare, retail, energy and financial services. And developers from around the world will have access to online training programs through the NVIDIA Deep Learning Institute.

Beyond GTC, NVIDIA is exploring opportunities and pathways to reach data science and deep learning developers around the world. We’re working with groups like the data science competition platform Zindi to sponsor AI hackathons in Africa — and so are our NVIDIA Inception members, like Instadeep, an AI startup with offices in Tunisia, Nigeria, Kenya, England and France.

Programs like these, including the NVIDIA Developer Program, aim to support the next generation of developers, innovators and leaders with the resources to drive AI breakthroughs worldwide.

Focus on Emerging Developer Communities

While AI developers and startup founders come from diverse backgrounds and places, not all receive equivalent support and opportunities. At GTC, speakers from NVIDIA, Amazon Web Services, Google and Microsoft will join nonprofit founders and startup CEOs to discuss how we can bolster developer ecosystems in emerging markets.

Session topics include:

Startups Star in the NVIDIA Inception Pavilion

The NVIDIA Inception program includes more than 7,500 AI and data science startups from around the world. More than 300 will present at GTC.

It all kicks off after NVIDIA CEO Jensen Huang’s opening keynote on April 12, with a panel led by Jeff Herbst, our VP of business development and head of NVIDIA Inception.

The panel, AI Startups: NVIDIA Inception Insights and Trends from Around the World, will discuss efforts and challenges to nurture a broad cohort of young companies, including those from underserved and underrepresented markets. In addition to reps from NVIDIA, the panel will include Noga Tal, global director of partnerships at Microsoft for Startups; Maribel Lopez, co-founder of the Emerging Technology Research Council; and Badr Idrissi, CEO of Atlan Space, a Morocco-based NVIDIA Inception member.

Hosted by NVIDIA Inception, a virtual Emerging Markets Pavilion will feature global startups including:

Visit the GTC site to learn more and register.


Harvesting AI: Startup’s Weed Recognition for Herbicides Grows Yield for Farmers

When French classmates Guillaume Jourdain, Hugo Serrat and Jules Beguerie were looking at applying AI to agriculture in 2014 to form a startup, it was hardly a sure bet.

It was early days for such AI applications, and people said it couldn’t be done. But farmers they spoke with wanted it.

So they rigged together a crude demo to show that a GeForce GPU could run a weed-identification network with a camera. And next thing you know, they had their first customer-investor.

In 2016, the former dorm-mates at École Nationale Supérieure d’Arts et Métiers, in Paris, founded Bilberry. The company today develops weed recognition powered by the NVIDIA Jetson edge AI platform for precision application of herbicides at corn and wheat farms, offering as much as a 92 percent reduction in herbicide usage.

Driven by advances in AI and pressures on farmers to reduce their use of herbicides, weed recognition is starting to see its day in the sun. A bumper crop of AI agriculture companies — FarmWise, SeeTree, Smart Ag and John Deere-owned Blue River — is plowing this field.

Farm Tech 2.0

Early agriculture tech was just scratching the surface of what is possible. Using infrared, it focused on the “green on brown” problem: detecting plants — crops and weeds alike — against bare dirt, and blasting every plant with herbicide, said Serrat, the company’s CTO.

Today, the sustainability race is on to treat “green on green,” or just the weeds near the crop, said Serrat.

“Making the distinction between weeds and crops and acting on it in real time — this is what everyone is fighting for — that’s the actual holy grail,” he said. “To achieve this requires split-second inference in the field with NVIDIA GPUs running computer vision.”

Losses in corn yields due to ineffective treatment of weeds can run roughly 15 percent to 20 percent, according to Bilberry.

The startup’s customers for smart sprayers include agriculture equipment companies Agrifac, Goldacres, Dammann and Berthoud.

Cutting Back Chemicals

Bilberry deploys its NVIDIA Jetson-powered weed recognition on tractor booms that can span the width of a U.S. football field — about 160 feet. It runs 16 cameras on 16 Jetson TX2 modules and can analyze weeds at 17 frames per second, triggering split-second herbicide squirts while traveling at 15 miles per hour.
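A rough calculation with the article’s figures shows how tight the real-time budget is for each camera, and how little ground the sprayer covers between frames:

```python
# Per-frame timing budget for in-field inference, from the stated specs.
mph_to_mps = 0.44704            # meters per second per mile-per-hour
speed = 15 * mph_to_mps         # ~6.7 m/s ground speed
fps = 17                        # frames analyzed per second per camera

budget_ms = 1000 / fps          # ~59 ms to infer and actuate per frame
travel_per_frame = speed / fps  # ~0.39 m traveled between frames
```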

To achieve this blazing-fast inference performance for rapid recognition of weeds, Bilberry exploited the NVIDIA JetPack SDK for TensorRT optimizations of its algorithms. “We push it to the limits,” said Serrat.

Bilberry tapped into INT8 weight quantization, which enables more efficient deployment of deep learning models — particularly helpful for compact embedded systems where memory and power constraints rule. It allowed the team to use 8-bit integers in place of floating-point numbers; integer math reduces memory use, computational load and application latency.
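The core of the technique can be sketched in a few lines of NumPy. This is a minimal, illustrative symmetric INT8 weight quantizer — not TensorRT’s actual calibration pipeline, which chooses scales more carefully:

```python
import numpy as np

def quantize_int8(weights):
    """Symmetric INT8 quantization: map floats into [-127, 127].

    Stores one float scale per tensor; the int8 values plus the scale
    approximate the original weights at a quarter of float32's memory.
    """
    scale = np.abs(weights).max() / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.array([0.02, -1.27, 0.5, 0.89], dtype=np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)  # close to w, within one quantization step
```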

Bilberry is a member of NVIDIA Inception, a virtual accelerator program that helps startups in AI and data science get to market faster.

Winners: Environment, Yields

The startup’s smart sprayers can now dramatically reduce herbicide usage by pinpointing treatments. That can make an enormous difference on the runoff of chemicals into the groundwater, the company says. It can also improve plant yields by reducing the friendly fire on crops.

“You need to apply the right amount of herbicides to weeds — if you apply too little, the weed will keep growing and creating new seeds. Bilberry can do this at a rate of 242 acres per hour with our biggest unit,” said Serrat.

The focus on reducing agricultural chemicals comes as Europe tightens carbon caps affecting farmers and as consumers embrace organic foods. U.S. organic produce sales grew 14 percent year over year to $8.5 billion in 2020, according to Nielsen data.

Potato-Sorting Problem

Bilberry recently launched a potato-sorting application in partnership with Downs. Potatoes are traditionally sorted by hand as they move slowly across a conveyor belt. But food processors struggle to find the labor, and the monotonous work is hard to stay focused on for hours, causing errors.

“It’s really boring — doing it all day, you become crazy,” said Serrat. “And it’s seasonal, so when they need someone, it’s now, and so they’re always having problems getting enough labor.”

This makes it a perfect task for AI. The startup trained its potato-sorting network to spot bad potatoes, green potatoes, cut potatoes, rocks and dirt clods among the good spuds. Running on the Jetson Xavier, the vision platform sends a signal to one of the doors at the end of the conveyor belt to allow only good potatoes to pass.
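
The classify-then-actuate step can be sketched as below. This is a hypothetical illustration: the class labels and the `open_reject_door` hook are assumptions for the sketch, since Bilberry's actual software interface is not public.

```python
# Hypothetical sketch of the sort-and-reject loop described above.
# Class labels and the door-control hook are illustrative, not
# Bilberry's actual API.
REJECT_CLASSES = {"bad_potato", "green_potato", "cut_potato",
                  "rock", "dirt_clod"}

def route(detections, open_reject_door):
    """Divert the item if any rejected class appears in the detections."""
    if any(label in REJECT_CLASSES for label in detections):
        open_reject_door()
        return "rejected"
    return "passed"

calls = []
status = route(["rock"], open_reject_door=lambda: calls.append(1))
print(status)  # rejected
```

The real system would feed per-frame classifier output into `detections` and time the door signal to the belt speed.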

“This is the part I love, to build software that handles something moving and has a real impact,” he said.


The post Harvesting AI: Startup’s Weed Recognition for Herbicides Grows Yield for Farmers appeared first on The Official NVIDIA Blog.

We Won’t See You There: Why Our Virtual GTC’s Bigger Than Ever

Call it an intellectual Star Wars bar. You could run into just about anything at GTC.

Princeton’s William Tang would speak about using deep learning to unleash fusion energy, UC Berkeley’s Gerry Zhang would talk about hunting for alien signals, Airbus A3’s Arne Stoschek would describe flying autonomous pods.

Want to catch it all? Run. NVIDIA’s GPU Technology Conference has long been almost too much to take in — even if you had fresh sneakers and an employer willing to give you a few days.

But a strange thing happened when this galaxy of astronomers and business leaders and artists and game designers and roboticists went virtual, and free. More people showed up. Suddenly this galaxy of content and connections is anything but far away.

100,000+ Attendees

GTC, which kicks off April 12 with NVIDIA CEO Jensen Huang’s keynote, is a technology conference like no other because it’s not just about technology. It’s about putting technology to work to accelerate what you do, (just about) whatever you do.

We’re expecting more than 100,000 attendees to log into our latest virtual event. We’ve lined up more than 1,500 sessions, and more than 2,200 speakers. That’s more than 1,100 hours of content from 11 industries and in 13 broad topic areas.

There’s no way we could have done this if it wasn’t virtual. And now that it’s entirely virtual — right down to the “Dinner with Strangers” networking event — you can consume as much as you want. No sneakers required.

For Business Leaders

Our weeklong event kicks off with a keynote on April 12 at 8:30 a.m. PT from NVIDIA founder and CEO Jensen Huang. It’ll be packed with demos and news.

Following the keynote, you’ll hear from execs at top companies, including Girish Bablani, corporate vice president for Microsoft Azure; Rene Haas, president of Arm’s IP Products Group; Daphne Koller, founder and CEO of Insitro and co-founder of Coursera; Epic Games CTO Kim Libreri; and Hildegard Wortmann, member of the board of management at Audi AG.

They’ll join leaders from Adobe, Amazon, Facebook, GE Renewable Energy, Google, Microsoft, and Salesforce, among many others.

For Developers and Those Early in Their Careers

If you’re just getting started with your career, our NVIDIA Deep Learning Institute will offer nine instructor-led workshops on a wide range of advanced software development topics in AI, accelerated computing and data science.

We also have a track of 101/Getting Started talks from our always popular “Deep Learning Demystified” series. These sessions can help anyone get oriented on the fundamentals of accelerated data analytics, high-level use cases and problem-solving methods — and how deep learning is transforming every industry.

Sessions will be offered live, online, in many time zones and in English, Chinese, Japanese and Korean. Participants can earn an NVIDIA DLI certificate to demonstrate subject-matter competency.

We’re also working with minority-serving institutions and organizations to offer their communities free seats for daylong hands-on certification classes. GTC is a forum for all communities to engage with the leading edge of AI and other groundbreaking technologies.

For Technologists

If you’re a technologist, you’ll be able to meet the minds that have created the technologies that have defined our era.

GTC will host three Turing Award winners — Yoshua Bengio, Geoffrey Hinton, Yann LeCun — whose work in deep learning has upended the technological landscape of the 21st century.

GTC will also host nine Gordon Bell winners, people who have brought the power of accelerated computing to bear on the most significant scientific challenges of our time.

Among them are Rommie Amaro, of UC San Diego; Lillian Chong of the University of Pittsburgh; computational biologist Arvind Ramanathan of Argonne National Lab; and James Phillips, a senior research programmer at the University of Illinois.

For Creators and Designers

If you’re an artist, designer or game developer, accelerated computing has long been key to creative industries of all kinds — from architecture to gaming to moviemaking.

Now, with AI, accelerated computing is being woven into the latest art. With our AI Art Gallery, 16 artists will showcase creations developed with AI.

You’ll also have multiple opportunities to participate. Highlights include a live, music-making workshop with the team from Paris-based AIVA and beatboxing sessions with Japanese composer Nao Tokui.

For Entrepreneurs and Investors

If you’re looking to build a new business — or fund one — you’ll find content by the fistful. Start by loading up your calendar with our four AI Day for VC sessions on April 14.

Then browse sessions spotlighting startups in industries as diverse as healthcare, agriculture, and media and entertainment. Sessions will also touch on regions around the world, including Korean startups driving the self-driving car revolution, Taiwanese healthcare startups and Indian AI startups.

For Networking

While this conference may be virtual, GTC still offers plenty of networking. To connect attendees and speakers from a wide array of backgrounds, we’re continuing our longstanding “Dinner with Strangers” tradition. Attendees will have the opportunity to sit down, over Zoom, with others from their industry.

NVIDIA employee resource communities will host events including Growth for Women in Tech, the Queer in AI Mixer, the Black in AI Mixer and the LatinX in AI Mixer. We’re also launching “AI: Making (X) Better,” a series of talks featuring NVIDIA leaders from underrepresented communities who will discuss their path to AI.

Enough About Us, Make GTC About You

GTC offers an opportunity to engage with groundbreaking technologies like AI-accelerated data centers, deep learning for scientific discoveries, healthcare breakthroughs, next-generation collaboration and more.

Our advice? Register now, it’s free. Block off time in your calendar for the keynote April 12. Then hit the search bar on the conference page and look for content related to what you do — and what interests you.

Suddenly, the conference that’s all about accelerating everything is all about accelerating you.

The post We Won’t See You There: Why Our Virtual GTC’s Bigger Than Ever appeared first on The Official NVIDIA Blog.

Parsing Petabytes, SpaceML Taps Satellite Images to Help Model Wildfire Risks

When freak lightning ignited massive wildfires across Northern California last year, it also sparked efforts from data scientists to improve predictions for blazes.

One effort came from SpaceML, an initiative of the Frontier Development Lab, which is an AI research lab for NASA in partnership with the SETI Institute. Dedicated to open-source research, the SpaceML developer community is creating image recognition models to help advance the study of natural disaster risks, including wildfires.

SpaceML uses accelerated computing on petabytes of data for the study of Earth and space sciences, with the goal of advancing projects for NASA researchers. It brings together data scientists and volunteer citizen scientists on projects that tap the NASA Earth Observing System Data and Information System. That system has recorded images of Earth’s entire surface — 197 million square miles — daily for over 20 years, yielding 40 petabytes of unlabeled data.

“We are lucky to be living in an age where such an unprecedented amount of data is available. It’s like a gold mine, and all we need to build are the shovels to tap its full potential,” said Anirudh Koul, machine learning lead and mentor at SpaceML.

Stoked to Make a Difference

Koul, whose day job is a data scientist at Pinterest, said the California wildfires damaged areas near his home last fall. The San Jose resident and avid hiker said they scorched some of his favorite hiking spots at nearby Mount Hamilton. His first impulse was to join as a volunteer firefighter, but instead he realized his biggest contribution could be through lending his data science chops.

Koul enjoys work that helps others. Before volunteering at SpaceML, he led AI and research efforts at startup Aira, which uses augmented reality glasses to describe for blind users what’s in front of them, pairing image identification with natural language processing.

Aira, a member of the NVIDIA Inception accelerator program for startups in AI and data science, was acquired last year.

Inclusive Interdisciplinary Research 

The work at SpaceML combines volunteers without backgrounds in AI with tech industry professionals as mentors on projects. Their goal is to build image classifiers from satellite imagery of Earth to spot signs of natural disasters.

Groups take on three-week projects that can examine everything from wildfires and hurricanes to floods and oil spills. They meet monthly with scientists from NASA with domain expertise in sciences for evaluations.

Contributors to SpaceML range from high school students to graduate students and beyond. The work has included participants from Nigeria, Mexico, Korea, Germany and Singapore.

SpaceML’s team members for this project include Rudy Venguswamy, Tarun Narayanan, Ajay Krishnan and Jeanessa Patterson. The mentors are Koul, Meher Kasam and Siddha Ganju, a data scientist at NVIDIA.

Assembling a SpaceML Toolkit

SpaceML provides a collection of machine learning tools. Groups use it for tasks such as self-supervised learning with SimCLR, multi-resolution image search and data labeling. Ease of use is key to the suite of tools.
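
At the heart of SimCLR-style self-supervised learning is the NT-Xent contrastive loss, which pulls embeddings of two augmented views of the same image together while pushing all other images apart. A minimal NumPy sketch of the loss (omitting the projection head and augmentation pipeline of the full method) looks like this:

```python
import numpy as np

def nt_xent(z1, z2, temperature=0.5):
    """NT-Xent loss for N positive pairs (matching rows of z1 and z2)."""
    z = np.concatenate([z1, z2])                       # 2N embeddings
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # cosine similarity
    sim = z @ z.T / temperature
    np.fill_diagonal(sim, -np.inf)                     # exclude self-pairs
    n = len(z1)
    # row i's positive is row i+n, and vice versa
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    log_prob = sim[np.arange(2 * n), pos] - np.log(np.exp(sim).sum(axis=1))
    return -log_prob.mean()
```

The loss is low when each pair of views lands close together in embedding space relative to everything else, which is what lets the model learn from unlabeled satellite imagery.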

Among their pipeline of model-building tools, SpaceML contributors rely on NVIDIA DALI for fast preprocessing of data. DALI transforms raw, unstructured data into a form that can be fed directly into the convolutional neural networks used to develop classifiers.

“Using DALI we were able to do this relatively quickly,” said Venguswamy.

Findings from SpaceML were published at the Committee on Space Research (COSPAR) so that other researchers can replicate the approach.

Classifiers for Big Data

The group developed Curator to train classifiers with a human in the loop, requiring fewer labeled examples thanks to its self-supervised learning. Curator’s interface works like Tinder, explains Koul: novices swipe left to reject candidate images for their classifiers, or swipe right on those to be used in the training pipeline.

The process allows them to quickly collect a small set of labeled images and use that against the GIBS Worldview set of the satellite images to find every image in the world that’s a match, creating a massive dataset for further scientific research.
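
The scan-the-archive step amounts to a nearest-neighbor search over learned embeddings. A minimal sketch, assuming embeddings from the self-supervised model are already computed (the `find_matches` name and the similarity threshold are illustrative, not Curator's actual interface):

```python
import numpy as np

# Sketch of the "find every matching image" step: given embedding
# vectors for a small labeled set and for a large archive (assumed
# precomputed by the self-supervised model), return indices of archive
# images whose cosine similarity to any labeled example clears a
# threshold.
def find_matches(labeled, archive, threshold=0.9):
    ln = labeled / np.linalg.norm(labeled, axis=1, keepdims=True)
    an = archive / np.linalg.norm(archive, axis=1, keepdims=True)
    sims = an @ ln.T                       # (archive, labeled) cosine sims
    return np.where(sims.max(axis=1) >= threshold)[0]

labeled = np.array([[1.0, 0.0]])
archive = np.array([[1.0, 0.01], [0.0, 1.0], [0.99, 0.1]])
print(find_matches(labeled, archive))  # indices of matching archive rows
```

At petabyte scale the same idea would run against a precomputed embedding index rather than a dense matrix product, but the matching criterion is the same.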

“The idea of this entire pipeline was that we can train a self-supervised learning model against the entire Earth, which is a lot of data,” said Venguswamy.

The CNNs run on NVIDIA GPU instances in the cloud.

To learn more about SpaceML, check out these speaker sessions at GTC 2021:

Space ML: Distributed Open-Source Research with Citizen-Scientists for Advancing Space Technology for NASA (GTC registration required to view)

Curator: A No-Code, Self-Supervised Learning and Active Labeling Tool to Create Labeled Image Datasets from Petabyte-Scale Imagery (GTC registration required to view)

The GTC keynote can be viewed on April 12 at 8:30 a.m. Pacific time and will be available for replay.

Photo credit: Emil Jarfelt, Unsplash

The post Parsing Petabytes, SpaceML Taps Satellite Images to Help Model Wildfire Risks appeared first on The Official NVIDIA Blog.
