Silicon Express Lanes: AI, GPUs Pave Fast Routes for Chip Designers

AI can design chips no human could, said Bill Dally in a virtual keynote today at the Design Automation Conference (DAC), one of the world’s largest gatherings of semiconductor engineers.

The chief scientist of NVIDIA discussed research in accelerated computing and machine learning that’s making chips smaller, faster and better.

“Our work shows you can achieve orders-of-magnitude improvements in chip design using GPU-accelerated systems. And when you add in AI, you can get superhuman results — better circuits than anyone could design by hand,” said Dally, who leads a team of more than 200 people at NVIDIA Research.

Gains Span Circuits, Boards

Dally cited improvements GPUs and AI deliver across the workflow of chip and board design. His examples spanned the layout and placement of circuits to faster ways to render images of printed-circuit boards.

In one particularly stunning example of speedups with GPUs, he pointed to research NVIDIA plans to present at a conference next year. The GATSPI tool accelerates detailed simulations of a chip’s logic by more than 1,000x compared to commercial tools running on CPUs today.

GATSPI, a GPU-accelerated simulation tool, completes work in seconds that currently takes a full day running on a CPU.

A paper at DAC this year describes how NVIDIA collaborated with Cadence Design Systems, a leading provider of EDA software, to render board-level designs using graphics techniques on NVIDIA GPUs. Their work boosted performance up to 20x for interactive operations on Cadence’s Allegro X platform, announced in June.

“Engineers used to wait for programs to respond after every edit or pan across an image — it was an awkward, frustrating way to work. But with GPUs, the flow becomes truly interactive,” said Dally, who chaired Stanford University’s computer science department before joining NVIDIA in 2009.

Reinforcement Learning Delivers Rewards

A technique called NVCell, described in a DAC session this week, uses reinforcement learning to automate the job of laying out a standard cell, a basic building block of a chip.

The approach reduces work that typically takes months for a 10-person team to an automated process that runs in a couple days. “That lets the engineering team focus on a few challenging cells that need to be designed by hand,” said Dally.

In another example of the power of reinforcement learning, NVIDIA researchers will describe at DAC a new tool called PrefixRL. It discovers how to design circuits such as adders, encoders and other custom blocks.

PrefixRL treats the design process like a game in which the high score goes to the circuit with the smallest area and power consumption.

By letting AI optimize the process, engineers get a device that’s more efficient than what’s possible with today’s tools. It’s a good example of how AI can deliver designs no human could.
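
As a rough, hypothetical illustration of that game-like framing, the sketch below uses a simple REINFORCE policy gradient to search a toy design space, rewarding candidates with small estimated area and power. The binary design encoding and the area() and power() cost models are invented for illustration; this is not NVIDIA’s PrefixRL implementation.

```python
import torch

# Hypothetical encoding: a candidate circuit is K binary design choices (e.g.,
# whether to add a given prefix node). area() and power() are toy stand-in cost
# models, not a real circuit evaluator.
K = 12

def area(bits):
    # toy model: each selected node adds area
    return 1.0 + 0.3 * bits.sum()

def power(bits):
    # toy model chosen so area and power trade off and the best design is non-trivial
    return 3.5 - 0.35 * bits.sum() + 0.1 * (bits[:-1] * bits[1:]).sum()

logits = torch.zeros(K, requires_grad=True)               # policy parameters
opt = torch.optim.Adam([logits], lr=0.05)

for step in range(500):
    dist = torch.distributions.Bernoulli(logits=logits)
    bits = dist.sample()                                   # one candidate design
    reward = -(area(bits) + power(bits))                   # high score = small area + power
    loss = -dist.log_prob(bits).sum() * reward             # REINFORCE gradient estimator
    opt.zero_grad()
    loss.backward()
    opt.step()

print("preferred design:", (torch.sigmoid(logits) > 0.5).int().tolist())
```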

Leveraging AI’s Tool Box

NVIDIA worked with the University of Texas at Austin on a research project called DREAMPlace that made novel use of PyTorch, a popular software framework for deep learning. It adapted the framework used to optimize weights in a neural network to find the best spot to place a block with 10 million cells inside a larger chip.

It’s a routine job that currently takes nearly four hours using today’s state-of-the-art techniques on CPUs. Running on NVIDIA Volta architecture GPUs in a data center or cloud service, it can finish in as little as five minutes, a 43x speedup.
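
The sketch below illustrates the general idea rather than DREAMPlace itself: treat cell coordinates as the parameters a PyTorch optimizer updates, and minimize a differentiable wirelength proxy on the GPU the same way a training loop minimizes a loss. The random netlist, the log-sum-exp wirelength approximation and the boundary penalty are illustrative assumptions, not the tool’s actual objective.

```python
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
torch.manual_seed(0)

# Toy netlist: 1,000 cells, 1,500 nets with 4 random pins each (placeholder data).
num_cells, num_nets = 1000, 1500
nets = torch.randint(0, num_cells, (num_nets, 4), device=device)
pos = torch.rand(num_cells, 2, device=device, requires_grad=True)   # (x, y) in [0, 1]^2
opt = torch.optim.Adam([pos], lr=0.01)

def smooth_hpwl(p, gamma=0.05):
    """Log-sum-exp approximation of half-perimeter wirelength (differentiable)."""
    pins = p[nets]                                      # (num_nets, pins_per_net, 2)
    upper = gamma * torch.logsumexp(pins / gamma, dim=1)
    lower = -gamma * torch.logsumexp(-pins / gamma, dim=1)
    return (upper - lower).sum()

for step in range(200):
    # Wirelength proxy plus a penalty that keeps cells on the unit "die."
    loss = smooth_hpwl(pos) + 10.0 * ((pos - pos.clamp(0, 1)) ** 2).sum()
    opt.zero_grad()
    loss.backward()
    opt.step()

print("approximate wirelength:", smooth_hpwl(pos).item())
```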

Getting a Clearer Image Faster

To make a chip, engineers use a lithography machine to project their design onto a semiconductor wafer. To make sure the chip performs as expected, they must accurately simulate that image, a critical challenge.

NVIDIA researchers created a neural network that understands the optical process. It simulated the image on the wafer 80x faster and with higher accuracy, using a 20x smaller model than current state-of-the-art machine learning methods.
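
The sketch below shows the general surrogate-model pattern under toy assumptions: a small convolutional network learns to map a mask layout to the image printed on the wafer, trained against a stand-in target (here just a blur in place of a physics-based optical simulation). The architecture and data are placeholders, not the network the researchers built.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LithoSurrogate(nn.Module):
    """Tiny CNN that predicts a wafer-image intensity map from a binary mask."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, mask):
        return self.net(mask)

model = LithoSurrogate()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Toy training pair: random masks and a blurred "ground truth" standing in for the
# output of a rigorous optical simulation.
masks = (torch.rand(8, 1, 64, 64) > 0.5).float()
targets = F.avg_pool2d(masks, kernel_size=5, stride=1, padding=2)

for step in range(100):
    loss = F.mse_loss(model(masks), targets)
    opt.zero_grad()
    loss.backward()
    opt.step()
```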

It’s one more example of how the combination of accelerated computing and AI is helping engineers design better chips faster.

An AI-Powered Future

“NVIDIA used some of these techniques to make our existing GPUs, and we plan to use more of them in the future,” Dally said.

“I expect tomorrow’s standard EDA tools will be AI-powered to make the chip designer’s job easier and their results better than ever,” he said.

To watch Dally’s keynote, register for a complimentary pass to DAC using the code ILOVEDAC, then view the talk here.


Majority Report: 2022 Predictions on How AI Will Impact Global Industries

There’s an old axiom that the best businesses thrive during periods of uncertainty. No doubt, that will be tested to the limits as 2022 portends upheaval on a grand scale.

Pandemic-related supply chain disruptions are affecting everything from production of cars and electronics to toys and toilet paper. At the same time, global food prices have jumped to their highest level in more than a decade as worker shortages, factory closures and high commodity prices shred plans at even the most sophisticated forecasting and logistics operations.

Last year, we asked some of our top experts at NVIDIA what 2021 would bring for the world of AI and accelerated computing. They predicted each would move from planning to production as businesses seek new avenues for product forecasting, supply chain management and scientific research.

Headlines over the course of the year proved them correct: To save Christmas, retailers Home Depot, Target and Walmart chartered their own cargo ships to deliver goods to their stores around the world. To speed time to market, BMW, Ericsson and other companies began using digital twin technologies to simulate real-world environments.

AI adoption isn’t limited to big names. Indeed, a midyear 2021 PwC survey of more than 1,000 businesses across nine sectors including banking, health and energy found that 86 percent of them were poised to make AI a “mainstream technology.”

This year, we went back to our experts at NVIDIA and asked them where enterprises will focus their AI efforts as they parse big data and look for new revenue opportunities.

Here’s what they had to say:

BRYAN CATANZARO
Vice President of Applied Deep Learning Research

Conversational AI: Last year, I predicted conversational AI would be used to make video games more immersive by allowing real-time interaction to flesh out character-driven approaches. This year, conversational AI is all work and no play.

Companies will race to deploy new conversational AI tools that allow us to work more efficiently and effectively using natural language processing. Speech synthesis is poised to become just as emotive and persuasive as the human voice in 2022, which will help industries like retail, banking and healthcare better understand and better serve their customers.

Know Your Customer: Moving beyond natural language processing, companies using both speech and text to interact with other businesses and customers will employ AI to understand the context or sentiment of what a person is saying. Is the customer frustrated? Is your boss being sarcastic? The adoption of tools like GitHub Copilot, built with OpenAI technology, which helps programmers be more effective at their work, will accelerate.

SARAH TARIQ
Vice President of Automotive

Programmable Cars: The days of a car losing value once you drive it off the lot will soon be gone. We’ll see more automakers moving to reinvent the driving experience by creating software-defined architectures with headroom to support new applications and services via automatic over-the-air updates. Vehicles will get better and safer over time.

De-Stressing the Commute: The move to a software-defined approach also will help remove the stress and hassle of everyday driving. AI assistants will serve as your personal concierge, enhancing the vehicle journey for a safer, more convenient and enjoyable experience. Vehicle occupants will have access to intelligent services that are always on, allowing them to use real-time conversational AI for recommendations, alerts, vehicle controls and more.

Designing for the Long Haul: Automakers will begin to invest heavily in the use of simulation and digital twins to validate more of the end-to-end stack, and in training of deep neural network models. AI and data analytics will help train and validate self-driving cars for a broad range of driving conditions, delivering everyday safety that’s designed for the long haul.

REV LEBAREDIAN
Vice President of Simulation Technology, Omniverse Engineering

Emerging Standard for 3D: We’ll see advancing 3D standards for describing virtual worlds. Building accurate and rich digital counterparts to everything in the real world is one of the grandest challenges in computer science. Developers, enterprises and individual users will contribute to foundational open standards — analogous to the early days of the internet and the web.  Standards such as Universal Scene Description (USD) and glTF will rapidly evolve to meet the foundational needs of Web3 and digital twins.

Synthetic 3D Data for the Next Era of AI: The rate of innovation in AI has been accelerating for the better part of a decade, but AI cannot advance without large amounts of high-quality, diverse data. Today, data captured from the real world and labeled by humans is insufficient, in both quality and diversity, to jump to the next level of artificial intelligence. In 2022, we will see an explosion in synthetic data generated from virtual worlds by physically accurate world simulators to train advanced neural networks.

Re-Imagining Industry Through Simulation: Many industries are starting to examine and adopt digital twins and virtual worlds, thanks to the potential for operational efficiencies and cost savings. Everything we build in the real world — airplanes, cars, factories, bridges, cities and even Earth itself — must have a digital counterpart in the virtual world. Applying high-fidelity simulations to digital twins allows us to experience, test and optimize complex designs well before we commit to building them in the real world.

KIMBERLY POWELL
Vice President & General Manager of Healthcare

AI Generates Million-X Drug Discovery: The simultaneous breakthroughs of AlphaFold and RoseTTAFold, which created a thousandfold explosion in known protein structures, and AI models that can generate a thousand times more potential chemical compounds have increased the opportunity to discover drugs a millionfold. Molecular simulations help to model target and drug interactions completely in silico. To keep up with the million-x opportunity, AI is helping to introduce a new class of molecular simulations, from system size and timescale to quantum accuracy.

AI Creates SaaS Medical Devices: The medical device industry has a game-changing opportunity, enabled by AI, to reduce costs, to automate and increase accessibility, and to continuously deliver innovation over the life of the product. Medical device companies will evolve from delivering hardware to providing software-as-a-service systems that can be upgraded remotely to keep devices usable after deployment.

AI 2.0 With Federated Learning: To help AI application developers industrialize their AI technology and expand the application’s business benefit, AI must be trained and validated on data that resides outside the possession of their group, institution and geography. Federated learning is the key to collaboratively building robust AI models and validating models in the wild without sharing sensitive data.

ANIMA ANANDKUMAR
Director of ML Research, and Bren Professor at Caltech

AI4Science: This area will continue to mature significantly and yield real-world impact. AI will deeply integrate with HPC at supercomputing scale and make scientific simulations and modeling possible at an unprecedented scale and fidelity in areas such as weather and climate models.

AI will lead to breakthroughs in discovery of new drugs and treatments and revolutionize healthcare. Federated learning and differential privacy will be widely adopted, making healthcare and other sensitive data-sharing seamless.

Algorithmic Development: Expect massive advancements in the algorithmic development that underlies simulations, as well as the capabilities of GPUs to handle reinforcement learning at scale.

RONNIE VASISHTA
Senior Vice President of Telecoms

AI Moves to the Telco Edge: The promise of 5G will open new opportunities for edge computing. Key benefits will include network slicing that allows customers to assign dedicated bandwidth to specific applications, ultra-low latency in non-wired environments, as well as improved security and isolation.

AI-on-5G will unlock new edge AI use cases. These include “Industry 4.0” use cases such as plant automation, factory robots, monitoring and inspection; automotive systems like toll road and vehicle telemetry applications; as well as smart spaces in retail, cities and supply chain applications.

Convergence of AI and OT Solutions: New edge AI applications are driving the growth of intelligent spaces, including the intelligent factory. These factories use cameras and other sensors for inspection and predictive maintenance. However, detection is just step one; once detected, action must be taken. This requires a connection between the AI application doing the inference and the monitoring-and-control, or OT, systems that manage the assembly lines, robotic arms or pick-and-place machines.

Today, integration between these two applications relies on custom development work. This year, expect to see more integration of AI and traditional OT management solutions that simplify the adoption of edge AI in industrial environments.

AZITA MARTIN
Vice President & General Manager of Artificial Intelligence for Retail and Consumer Products Group

AI Addresses Labor Shortages: Amid a shortage of labor and increased customer demand for faster service, quick-service restaurants will employ AI for automated order taking. Thanks to advancements in natural language understanding and speech, combined with recommendation systems, fast food restaurants will roll out automated order taking to speed drive-through times and improve recommendations. In supermarkets and big-box stores, retailers will increase their use of intelligent video analytics and computer vision to create automated checkouts and autonomous or cashier-less shopping.

Enterprises Tap AI to Optimize Logistics: AI’s greatest power is found in simplifying incredibly complex problems. Supply chain optimization will become a critical area for retailers to meet customer demands for product availability and faster delivery. AI can enable more frequent and more accurate forecasting, ensuring the right product is at the right store at the right time.

Computer vision and robotics will add AI intelligence to distribution centers. Solutions like autonomous forklifts, robots and intelligent multi-shuttle cabinets will reduce conveyor starvation and downtime and automate pick-and-pack of items to double throughput. Last-mile delivery will leverage data science for dynamic rerouting, simulations and sub-second solver response times.

Becoming One With the Customer: Retailers sit on massive amounts of data but often have trouble processing it in real time. AI lets retailers parse that data in near real time for a 360-degree view of their customers, so they can provide more personalized offers and recommendations that drive revenue and customer satisfaction. In 2022, you’ll see many retailers offering hyper-personalized shopping experiences.

KEVIN LEVITT
Director of Industry and Business Development for Financial Services

Your Voice Is Your ID: Financial institutions will invest heavily in AI to fight fraud and adhere to compliance regulations such as KYC (Know Your Customer) and AML (Anti-Money Laundering). Some are using a customer’s unique voice to authenticate online transactions, while others are turning to eye biometrics for authentication.

Graph neural networks are at the forefront of the new techniques AI researchers and practitioners at financial institutions are using to understand relationships across entities and data points. They’ll become critical to enhancing fraud prevention and to mapping relationships to fight fraud more effectively.

AI for ESG: Consumers and government entities increasingly will hold enterprises accountable for their environmental, social and corporate governance (ESG) impact. Companies will invest in significant computational power to run AI models, including deep learning and natural language processing models, that analyze all the data necessary to track company performance relative to ESG. That computational power also will be needed to analyze externally available data to measure which companies are excelling or failing relative to ESG benchmarks.

CHARLIE BOYLE
Vice President & General Manager, NVIDIA DGX Systems

Enterprises Deploy Large Language Models to Advance Understanding: In 2022, we’ll see accelerated growth in adapting large language models (LLMs) to serve more industries and use cases. Trained on massive amounts of general or industry-specific data, LLMs are able to answer deep domain questions, translate languages, comprehend and summarize documents, write stories and computer programs — all without specialized training or supervision. Already, LLMs are being used to build language- and domain-specific AI chatbots and services that improve connection and communication around the world.

Enterprises’ Next Data Centers Will Belong to Someone Else: Many businesses turned away from owning their own data centers when they moved to cloud computing, so, in 2022, companies will realize it’s time to start leveraging colocation services for high-performance AI infrastructure. The ease of deployment and access to infrastructure experts who can help ensure 24/7/365 uptime will enable more enterprises to benefit from on-demand resources delivered securely, wherever and whenever they’re needed.

KEVIN DEIERLING
Senior Vice President of Networking

Data Center Is the New Unit of Computing: Applications that previously ran on a single computer don’t fit into a single box anymore. The new world of computing increasingly will be software defined and hardware accelerated. As applications become disaggregated and leverage massive data sets, the network will be seen as the fast lane between many servers acting together as a computer. Software-defined data processing units will serve as the distributed switches, load balancers, firewalls and virtualized storage devices that stitch this data-center-scale computer together.

Growing Trust in Zero Trust: As applications and devices move seamlessly between the data center and the edge, enterprises will have to validate and compose applications from microservices. Zero trust assumes that everything and everyone connected to a company system must be authenticated and monitored to verify bad actors aren’t attempting to penetrate the network. Everything has to be protected, both at the edge and on every node within the network. Data will need to be encrypted using IPsec and TLS, and every node protected with advanced routers and firewalls.

SCOTT MCCLELLAN
Senior Director of the Data Science Product Group

Accelerated Data Science Platforms Thaw Enterprise Data Lakes: Much has been written about data lakes forming the foundation for enterprise big data strategies. Enterprise data lakes are effective for large scale data processing, but their broader usefulness has been largely frozen for the past few years, as they are isolated and decoupled from machine learning training and inference pipelines. In 2022, data lakes will finally modernize through end-to-end data pipelines because of three inflection points: centralized infrastructure, the agility of Kubernetes-based applications, and best-in-class, fit-to-task storage.

Mainstream AI Adoption Triggers MLOps Growth: The world’s AI pioneers built bespoke MLOps solutions to help them manage development and production AI workflows. Many early adopters that chose a cloud-based development path have been able to delay adding MLOps expertise. Enterprises are now uncovering a gap as companies expand their use of AI and bring their accelerated infrastructure on-prem. Addressing this need will trigger broad adoption of MLOps solutions in the year ahead. 

Entering the Age of Hyper-Accelerated AI Adoption

There’s no doubt the continuing pandemic has created an era of accelerated invention and reinvention for many businesses and scientific organizations. The goal is to create short-term measures that meet the needs of the day while building for long-term gains and radical change.

Will 2022 be another year of living dangerously, or smoother sailing for those businesses that tackle the uncertainty with a firmer embrace of AI?


Omniverse Creator Takes Viewers Down an Artistic Time Tunnel in OmniRacer Video

Movies like The Matrix and The Lord of the Rings inspired a lifelong journey in computer graphics for Piotr Skiejka, a senior visual effects artist at Ubisoft.

Born in Poland and based in Singapore, Skiejka turned his childhood passion for motion design — as well as compositing, lighting and rendering effects — into a career.

He was recently named a winner of the #CreateYourRetroverse contest, in which creators using NVIDIA Omniverse shared scenes that flash back to when and where their love for graphics began.

Skiejka uses the Omniverse real-time collaboration and simulation platform for his 3D workflows when designing animation, film and video game scenes.

Over the past 13 years, he’s worked on the visual effects for nearly two dozen film and TV credits, including Marvel’s Avengers and an episode of Game of Thrones, as well as several commercials and video games.

“I enjoy learning new workflows and expanding my creative knowledge every day, especially in such an evolving field,” he said.

A Lord of the Rende(rings)

Skiejka’s creative process begins with collecting references and creating a board full of pictures. Then, he blocks scenes in Omniverse, using simple shapes and meshes to fill the space before taking a first pass at lighting.

After experimenting with a scene’s different layers, Skiejka replaces his blocked prototypes with high-resolution assets — perfecting the lighting and tweaking material textures as final touches.

Watch a stunning example of his creative process in this video, which was recognized in the NVIDIA Omniverse #CreateYourRetroverse contest:

According to Skiejka, the main challenges he previously faced were long rendering times and slow software feedback when lighting and shading.

“Now, with NVIDIA RTX technology, render time is greatly decreased, and visual feedback occurs in real time,” he said. “The Omniverse Kit framework and the Omniverse Nucleus server are superb game-changers and perfect additions to my workflow.”

Skiejka’s favorite feature of the platform, however, is the Omniverse Create scene composition application. He said it’s “packed with valuable extensions, like PhysX and Flow,” which he used while designing the retroverse scene above.

“I hope I showed the spirit of childhood in my #CreateYourRetroverse video, and that this artistic time tunnel between the past and present will inspire others to showcase their experiences, too,” Skiejka said.

Learn more about NVIDIA Omniverse.


GFN Thursday: Dashing Into December With RTX 3080 Memberships and 20 New Games

With the holiday season comes many joys for GeForce NOW members.

This month, RTX 3080 membership preorders are activating in Europe.

Plus, we’ve made a list — and checked it twice. In total, 20 new games are joining the GeForce NOW library in December. This week, the list of nine games streaming on GFN Thursday includes new releases like Chorus, Icarus and Ruined King: A League of Legends Story.

The Next Generation of Cloud Gaming Arrives in Europe

The future is NOW, with RTX 3080 memberships delivering faster frame rates, lower latency and the longest session lengths.

Starting today, gamers in Europe who preordered a six-month GeForce NOW RTX 3080 membership will have their accounts enabled with the new tier of service. Rollouts for accounts will continue until all requests have been fulfilled.

A GeForce NOW RTX 3080 membership means streaming from the world’s most powerful gaming supercomputer, the GeForce NOW SuperPOD. RTX 3080 members enjoy a dedicated, high-performance cloud gaming rig, streaming at up to 1440p resolution and 120 frames per second on PCs and Macs, and 4K HDR at 60 FPS on SHIELD TV, with ultra-low latency rivaling many local gaming experiences.

Players can power up their gaming experience with a six-month RTX 3080 membership for $99.99, pending availability. The membership comes with higher resolutions, lower latency and the longest gaming session length — clocking in at eight hours — on top of the most control over in-game settings.

Enjoy the GeForce NOW library of over 1,100 games and 100 free-to-play titles with the kick of RTX 3080 streaming across your devices. For more information about RTX 3080 memberships, check out our membership FAQ.

Preorders are still available in Europe and North America.

Decked Out in December

December kicks off with 20 great games joining GeForce NOW this month, including some out-of-this-world additions.

Enter a dark new universe, teeming with mystery and conflict in Chorus. Join Nara, once the Circle’s deadliest warrior, now their most wanted fugitive, on her mission to destroy the dark cult that created her. Take her sentient ship, Forsaken, on a quest for redemption across the galaxy and beyond the boundaries of reality as they fight to unite resistance forces against the Circle.

Icarus on GeForce NOW: Meet your deadline or be left behind forever. So much for working from home.

Survive the savage alien wilderness of Icarus, a planet once destined to be a second Earth, now humanity’s greatest mistake. Drop from the safety of the orbital space station to explore, harvest, craft and hunt while seeking your fortune from exotic matter that can be found on the abandoned, deadly planet. Make sure to return to orbit before time runs out — those that get left behind are lost forever.

Need more from the world of League of Legends? Ruined King: A League of Legends Story has you covered.

Rise against ruin in Ruined King: A League of Legends Story. Unite a party of League of Legends Champions, explore the port city of Bilgewater and set sail for the Shadow Isles to uncover the secrets of the deadly Black Mist in this turn-based RPG.

Brave the far corners of space in Chorus and Icarus, and lead legends in Ruined King: A League of Legends Story alongside the nine new games ready to stream this GFN Thursday.

Releasing this week:

Also coming in December:

We make every effort to launch games on GeForce NOW as close to their release as possible, but, in some instances, games may not be available immediately.

More Fun From November

Explore a bold new era and build your park full of dinosaurs, complete with DLSS, in Jurassic World Evolution 2.

In addition to the 17 games announced to arrive in November, members can check out the following extra games that made it to the cloud last month:

Unfortunately, a few games that we planned to release last month did not make it:

  • Bakery Simulator (Steam), new launch date
  • STORY OF SEASONS: Pioneers of Olive Town (Steam), coming to GeForce NOW in the near future

With these new additions arriving just in time for the holidays, we’ve got a question for you about your gaming wish list:

your holiday wish list, but there’s only room for 1 game you can stream in the cloud

which one would it be? 🎁

🌩 NVIDIA GeForce NOW (@NVIDIAGFN) December 1, 2021

Share your answers on Twitter or in the comments below.


Fotokite’s Autonomous Drone Gives Firefighters an Eye in the Sky

First responders don’t have time on their side.

Whether fires, search-and-rescue missions or medical emergencies, their challenges are usually dangerous and time-sensitive.

Using autonomous technology, Zurich-based Fotokite is developing a system to help first responders save lives and increase public safety.

Fotokite Sigma is a fully autonomous tethered drone, built with the NVIDIA Jetson platform, that drastically improves the situational awareness for first responders, who would otherwise have to rely on manned helicopters to get an aerial perspective.

Tethered to a ground station, either in a transportable case or attached to an emergency vehicle, Fotokite Sigma requires no skilled drone pilot, taking no one away from the scene.

Supported by the compute power of the NVIDIA Jetson platform in the ground station, Fotokite Sigma covers the vast majority of situations where first responders need an aerial perspective during an emergency. Whether it’s an aerial search for someone off the side of a road, a quick look at a rooftop for hotspots or getting eyes above an active fire to track progress and plan resources, Sigma employs computer vision to send information directly to a tablet, with photogrammetry capabilities and real-time situational awareness.

Fotokite is a member of NVIDIA Inception, a program that offers go-to-market support, expertise and technology assistance for startups working in AI, data science and high performance computing.

Fighting Fire With Data

Firefighters depend on accurate, timely information to help them make important situational decisions.

Fotokite Sigma’s thermal camera can determine where a fire is, as well as where the safest location to enter or exit a structure would be. It can highlight hotspots that need attention and guide firefighters on whether their water is hitting the right areas, even through heavy smoke or with limited visibility at night.

Once the fire is under control, Sigma can monitor the area for potential flare-ups, so firefighters can prioritize resources to act quickly and efficiently.

“Everything from autonomous flight and real-time data delivery to the user interface and real-time streaming is made as simple as pushing a button, which means first responders can focus on saving lives and keeping people safe,” said Chris McCall, CEO of Fotokite.

Fire departments across the U.S. and Europe are using Fotokite Sigma, in both major cities and rural areas.

“The next area of focus for us is increasing the situational awareness and decision-making power in an emergency situation,” said McCall. “Using NVIDIA technology, we can easily introduce new capabilities to our systems.”

In addition to rolling out availability of Sigma across more geographies, Fotokite is working with partners to deliver data in real time, something that might have previously taken several hours to accomplish.

Providing a 3D render of an active emergency situation, tracking first responders, and supplying other intelligent data layers, for example, could be invaluable to first responders, helping them visualize a scene as it unfolds.

Learn more about how NVIDIA partners Lockheed Martin and OroraTech are using accelerated computing technology to fight wildfires.  

Learn more about NVIDIA Inception and the NVIDIA Jetson platform. Watch public sector sessions from GTC on demand. 


Cloud Service, OEMs Raise the Bar on AI Training with NVIDIA AI

Look who just set new speed records for training AI models fast: Dell Technologies, Inspur, Supermicro and — in its debut on the MLPerf benchmarks — Azure, all using NVIDIA AI.

Our platform set records across all eight popular workloads in the MLPerf training 1.1 results announced today.

MLPerf 1.1 training results at scale: NVIDIA AI trained all models faster than any alternative in the latest round.

NVIDIA A100 Tensor Core GPUs delivered the best normalized per-chip performance. They scaled with NVIDIA InfiniBand networking and our software stack to deliver the fastest time to train on Selene, our in-house AI supercomputer based on the modular NVIDIA DGX SuperPOD.

NVIDIA A100 GPUs delivered the best per-chip training performance in all eight MLPerf 1.1 tests.

A Cloud Sails to the Top

When it comes to training AI models, Azure’s NDm A100 v4 instance is the fastest on the planet, according to the latest results. It ran every test in the latest round and scaled up to 2,048 A100 GPUs.

Azure showed not only great performance, but great performance that’s available for anyone to rent and use today, in six regions across the U.S.

AI training is a big job that requires big iron. And we want users to train models at record speed with the service or system of their choice.

That’s why we’re enabling NVIDIA AI with products for cloud services, co-location services, corporations and scientific computing centers, too.

Server Makers Flex Their Muscles

Among OEMs, Inspur set the most records in single-node performance with its eight-way GPU systems, the NF5688M6 and the liquid-cooled NF5488A5. Dell and Supermicro set records on four-way A100 GPU systems.

A total of 10 NVIDIA partners submitted results in the round, eight OEMs and two cloud-service providers. They made up more than 90 percent of all submissions.

This is the fifth and strongest showing to date for the NVIDIA ecosystem in training tests from MLPerf.

Our partners do this work because they know MLPerf is the only industry-standard, peer-reviewed benchmark for AI training and inference. It’s a valuable tool for customers evaluating AI platforms and vendors.

Servers Certified for Speed

Baidu PaddlePaddle, Dell Technologies, Fujitsu, GIGABYTE, Hewlett Packard Enterprise, Inspur, Lenovo and Supermicro submitted results in local data centers, running jobs on both single and multiple nodes.

Nearly all our OEM partners ran tests on NVIDIA-Certified Systems, servers we validate for enterprise customers who want accelerated computing.

The range of submissions shows the breadth and maturity of an NVIDIA platform that provides optimal solutions for businesses working at any scale.

Both Fast and Flexible

NVIDIA AI was the only platform participants used to make submissions across all benchmarks and use cases, demonstrating versatility as well as high performance. Systems that are both fast and flexible provide the productivity customers need to speed their work.

The training benchmarks cover eight of today’s most popular AI workloads and scenarios — computer vision, natural language processing, recommendation systems, reinforcement learning and more.

MLPerf’s tests are transparent and objective, so users can rely on the results to make informed buying decisions. The industry benchmarking group, formed in May 2018, is backed by dozens of industry leaders including Alibaba, Arm, Google, Intel and NVIDIA.

20x Speedups in Three Years

Looking back, the numbers show performance gains on our A100 GPUs of over 5x in just the last 18 months. That’s thanks to continuous innovations in software, the lion’s share of our work these days.

NVIDIA’s performance has increased more than 20x since the MLPerf tests debuted three years ago. That massive speedup is a result of the advances we make across our full-stack offering of GPUs, networks, systems and software.

NVIDIA AI delivers more than 20x improvements over three years.

Constantly Improving Software

Our latest advances came from multiple software improvements.

For example, using a new class of memory copy operations, we achieved 2.5x faster operations on the 3D-UNet benchmark for medical imaging.

Thanks to ways you can fine-tune GPUs for parallel processing, we realized a 10 percent speedup on the Mask R-CNN test for object detection and a 27 percent boost for recommender systems. We simply overlapped independent operations, a technique that’s especially powerful for jobs that run across many GPUs.
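
As a hedged illustration of overlapping independent operations (not the actual MLPerf Mask R-CNN code), the PyTorch sketch below launches two unrelated matrix multiplications on separate CUDA streams so they can execute concurrently, then synchronizes before combining the results. It assumes a CUDA-capable GPU.

```python
import torch

device = "cuda"
a = torch.randn(4096, 4096, device=device)
b = torch.randn(4096, 4096, device=device)

s1, s2 = torch.cuda.Stream(), torch.cuda.Stream()

# Make sure the side streams see the tensors created on the default stream.
s1.wait_stream(torch.cuda.current_stream())
s2.wait_stream(torch.cuda.current_stream())

with torch.cuda.stream(s1):
    out1 = a @ a            # independent operation 1
with torch.cuda.stream(s2):
    out2 = b @ b            # independent operation 2

# The default stream waits for both before consuming the results.
torch.cuda.current_stream().wait_stream(s1)
torch.cuda.current_stream().wait_stream(s2)
result = out1 + out2
torch.cuda.synchronize()
```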

We expanded our use of CUDA graphs to minimize communication with the host CPU. That brought a 6 percent performance gain on the ResNet-50 benchmark for image classification.
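
The sketch below shows the general CUDA graphs pattern in PyTorch on a toy model: capture a fixed sequence of GPU work once, then replay it with fresh data to avoid repeated CPU launch overhead. It assumes PyTorch 1.10 or later and a CUDA GPU, and is not the MLPerf ResNet-50 code.

```python
import torch

device = "cuda"
model = torch.nn.Linear(1024, 1024).to(device)
static_in = torch.randn(64, 1024, device=device)

# Warm up on a side stream before capture (recommended for graph capture).
s = torch.cuda.Stream()
s.wait_stream(torch.cuda.current_stream())
with torch.cuda.stream(s):
    for _ in range(3):
        model(static_in)
torch.cuda.current_stream().wait_stream(s)

# Capture one forward pass into a graph.
g = torch.cuda.CUDAGraph()
with torch.cuda.graph(g):
    static_out = model(static_in)

# Replay: copy new data into the captured input tensor, then launch the whole graph.
for _ in range(10):
    static_in.copy_(torch.randn(64, 1024, device=device))
    g.replay()
```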

And we implemented two new techniques on NCCL, our library that optimizes communications among GPUs. That accelerated results up to 5 percent on large language models like BERT.

Leverage Our Hard Work

All the software we used is available from the MLPerf repository, so everyone can get our world-class results. We continuously fold these optimizations into containers available on NGC, our software hub for GPU applications.

It’s part of a full-stack platform, proven in the latest industry benchmarks, and available from a variety of partners to tackle real AI jobs today.


Real or Not Real? Attorney Steven Frank Uses Deep Learning to Authenticate Art

Leonardo da Vinci’s portrait of Jesus, known as Salvator Mundi, was sold at auction for nearly half a billion dollars in 2017, making it the most expensive painting ever to change hands.

However, even art history experts were skeptical about whether the work was an original by the master rather than the work of one of his many protégés.

Steven Frank is a partner at the law firm Morgan Lewis, specializing in intellectual property and commercial technology law. He’s also half of the husband-wife team that used convolutional neural networks to determine that this painting was likely an authentic da Vinci.

He spoke with NVIDIA AI Podcast host Noah Kravitz about working with his wife, Andrea Frank, a professional curator of art images, to authenticate artistic masterpieces with AI’s help.

Key Points From This Episode:

  • Authenticating art is a great challenge, as the characteristics that distinguish one artist’s work from another’s are very subtle. Determining if a piece is authentic requires an extremely fine-grained analysis of a painting’s details.
  • Using large datasets, the Franks trained convolutional neural networks to examine small, manageable segments of masterpieces to analyze and classify their artists’ patterns, down to their brush strokes (see the sketch after this list). The model determined that the Salvator Mundi painting sold five years ago is likely the real work of da Vinci.
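
Below is a minimal, hypothetical sketch of that patch-based approach: tile a painting into small patches, classify each patch with a small convolutional network, and pool the patch-level predictions into a score for the whole work. The architecture, patch size and pooling rule are assumptions for illustration, not the Franks’ actual model.

```python
import torch
import torch.nn as nn

class PatchClassifier(nn.Module):
    """Tiny CNN that scores a 64x64 patch against a set of candidate artists."""
    def __init__(self, num_artists=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, num_artists)

    def forward(self, patches):                       # (N, 3, 64, 64)
        return self.head(self.features(patches).flatten(1))

def authenticate(model, painting, patch=64):
    """Tile the painting, classify every tile, and average the probabilities."""
    tiles = painting.unfold(1, patch, patch).unfold(2, patch, patch)
    tiles = tiles.permute(1, 2, 0, 3, 4).reshape(-1, 3, patch, patch)
    with torch.no_grad():
        probs = torch.softmax(model(tiles), dim=1)
    return probs.mean(dim=0)                          # per-artist score for the whole work

model = PatchClassifier()                             # would be trained on labeled patches
painting = torch.rand(3, 512, 512)                    # stands in for a high-resolution scan
print(authenticate(model, painting))
```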

Tweetables:

AI might sometimes “be wrong, but it will always be objective, if you train it properly.” — Steven Frank [10:48]

“The most fascinating thing about AI research these days is that you can do cutting-edge AI research on an inexpensive PC … as long as it has an NVIDIA GPU.” — Steven Frank [22:43]

You Might Also Like:

Researchers Chris Downum and Leszek Pawlowicz Use Deep Learning to Accelerate Archaeology

Researchers in the Department of Anthropology at Northern Arizona University are using GPU-based deep learning algorithms to categorize sherds — tiny fragments of ancient pottery.

Wild Things: NVIDIA’s Sifei Liu Talks 3D Reconstructions of Endangered Species

Endangered species can be challenging to study, as they are elusive and the very act of observing them can disrupt their lives. Now, scientists can take a closer look at endangered species by studying AI-generated 3D representations of them.

Metaspectral’s Migel Tissera on AI-Based Data Management

Moondust, minerals and soil types are some of the materials that can be quickly identified and analyzed with AI, based on their images. Migel Tissera is co-founder and CTO of Metaspectral, a Vancouver-based startup that provides an AI-based data management and analysis platform for ultra-high-resolution images.

Subscribe to the AI Podcast

Get the AI Podcast through iTunes, Google Podcasts, Google Play, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn. If your favorite isn’t listed here, drop us a note.

Make the AI Podcast Better

Have a few minutes to spare? Fill out this listener survey. Your answers will help us make a better podcast.


If I Had a Hammer: Purdue’s Anvil Supercomputer Will See Use All Over the Land

Carol Song is opening a door for researchers to advance science on Anvil, Purdue University’s new AI-ready supercomputer, an opportunity she couldn’t have imagined as a teenager in China.

“I grew up in a tumultuous time when, unless you had unusual circumstances, the only option for high school grads was to work alongside farmers or factory workers, then suddenly I was told I could go to college,” said Song, now the project director of Anvil.

And not just any college. Her scores on a national entrance exam opened the door to Tsinghua University, home to China’s most prestigious engineering school.

Along the way, someone told her computers would be big, so she signed up for computer science before she had ever seen a computer. She learned soon enough.

“We were building hardware from the ground up, designing microinstructions and logic circuits, so I got to understand computers from the inside out,” she said.

Easing Access to Supercomputers

Skip forward a few years to grad school at the University of Illinois when another big door opened.

While working in distributed systems, she was hired as one of the first programmers at the National Center for Supercomputing Applications,  one of the first sites in a U.S. program funding supercomputers that researchers shared.

To make the systems more accessible, she helped develop alternatives to the crude editing tools of the day that displayed one line of a program at a time. And she helped pioneering researchers like Michael Norman create visualizations of their work.

GPUs Add AI to HPC

In 2005, she joined Purdue, where she has helped manage nearly three dozen research projects representing more than $60 million in grants as a senior research scientist in the university’s supercomputing center.

“All that helped when we started defining Anvil. I see researchers’ pain points when they are getting on a new system,” said Song.

Anvil links 1,000 Dell EMC PowerEdge C6525 server nodes with 2,000 of the latest AMD x86 CPUs and 64 NVIDIA A100 Tensor Core GPUs on an NVIDIA Quantum InfiniBand network to handle traditional HPC and new AI workloads.

The system, built by Dell Technologies, will deliver 5.3 petaflops of performance and half a million GPU hours per year to tens of thousands of researchers across the U.S. working on the National Science Foundation’s XSEDE network.

Anvil Forges Desktop, Cloud Links

To harness that power, Anvil supports interactive user interfaces as well as the batch jobs that are traditional in high performance computing.

“Researchers can use their favorite tools like Jupyter notebooks and remote desktop interfaces so the cluster can look just like their daily work environment,” she said.

Anvil will also support links to Microsoft Azure, so researchers can access its large datasets and commercial cloud-computing muscle. “It’s an innovative part of this system that will let researchers experiment with creating workflows that span research and commercial environments,” Song said.

Fighting COVID, Exploring AI

More than 30 research teams have already signed up to be early users of Anvil.

One team will apply deep learning to medical images to improve diagnosis of respiratory diseases including COVID-19. Another will build causal and logical check points into neural networks to explore why deep learning delivers excellent results.

“We’ll support a lot of GPU-specific tools like NGC containers for accelerated applications, and as with every new system, users can ask for additional toolkits and libraries they want,” she said.

The Anvil team aims to invite industry collaborations to test new ideas using up to 10 percent of the system’s capacity. “It’s a discretionary use we want to apply strategically to enable projects that wouldn’t happen without such resources,” she said.

Opening Doors for Science and Inclusion

Early users are working on Anvil today and the system will be available for all users in about a month.

Anvil’s opening day has a special significance for Song, one of the few women to act as a lead manager for a national supercomputer site.

Carol Song and Purdue’s Anvil supercomputer

“I’ve been fortunate to be in environments where I’ve always been encouraged to do my best and given opportunities,” she said.

“Around the industry and the research computing community there still aren’t a lot of women in leadership roles, so it’s an ongoing effort and there’s a lot of room to do better, but I’m also very enthusiastic about mentoring women to help them get into this field,” she added.

Purdue’s research computing group shares Song’s enthusiasm about getting women into supercomputing. It’s home to one of the first chapters of the international Women in High-Performance Computing organization.

Purdue’s Women in HPC chapter sent an all-female team to a student cluster competition at SC18. It also hosts outside speakers, provides travel support to attend conferences and connects students and early career professionals to experienced mentors like Song.

Pictured at top: Carol Song, Anvil’s principal investigator (PI) and project director along with Anvil co-PIs (from left) Rajesh Kalyanam, Xiao Zhu and Preston Smith. 


NVIDIA AI Enterprise Helps Researchers, Hospitals Targeting Cancer Hit the Mark

Whether facilitating cancer screenings, cutting down on false positives, or improving tumor identification and treatment planning, AI is a powerful agent for healthcare innovation and acceleration.

Yet, despite its promise, integrating AI into actual solutions can challenge many IT organizations.

The Netherlands Cancer Institute (NKI), one of the world’s top-rated cancer research and treatment centers, is using the NVIDIA AI Enterprise software suite to test AI workloads on higher-precision 3D cancer scans than are commonly used today.

NKI’s AI model was previously trained on lower-resolution images. But with the higher memory capacity offered by NVIDIA AI Enterprise, its researchers could instead use high-resolution images for training. This improvement helps clinicians better target the size and location of a tumor every time a patient receives treatment.

The NVIDIA AI Enterprise suite that NKI deployed is designed to optimize the development and deployment of AI. It’s certified and supported by NVIDIA to enable hospitals, researchers and IT professionals to run AI workloads on mainstream servers with VMware vSphere in their on-prem data centers and private clouds.

Delivering treatments on virtualized infrastructure means hospitals and research institutions can use the same tools they already work with on existing applications. This helps maximize their investments while making innovations in care more affordable and accessible.

NKI used an AI model to better reconstruct a Cone Beam Computed Tomography (CBCT) thoracic image, resulting in clearer image quality compared to conventional methods.

Speeding Breakthroughs in Healthcare Research 

NKI had gotten off to a quick start with its project on NVIDIA AI Enterprise by using NVIDIA LaunchPad.

The LaunchPad program provides immediate access to optimized software running on accelerated infrastructure to help customers prototype and test data science and AI workloads. This month, the program was extended to nine Equinix locations worldwide.

The NVIDIA AI Enterprise software suite, available in LaunchPad, makes it possible to run advanced AI workloads on mainstream accelerated servers with VMware vSphere, including systems from Dell Technologies, Hewlett Packard Enterprise, Lenovo and many others.

Rhino Health, a federated learning platform powered by NVIDIA FLARE, is available today through NVIDIA AI Enterprise, making it easy for any hospital to leverage federated learning for AI development and validation. Other organizations, like the American College of Radiology’s AI LAB, are also planning to use the NVIDIA AI Enterprise software.

Researchers at NKI used NVIDIA AI Enterprise, running on the HPE Synergy, a composable software system from Hewlett Packard Enterprise, to build deep learning models by combining the massive 2D and 3D data sources and AI to pinpoint the location of tumors before each radiotherapy treatment session. 

“Doctors could use this solution as an alternative to CT scans on the day of treatment to optimize and validate the radiotherapy plan,” said Jonas Teuwen, group leader at the Netherlands Cancer Institute.

Using NVIDIA AI Enterprise, Teuwen’s team in Amsterdam ran their workloads on NVIDIA A100 80GB GPUs in a server hosted in Silicon Valley. Their convolutional neural network was built in less than three months and was trained on less than 300 clinical lung CT scans that were then reconstructed and generalized to head and neck data.

In the future, NKI researchers also hope to translate this work to potential use cases in interventional radiology to repair arteries in cardiac surgeries and dental surgery implants.

Accelerating Hospital AI Deployment With NVIDIA AI Enterprise

NVIDIA AI Enterprise simplifies the AI rollout experience for organizations who host a variety of healthcare and operations applications on virtualized infrastructure. It enables IT administrators to run AI applications like Vyasa and iCAD alongside core hospital applications, streamlining the workflow in an environment they’re already familiar with.

Compute resources can be adjusted with just a few clicks, giving hospitals the ability to transform care for both patients and healthcare providers.

Vyasa, a provider of deep learning analysis tools for healthcare and life sciences, uses NVIDIA AI Enterprise to build applications that can search unstructured content such as patient care records. With the software, Vyasa can develop their deep learning applications faster and help dive through unstructured data and PDFs to assess which patients are at a higher risk. It identifies those who haven’t been in for a check-up in more than a year, and can refine for additional risk factors like age and race.

“NVIDIA AI Enterprise has reduced our deployment times by half thanks to rapid provisioning of platform requirements that eliminate the need to manually download and integrate software packages,” said Frans Lawaetz, CIO at Vyasa. 

Radiologists use iCAD’s innovative ProFound AI software to assist with reading mammograms. These AI solutions help identify cancer earlier, categorize breast density, and accurately assess short-term personalized breast cancer risk based on each woman’s screening mammogram. Running advanced workloads with VMware vSphere is important for iCAD’s healthcare customers as they can easily integrate their data intensive applications into any hospital infrastructure.

A host of other software makers, like the American College of Radiology’s AI LAB and Rhino Health, with its federated learning platform, have begun validating their software on NVIDIA AI Enterprise to ease deployment by integrating into a common healthcare IT infrastructure.

The ability for NVIDIA AI Enterprise to unify the data center for healthcare organizations has sparked the creation of an ecosystem with NVIDIA technology at its heart. The common NVIDIA and VMware infrastructure benefits software vendors and healthcare organizations alike by making the deployment and management of these solutions much easier.

For many healthcare IT and software companies, integrating AI into hospital environments is a top priority. Many NVIDIA Inception partners will be testing the ease of deploying their offerings on NVIDIA AI Enterprise in these types of environments. They include Aidence, Arterys, contextflow, ImageBiopsy Lab, InformAI, MD.ai, methinks.ai, RADLogics, Sciberia, Subtle Medical and VUNO.

NVIDIA Inception is a program that offers go-to-market support, expertise and technology for AI, data science and HPC startups.

Qualified enterprises can apply to experience NVIDIA AI Enterprise in curated, no-cost labs offered on NVIDIA LaunchPad.

Hear more about NVIDIA’s work in healthcare by tuning in to my special address on Nov. 29 at RSNA, the Radiological Society of North America’s annual meeting.

Main image shows how NVIDIA AI Enterprise allows hospital IT administrators to run AI applications alongside core hospital applications, like iCAD Profound AI Software for mammograms.


Federated Learning With FLARE: NVIDIA Brings Collaborative AI to Healthcare and Beyond

NVIDIA is making it easier than ever for researchers to harness federated learning by open-sourcing NVIDIA FLARE, a software development kit that helps distributed parties collaborate to develop more generalizable AI models.

Federated learning is a privacy-preserving technique that’s particularly beneficial in cases where data is sparse, confidential or lacks diversity. But it’s also useful for large datasets, which can be biased by an organization’s data collection methods, or by patient or customer demographics.

NVIDIA FLARE — short for Federated Learning Application Runtime Environment — is the engine underlying NVIDIA Clara Train’s federated learning software, which has been used for AI applications in medical imaging, genetic analysis, oncology and COVID-19 research. The SDK allows researchers and data scientists to adapt their existing machine learning and deep learning workflows to a distributed paradigm.

Making NVIDIA FLARE open source will better empower cutting-edge AI in almost any industry by giving researchers and platform developers more tools to customize their federated learning solutions.

With the SDK, researchers can choose among different federated learning architectures, tailoring their approach for domain-specific applications. And platform developers can use NVIDIA FLARE to provide customers with the distributed infrastructure required to build a multi-party collaboration application.

Flexible Federated Learning Workflows for Multiple Industries 

Federated learning participants work together to train or evaluate AI models without having to pool or exchange each group’s proprietary datasets. NVIDIA FLARE provides different distributed architectures that accomplish this, including peer-to-peer, cyclic and server-client approaches, among others.

Using the server-client technique, where learned model parameters from each participant are sent to a common server and aggregated into a global model, NVIDIA has led federated learning projects that help segment pancreatic tumors, classify breast density in mammograms to inform breast cancer risk, and predict oxygen needs for COVID patients.
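
As a minimal illustration of that server-client pattern, the sketch below implements plain federated averaging in PyTorch: each site trains a copy of the global model on its own local data, and only the resulting parameters are averaged on the server. This is a generic FedAvg sketch under toy assumptions, not the NVIDIA FLARE API.

```python
import copy
import torch

def local_update(global_model, data, target, lr=0.01, epochs=1):
    """One site's training step on data that never leaves the site."""
    model = copy.deepcopy(global_model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(model(data), target).backward()
        opt.step()
    return model.state_dict()

def server_aggregate(global_model, client_states):
    """Average each parameter across clients (FedAvg with equal weights)."""
    avg = copy.deepcopy(client_states[0])
    for key in avg:
        avg[key] = torch.stack([s[key].float() for s in client_states]).mean(dim=0)
    global_model.load_state_dict(avg)

global_model = torch.nn.Linear(20, 2)
# Toy private datasets standing in for each participant's local data.
sites = [(torch.randn(32, 20), torch.randint(0, 2, (32,))) for _ in range(3)]

for round_num in range(5):
    states = [local_update(global_model, x, y) for x, y in sites]
    server_aggregate(global_model, states)
```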

The server-client architecture was also used for two federated learning collaborations using NVIDIA FLARE: NVIDIA worked with Roche Digital Pathology researchers to run a successful internal simulation using whole slide images for classification, and with Netherlands-based  Erasmus Medical Center for an AI application that identifies genetic variants associated with schizophrenia cases.

But not every federated learning application is suited to the server-client approach. By supporting additional architectures, NVIDIA FLARE will make federated learning accessible to a wider range of applications. Potential use cases include helping energy companies analyze seismic and wellbore data, manufacturers optimize factory operations and financial firms improve fraud detection models.

NVIDIA FLARE Integrates With Healthcare AI Platforms

NVIDIA FLARE can integrate with existing AI initiatives, including the open-source MONAI framework for medical imaging.

“Open-sourcing NVIDIA FLARE to accelerate federated learning research is especially important in the healthcare sector, where access to multi-institutional datasets is crucial, yet concerns around patient privacy can limit the ability to share data,” said Dr. Jayashree Kalapathy, associate professor of radiology at Harvard Medical School and leader of the MONAI community’s federated learning working group. “We are excited to contribute to NVIDIA FLARE and continue the integration with MONAI to push the frontiers of medical imaging research.”

NVIDIA FLARE will also be used to power federated learning solutions at: 

  • American College of Radiology (ACR): The medical society has worked with NVIDIA on federated learning studies that apply AI to radiology images for breast cancer and COVID-19 applications. It plans to distribute NVIDIA FLARE in the ACR AI-LAB, a software platform that is available to the society’s tens of thousands of members.
  • Flywheel: The company’s Flywheel Exchange platform enables users to access and share data and algorithms for biomedical research, manage federated projects for analysis and training, and choose their preferred federated learning solution — including NVIDIA FLARE.
  • Taiwan Web Service Corporation: The company offers a GPU-powered MLOps platform that enables customers to run federated learning based on NVIDIA FLARE. Five medical imaging projects are currently being conducted on the company’s private cluster, each with several participating hospitals.
  • Rhino Health: The partner and member of the NVIDIA Inception program has integrated NVIDIA FLARE into its federated learning solution, which is helping researchers at Massachusetts General Hospital develop an AI model that more accurately diagnoses brain aneurysms, and experts at the National Cancer Institute’s Early Detection Research Network develop and validate medical imaging AI models that identify early signs of pancreatic cancer.

“To collaborate effectively and efficiently, healthcare researchers need a common platform for AI development without the risk of breaching patient privacy,” said Dr. Ittai Dayan, founder of Rhino Health. “Rhino Health’s ‘Federated Learning as a Platform’ solution, built with NVIDIA FLARE, will be a useful tool to help accelerate the impact of healthcare AI.”

Get started with federated learning by downloading NVIDIA FLARE. Hear more about NVIDIA’s work in healthcare by tuning in to a special address on Nov. 29 at 6 p.m. CT by David Niewolny, director of healthcare business development at NVIDIA, at RSNA, the Radiological Society of North America’s annual meeting.

Subscribe to NVIDIA healthcare news here
