Simulation Without Limits: DRIVE Sim Levels Up with NVIDIA Omniverse

The line between the physical and virtual worlds is blurring as autonomous vehicle simulation sharpens with NVIDIA Omniverse, our photorealistic 3D simulation and collaboration platform.

During the GPU Technology Conference keynote, NVIDIA founder and CEO Jensen Huang showcased for the first time NVIDIA DRIVE Sim running on NVIDIA Omniverse. DRIVE Sim leverages the cutting-edge capabilities of the platform for end-to-end, physically accurate autonomous vehicle simulation.

Omniverse was architected from the ground up to support multi-GPU, large-scale, multisensor simulation for autonomous machines. It enables ray-traced, physically accurate, real-time sensor simulation with NVIDIA RTX technology.

The video shows a digital twin of a Mercedes-Benz EQS driving a 17-mile route around a recreated version of the NVIDIA campus in Santa Clara, Calif. It includes Highways 101 and 87 and Interstate 280, with traffic lights, on-ramps, off-ramps and merges as well as changes to the time of day, weather and traffic.

To create a faithful replica of the testing loop, the real environment was scanned at 5-cm accuracy and recreated in simulation. The hardware, software, sensors, car displays and human-machine interaction were all implemented in simulation exactly as in the real world, enabling bit- and timing-accurate simulation.

Physically Accurate Sensor Simulation 

Autonomous vehicle simulation requires accurate physics and light modeling. This is especially critical for simulating sensors, which requires modeling rays beyond the visible spectrum and accurate timing between the sensor scan and environment changes.

Ray tracing is perfectly suited for this, providing realistic lighting by simulating the physical properties of light. And the Omniverse RTX renderer coupled with NVIDIA RTX GPUs enables ray tracing at real-time frame rates.

The capability to simulate light in real time has significant benefits for autonomous vehicle simulation. In the video, the vehicles show complex reflections of objects in the scene, including objects not directly in the frame, just as they would in the real world. This also applies to other reflective surfaces such as wet roadways, reflective signs and buildings.

The Mercedes EQS shows the complexity of reflections enabled with ray tracing, including reflections of objects that are in the scene, but not in the frame.

RTX also enables high-fidelity shadows. Typically in virtual environments, shadows are pre-computed or pre-baked. However, to provide a dynamic environment for simulation, pre-baking isn’t possible. RTX enables high-fidelity shadows to be computed at run-time. In the night parking example from the video, the shadows from the lights are rendered directly instead of being pre-baked. This leads to shadows that appear softer and are much more accurate.

Nighttime parking scenarios show the benefit of ray tracing for complex shadows generated by dynamic light sources.

Universal Scene Description

DRIVE Sim is based on Universal Scene Description, an open framework developed by Pixar to build and collaborate on 3D content for virtual worlds.

USD provides a high level of abstraction to describe scenes in DRIVE Sim. For instance, USD makes it easy to define the state of the vehicle (position, velocity, acceleration) and trigger changes based on its proximity to other entities such as a landmark in the scene.
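
To illustrate, here is a minimal sketch using USD’s Python bindings to describe a vehicle prim and its state; the prim path and attribute names are invented for illustration and are not the actual DRIVE Sim schema.

```python
# A minimal sketch of describing vehicle state in USD, assuming the pxr
# Python bindings. The paths and attribute names here are illustrative.
from pxr import Gf, Sdf, Usd, UsdGeom

stage = Usd.Stage.CreateNew("scenario.usda")

# Define the ego vehicle as a transformable prim.
vehicle = UsdGeom.Xform.Define(stage, "/World/EgoVehicle")
prim = vehicle.GetPrim()

# Position comes from the standard transform ops...
vehicle.AddTranslateOp().Set(Gf.Vec3d(100.0, 0.0, 0.0))

# ...while velocity and acceleration can live in custom attributes that a
# simulator reads to drive scenario logic, such as a proximity trigger.
prim.CreateAttribute("state:velocity", Sdf.ValueTypeNames.Vector3f).Set(
    Gf.Vec3f(12.0, 0.0, 0.0))  # meters per second
prim.CreateAttribute("state:acceleration", Sdf.ValueTypeNames.Vector3f).Set(
    Gf.Vec3f(0.5, 0.0, 0.0))

stage.GetRootLayer().Save()
```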

Also, the framework comes with a rich toolset and is supported by most major content creation tools.

Scalability and Repeatability

Most applications for generating virtual environments, such as PC games, are targeted at systems with one or two GPUs. While the timing and latency of such architectures may be good enough for consumer games, designing a repeatable simulator for autonomous vehicles requires a much higher level of precision and performance.

Omniverse enables DRIVE Sim to simultaneously simulate multiple cameras, radars and lidars in real time, supporting sensor configurations from Level 2 assisted driving to Level 4 and Level 5 fully autonomous driving.

Together, these new capabilities brought to life by Omniverse deliver a simulation experience that is virtually indistinguishable from reality.

Watch NVIDIA CEO Jensen Huang recap all the news from GTC: 

It’s not too late to get access to hundreds of live and on-demand talks at GTC. Register now through Oct. 9 using promo code CMB4KN to get 20 percent off.

Do the Robot: Free Online Training, AI Certifications Make It Easy to Learn and Teach Robotics

On land, underwater, in the air — even underground and on other planets — new autonomous machines and the applications that run on them are emerging daily.

Robots are working on construction sites to improve safety, they’re on factory floors to enhance logistics and they’re roaming farm rows to pick weeds and harvest crops.

As AI-powered autonomous machines proliferate, a new generation of students and developers will play a critical role in teaching and training these robots how to behave in the real world.

To help people get started, we’ve announced the availability of free online training and AI-certification programs. Aptly timed with World Teachers’ Day, these resources open up the immense potential of AI and robotics teaching and learning.

And there’s no better way to get hands-on learning and experience than with the new Jetson Nano 2GB Developer Kit, priced at just $59. NVIDIA CEO Jensen Huang announced this ultimate starter AI computer during the GPU Technology Conference on Monday. Incredibly affordable, the Jetson Nano 2GB helps make AI accessible to everyone.

New AI Certification Programs for Teachers and Students

NVIDIA offers two AI certification tracks to educators, students and engineers looking to reskill. Both are part of the NVIDIA Deep Learning Institute:

  • NVIDIA Jetson AI Specialist: This certification can be completed by anyone and recognizes competency in Jetson and AI using a hands-on, project-based assessment. This track is meant for engineers looking to reskill and for advanced learners building on their knowledge.
  • NVIDIA Jetson AI Ambassador: This certification is for educators and leaders at robotics institutions. It recognizes competency in teaching AI on Jetson using a project-based assessment and an interview with the NVIDIA team. This track is ideal for educators or instructors to get fully prepared to teach AI to students.

Additionally, the Duckietown Foundation is offering a free edX course on AI and robotics based on the new NVIDIA Jetson Nano 2GB Developer Kit.

“NVIDIA’s Jetson AI certification materials thoroughly cover the fundamentals with the added advantage of hands-on project-based learning,” said Jack Silberman, Ph.D., lecturer at UC San Diego, Jacobs School of Engineering, Contextual Robotics Institute. “I believe these benefits provide a great foundation for students to prepare for university robotics courses and compete in robotics competitions.”

“We know how important it is to provide all students with opportunities to impact the future of technology,” added Christine Nguyen, STEM curriculum director at the Boys & Girls Club of Western Pennsylvania. “We’re excited to utilize the NVIDIA Jetson AI Specialist certification materials with our students as they work towards being leaders in the fields of AI and robotics.”

“Acquiring new technical skills with a hands-on approach to AI learning becomes critical as AIoT drives the demand for interconnected devices and increasingly complex industrial applications,” said Matthew Tarascio, vice president of Artificial Intelligence at Lockheed Martin. “We’ve used the NVIDIA Jetson platform as part of our ongoing efforts to train and prepare our global workforce for the AI revolution.”

By making it easy to “teach the teachers” with hands-on AI learning and experimentation, Jetson is enabling a new generation to build a smarter, safer AI-enabled future.

Watch NVIDIA CEO Jensen Huang recap autonomous machines news at GTC:

It’s not too late to get access to hundreds of live and on-demand talks at GTC. Register now through Oct. 9 using promo code CMB4KN to get 20 percent off.

Hands-On AI: Duckietown Foundation Offering Free edX Robotics Course Powered by NVIDIA Jetson Nano 2GB

For many, the portal into AI is robotics. And one of the best ways to get good at robotics is to get hands on.

Roll up your sleeves, because this week at NVIDIA’s GPU Technology Conference, the Duckietown Foundation announced that it’s offering a free edX course on AI and robotics using the Duckiebot hardware platform powered by the new NVIDIA Jetson Nano 2GB Developer Kit.

The Duckietown project, which started as an MIT class in 2016, has evolved into an open-source platform for robotics and AI education, research and outreach. The project is coordinated by the Duckietown Foundation, whose mission is to reach and teach a wide audience of students about robotics and AI.

It does this through hands-on learning activities in which students put AI and robotics components together to address modern autonomy challenges for self-driving cars. Solutions are implemented in the Duckietown robotics ecosystem, where the interplay among theory, algorithms and deployment on real robots is witnessed firsthand in a model urban environment.

The Jetson Nano 2GB Developer Kit has the performance and capability to run a diverse set of AI models and frameworks. This makes it the ultimate AI starter computer for learning and creating AI applications.

The new devkit is the latest offering in the NVIDIA Jetson AI at the Edge platform, which ranges from entry-level AI devices to advanced platforms for fully autonomous machines. To help people get started with robotics, NVIDIA also announced the availability of free online training and AI-certification programs.

“The Duckietown educational platform provides a hands-on, scaled down, accessible version of real world autonomous systems,” said Emilio Frazzoli, professor of Dynamic Systems and Control at ETH Zurich and advisor for the Duckietown Foundation. “Integrating NVIDIA’s Jetson Nano power in Duckietown enables unprecedented, affordable access to state-of-the-art compute solutions for learning autonomy.”

Another highlight of the course is the Duckietown Autolab remote infrastructure, which enables remote evaluation of the robotic agents that learners develop on Duckiebot robots at home, providing feedback on assignments. This gives the course a realistic development flow, with evaluation on real hardware.

Duckiebot powered by Jetson Nano 2GB.

Enrollment is now open for the free edX course, called “Self-Driving Cars with Duckietown,” which starts in February. To find out more about the technical specifications of the new NVIDIA-powered Duckiebot, or to pre-order one, check out the Duckietown store.

The AI Driving Olympics

For more advanced students, or for people who just want to witness the fun, Duckietown has created the “AI Driving Olympics” (AI-DO) competition. It focuses on autonomous vehicles, with the objective of evaluating the state of the art in embodied AI by benchmarking novel machine learning approaches to autonomy in a set of fun challenges.

AI-DO is made up of a series of increasingly complex tasks — from simple lane-following to fleet management. For each, competitors use various resources, such as simulation, logs, code templates, baseline implementations, and standardized physical autonomous Duckiebots operating in Duckietown, a formally defined urban environment.

Submissions are evaluated in simulation on the cloud, physically in remote Duckietown Autolabs, and running on actual Duckiebots at the live finals competition.

Participants can compete remotely at any stage of the competition. They just need to send their source code packaged as a Docker image. Teams can then use the Duckietown Autolabs, facilities that allow remote experimentation in reproducible settings.

The next AI-DO race will be at NeurIPS, Dec. 6-12.

Duckietown classes and labs are offered at 80+ universities, including ETH Zürich and Université de Montréal. Curriculum materials for undergraduate and graduate courses are available open source. This includes weekly lecture plans, open source software, and a modular, do-it-yourself hardware smart city environment with autonomous driving car kits.

Watch NVIDIA CEO Jensen Huang recap all the autonomous machines news announced at GTC:

Locomation and Blackshark.ai Innovate in Real and Virtual Dimensions at GTC

The NVIDIA DRIVE ecosystem is going multidimensional.

During the NVIDIA GPU Technology Conference this week, autonomous trucking startup Locomation and simulation company Blackshark.ai announced technological developments powered by NVIDIA DRIVE.

Locomation, a Pittsburgh-based provider of autonomous trucking technology, said it would integrate NVIDIA DRIVE AGX Orin in the upcoming rollout of its platooning system on public roads in 2022.

Innovating in the virtual world, Blackshark.ai detailed its toolset to create buildings and landscape assets for simulation environments on NVIDIA DRIVE Sim.

Together, these announcements mark milestones in the path toward safer, more efficient autonomous transportation.

Shooting for the Platoon

Locomation recently announced its first commercial system, Autonomous Relay Convoy, which allows one driver to pilot a lead truck while a fully autonomous follower truck operates in tandem.

The ARC system will be deployed with Wilson Logistics, which will operate more than 1,000 Locomation-equipped trucks, powered by NVIDIA DRIVE AGX Orin, starting in 2022.

NVIDIA DRIVE AGX Orin is a highly advanced software-defined platform for autonomous vehicles. The system features the new Orin system-on-a-chip, which delivers more than 200 trillion operations per second — nearly 7x the performance of NVIDIA’s previous-generation Xavier SoC.

In August, Locomation and Wilson Logistics successfully completed the first-ever on-road pilot program transporting commercial freight using ARC. Two Locomation trucks, hauling Wilson Logistics trailers and freight, were deployed on a 420-mile route along I-84 between Portland, Ore., and Nampa, Idaho. This stretch of interstate has some of the most challenging road conditions for truck driving, with curves, inclines and wind gusts.

“We’re moving rapidly toward autonomous trucking commercialization, and NVIDIA DRIVE presents a solution for providing a robust, safety-forward platform for our team to work with,” said Çetin Meriçli, CEO and cofounder of Locomation.

Constructing a New Dimension

While Locomation is deploying autonomous vehicles in the real world, Blackshark.ai is making it easier to create building and landscape assets used to enhance the virtual world on a global scale.

The startup has developed a digital twin platform that uses AI and cloud computing to automatically transform satellite data, aerial images or map and sensor data into building, landscape and infrastructure assets that contribute to a semantic photorealistic 3D environment.

During the opening GTC keynote, NVIDIA founder and CEO Jensen Huang showcased the technology on NVIDIA DRIVE Sim. DRIVE Sim uses high-fidelity simulation to create a safe, scalable and cost-effective way to bring self-driving vehicles to our roads.

It taps into the computing horsepower of NVIDIA RTX GPUs to deliver a powerful, scalable, cloud-based computing platform capable of generating billions of qualified miles for autonomous vehicle testing.

In the demo video, Blackshark’s AI automatically generated the trees and buildings used to reconstruct the city of San Jose in simulation for an immersive, authentic environment.

These latest announcements from Locomation and Blackshark.ai demonstrate the breadth of the DRIVE ecosystem, spanning the real and virtual worlds to push autonomous innovation further.

Watch NVIDIA CEO Jensen Huang recap all the news from GTC. It’s not too late to get access to hundreds of live and on-demand talks — register now through Oct. 9 using promo code CMB4KN to get 20 percent off.

Swede-sational: Linköping University to Build Country’s Fastest AI Supercomputer

The land famed for its midsummer festivities and everyone’s favorite flatpack furniture store is about to add another jewel to its crown.

Linköping University, home to 36,000 staff and students, has announced its plans to build Sweden’s fastest AI supercomputer, based on the NVIDIA DGX SuperPOD computing infrastructure.

Carrying the name of renowned Swedish scientist Jacob Berzelius — considered to be one of the founders of modern chemistry — the new BerzeLiUs supercomputer will deliver 300 petaflops of AI performance to power state-of-the-art AI research and deep learning models.

The effort is spearheaded by a 300 million Swedish kronor ($33.6 million) donation from the Knut and Alice Wallenberg Foundation to accelerate Swedish AI research across academia and industry. The foundation heads the Wallenberg Artificial Intelligence, Autonomous Systems and Software Program (WASP) network — the country’s largest private research initiative focused on AI innovation.

“I am extremely happy and proud that Linköping University will, through the National Supercomputer Centre, be host for this infrastructure,” said Jan-Ingvar Jönsson, vice-chancellor of Linköping University. “This gives us confidence that Sweden is not simply maintaining its international position, but also strengthening it.”

A Powerful New AI Resource

Hosting world-class supercomputers is nothing new for the team at Linköping University.

The Swedish National Supercomputer Centre (NSC) already houses six traditional supercomputers on campus, with a combined total of 6 petaflops of performance. Among these is Tetralith, which has held the title of the most powerful supercomputer in the Nordics since its installation in 2018.

But with BerzeLiUs the team is making a huge leap.

“BerzeLiUs will be more than twice as fast as Tetralith,” confirmed Niclas Andersson, technical director at NSC. “This is a super-fast AI resource — the fastest computing cluster we have ever installed.”

The powerful new AI resource will boost collaboration between academia and leading Swedish industrial companies, primarily through initiatives financed by the Knut and Alice Wallenberg Foundation, such as WASP, as well as other life science and quantum technology initiatives.

Full Speed Ahead

Building a leading AI supercomputer usually can take years of planning and development. But by building BerzeLiUs with NVIDIA DGX SuperPOD technology, Linköping will be able to deploy the fully integrated system and start running complex AI models as the new year begins.

The system will be built and installed by Atos. Initially, the supercomputer will consist of 60 NVIDIA DGX A100 systems interconnected across an NVIDIA Mellanox InfiniBand fabric and 1.5 petabytes of high-performance storage from DDN. BerzeLiUs will also feature the Atos Codex AI Suite, enabling researchers to speed up processing times on their complex data.

“This new supercomputer will supercharge AI research in Sweden,” said Jaap Zuiderveld, vice president for EMEA at NVIDIA. “It will position Sweden as a leader in academic research, and it will give Swedish businesses a competitive edge in telecommunications, design, drug development, manufacturing and more industries.”

Join Linköping University at GTC

Dive deeper into the cutting-edge research performed at Linköping University. Join Anders Eklund, associate professor at Linköping University, and Joel Hedlund, data director at AIDA, to explore how AI is powering innovation in radiology and pathology imaging.

It’s not too late to get access to hundreds of live and on-demand talks at GTC. Register for GTC now through Oct. 9 using promo code CMB4KN to get 20 percent off. Academics, students, government, and nonprofit attendees join free when registering with their organization’s email address.

NVIDIA CEO Outlines Vision for ‘Age of AI’ in News-Packed GTC Kitchen Keynote

Outlining a sweeping vision for the “age of AI,” NVIDIA CEO Jensen Huang on Monday kicked off this week’s GPU Technology Conference.

Huang made major announcements in data centers, edge AI, collaboration tools and healthcare in a talk simultaneously released in nine episodes, each under 10 minutes.

“AI requires a whole reinvention of computing – full-stack rethinking – from chips, to systems, algorithms, tools, the ecosystem,” Huang said, standing in front of the stove of his Silicon Valley home.

Behind a series of announcements touching on everything from healthcare to robotics to videoconferencing, Huang’s underlying story was simple: AI is changing everything, which has put NVIDIA at the intersection of changes that touch every facet of modern life.

More and more of those changes can be seen first in Huang’s kitchen, which, with its playful bouquet of colorful spatulas, has served as the increasingly familiar backdrop for announcements throughout the COVID-19 pandemic.

“NVIDIA is a full stack computing company – we love working on extremely hard computing problems that have great impact on the world – this is right in our wheelhouse,” Huang said. “We are all-in, to advance and democratize this new form of computing – for the age of AI.”

This week’s GTC is one of the biggest yet. It features more than 1,000 sessions—400 more than the last GTC—in 40 topic areas. And it’s the first to run across the world’s time zones, with sessions in English, Chinese, Korean, Japanese, and Hebrew.

Accelerated Data Center 

Modern data centers, Huang explained, are software-defined, making them more flexible and adaptable.

That creates an enormous load. Running a data center’s infrastructure can consume 20-30 percent of its CPU cores. And as east-west traffic, or traffic within a data center, and microservices increase, this load will increase dramatically.

“A new kind of processor is needed,” Huang explained: “We call it the data processing unit.”

The DPU consists of accelerators for networking, storage, security and programmable Arm CPUs to offload the hypervisor, Huang said.

The new NVIDIA BlueField-2 DPU is a programmable processor with powerful Arm cores and acceleration engines for at-line-speed processing of networking, storage and security workloads. It’s the latest fruit of NVIDIA’s acquisition of high-speed interconnect provider Mellanox Technologies, which closed in April.

Data Center — DOCA — A Programmable Data Center Infrastructure Processor

NVIDIA also announced DOCA, its programmable data-center-infrastructure-on-a-chip architecture.

“DOCA SDKs let developers write infrastructure apps for software-defined networking, software-defined storage, cybersecurity, telemetry and in-network computing applications yet to be invented,” Huang said.

Huang also touched on a partnership with VMware, announced last week, to port VMware onto BlueField. VMware “runs the world’s enterprises — they are the OS platform in 70 percent of the world’s companies,” Huang explained.

Data Center — DPU Roadmap in ‘Full Throttle’

Further out, Huang said NVIDIA’s DPU roadmap shows advancements coming fast.

BlueField-2 is sampling now, BlueField-3 is finishing and BlueField-4 is in high gear, Huang reported.

“We are going to bring a ton of technology to networking,” Huang said. “In just a couple of years, we’ll span nearly 1,000 times in compute throughput” on the DPU.

BlueField-4, arriving in 2023, will add support for the CUDA parallel programming platform and NVIDIA AI — “turbocharging the in-network computing vision.”

You can get those capabilities now, Huang announced, with the new BlueField-2X. It adds an NVIDIA Ampere GPU to BlueField-2 for in-networking computing with CUDA and NVIDIA AI.

“BlueField-2X is like having a BlueField-4, today,” Huang said.

Data Center — GPU Inference Momentum

Consumer internet companies are also turning to NVIDIA technology to deliver AI services.

Inference — which puts fully trained AI models to work — is key to a new generation of AI-powered consumer services.

In aggregate, NVIDIA GPU inference compute in the cloud already exceeds all cloud CPUs, Huang said.

Huang announced that Microsoft is adopting NVIDIA AI on Azure to power smart experiences on Microsoft Office, including smart grammar correction and text prediction.

Microsoft Office joins Square, Twitter, eBay, GE Healthcare and Zoox, among other companies, in a broad array of industries using NVIDIA GPUs for inference.

Data Center — Cloudera and VMware 

The ability to put vast quantities of data to work, fast, is key to modern AI and data science.

NVIDIA RAPIDS is the fastest extract, transform and load (ETL) engine on the planet, and it supports multi-GPU and multi-node operation.

NVIDIA modeled its API after hugely popular data science frameworks — pandas, XGBoost and scikit-learn — so RAPIDS is easy to pick up.
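
For example, a pandas-style aggregation moves to the GPU with little more than a changed import. This is a minimal sketch, assuming a CUDA-capable GPU and the cudf package; the file and column names are invented.

```python
# A minimal RAPIDS sketch: the same calls a pandas user would write,
# executed on the GPU by cuDF. File and column names are illustrative.
import cudf

df = cudf.read_csv("transactions.csv")
df["amount_usd"] = df["amount_cents"] / 100.0

# Group, aggregate and sort entirely on the GPU.
summary = (df.groupby("customer_id")["amount_usd"]
             .sum()
             .sort_values(ascending=False))

# Hand the result back to pandas for downstream CPU tools if needed.
print(summary.head(10).to_pandas())
```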

On the industry-standard data processing benchmark, running its 30 complex database queries on a 10TB dataset, a 16-node NVIDIA DGX cluster ran 20x faster than the fastest CPU server.

Yet it’s one-seventh the cost and uses one-third the power.

Huang announced that Cloudera, whose hybrid-cloud data platform lets customers manage, secure and analyze data and learn predictive models from it, will accelerate the Cloudera Data Platform with NVIDIA RAPIDS, NVIDIA AI and NVIDIA-accelerated Spark.

NVIDIA and VMware also announced a second partnership, Huang said.

The companies will create a data center platform that supports GPU acceleration for all three major computing domains today: virtualized, distributed scale-out and composable microservices.

“Enterprises running VMware will be able to enjoy NVIDIA GPU and AI computing in any computing mode,” Huang said.

(Cutting) Edge AI 

Someday, Huang said, trillions of AI devices and machines will populate the Earth – in homes, office buildings, warehouses, stores, farms, factories, hospitals, airports.

The NVIDIA EGX AI platform makes it easy for the world’s enterprises to stand up a state-of-the-art edge-AI server quickly, Huang said. It can control factories of robots, perform automatic checkout at retail or help nurses monitor patients, Huang explained.

Huang announced the EGX platform is expanding to combine the NVIDIA Ampere GPU and BlueField-2 DPU on a single PCIe card. The updates give enterprises a common platform to build secure, accelerated data centers.

Huang also announced an early access program for a new service called NVIDIA Fleet Command. This new application makes it easy to deploy and manage updates across IoT devices, combining the security and real-time processing capabilities of edge computing with the remote management and ease of software-as-a-service.

Among the first companies provided early access to Fleet Command is KION Group, a leader in global supply chain solutions, which is using the NVIDIA EGX AI platform to develop AI applications for its intelligent warehouse systems.

Additionally, Northwestern Memorial Hospital, the No. 1 hospital in Illinois and one of the top 10 in the nation, is working with Whiteboard Coordinator to use Fleet Command for its IoT sensor platform.

“This is the iPhone moment for the world’s industries — NVIDIA EGX will make it easy to create, deploy and operate industrial AI services,” Huang said.

Edge AI — Democratizing Robotics

Soon, Huang added, everything that moves will be autonomous. AI software is the big breakthrough that will make robots smarter and more adaptable. But it’s the NVIDIA Jetson AI computer that will democratize robotics.

Jetson is an Arm-based SoC designed from the ground up for robotics, thanks to its sensor processors, CUDA GPU and Tensor Cores and, most importantly, the richness of AI software that runs on it, Huang explained.

The latest addition to the Jetson family, the Jetson Nano 2GB, will be priced at $59, Huang announced. That’s roughly 40 percent less than the $99 Jetson Nano Developer Kit announced last year.

“NVIDIA Jetson is mighty, yet tiny, energy-efficient and affordable,” Huang said.

Collaboration Tools

The shared, online world of the “metaverse” imagined in Neal Stephenson’s 1992 cyberpunk classic, “Snow Crash,” is already becoming real in shared virtual worlds like Minecraft and Fortnite, Huang said.

First introduced in March 2019, NVIDIA Omniverse — a platform for simultaneous, real-time simulation and collaboration across a broad array of existing industry tools — is now in open beta.

“Omniverse allows designers, artists, creators and even AIs using different tools, in different worlds, to connect in a common world—to collaborate, to create a world together,” Huang said.

Another tool NVIDIA pioneered, NVIDIA Jarvis conversational AI, is also now in open beta, Huang announced. Using the new SpeedSquad benchmark, Huang showed it’s twice as responsive and more natural-sounding when running on NVIDIA GPUs.

It also runs for a third of the cost, Huang said.

“What did I tell you?” Huang said, referring to a catch phrase he’s used in keynotes over the years. “The more you buy, the more you save.”

Collaboration Tools — Introducing NVIDIA Maxine

Video calls have moved from a curiosity to a necessity.

For work, social, school, virtual events, doctor visits — video conferencing is now the most critical application for many people. More than 30 million web meetings take place every day.

To improve this experience, Huang announced NVIDIA Maxine, a cloud-native streaming video AI platform for applications like video calls.

Using AI, Maxine can reduce the bandwidth consumed by video calls by a factor of 10. “AI can do magic for video calls,” Huang said.

“With Jarvis and Maxine, we have the opportunity to revolutionize video conferencing of today and invent the virtual presence of tomorrow,” Huang said.

Healthcare 

When it comes to drug discovery amidst the global COVID-19 pandemic, lives are on the line.

Yet for years, the costs of new drug discovery for the $1.5 trillion pharmaceutical industry have risen. New drugs take over a decade to develop, cost over $2.5 billion in research and development — a figure that doubles every nine years — and 90 percent of efforts fail.

New tools are needed. “COVID-19 hits home this urgency,” Huang said.

Using breakthroughs in computer science, we can begin to use simulation and in-silico methods to understand the biological machinery of the proteins that affect disease and search for new drug candidates, Huang explained.

To accelerate this, Huang announced NVIDIA Clara Discovery — a state-of-the-art suite of tools for scientists to discover life-saving drugs.

“Where there are popular industry tools, our computer scientists accelerate them,” Huang said. “Where no tools exist, we develop them — like NVIDIA Parabricks, Clara Imaging, BioMegatron, BioBERT, NVIDIA RAPIDS.”

Huang also outlined an effort to build the U.K.’s fastest supercomputer, Cambridge-1, bringing state-of-the-art computing infrastructure to “an epicenter of healthcare research.”

Cambridge-1 will boast 400 petaflops of AI performance, making it among the world’s top 30 fastest supercomputers. It will host NVIDIA’s U.K. AI and healthcare collaborations with academia, industry and startups.

NVIDIA’s first partners are AstraZeneca, GSK, King’s College London, the Guy’s and St Thomas’ NHS Foundation Trust and startup Oxford Nanopore.

NVIDIA also announced a partnership with GSK to build the world’s first AI drug discovery lab.

Arm

Huang wrapped up his keynote with an update on NVIDIA’s partnership with Arm, whose power-efficient designs run the world’s smart devices.

NVIDIA agreed to acquire the U.K. semiconductor designer last month for $40 billion.

“Arm is the most popular CPU in the world,” Huang said. “Together, we will offer NVIDIA accelerated and AI computing technologies to the Arm ecosystem.”

Last year, Huang said, NVIDIA announced it would port CUDA and its scientific computing stack to Arm. Today, Huang announced a major initiative to advance the Arm platform, with investments across three dimensions:

  • First, NVIDIA will complement Arm partners with GPU, networking, storage and security technologies to create complete accelerated platforms.
  • Second, NVIDIA is working with Arm partners to create platforms for HPC, cloud, edge and PC — this requires chips, systems and system software.
  • And third, NVIDIA is porting the NVIDIA AI and NVIDIA RTX engines to Arm.

“Today, these capabilities are available only on x86,” Huang said. “With this initiative, Arm platforms will also be leading-edge at accelerated and AI computing.”

NVIDIA AI on Microsoft Azure Machine Learning to Power Grammar Suggestions in Microsoft Editor for Word

It’s been said that good writing comes from editing. Fortunately for discerning readers everywhere, Microsoft is putting an AI-powered grammar editor at the fingertips of millions of people.

Like any good editor, it’s quick and knowledgeable. That’s because Microsoft Editor’s grammar refinements in Microsoft Word for the web can now tap into NVIDIA Triton Inference Server, ONNX Runtime and Microsoft Azure Machine Learning, which is part of Azure AI, to deliver this smart experience.

Speaking at the digital GPU Technology Conference, NVIDIA CEO Jensen Huang announced the news during his keynote presentation on Oct. 5.

Everyday AI in Office

Microsoft is on a mission to wow users of Office productivity apps with the magic of AI. New, time-saving experiences will include real-time grammar suggestions, question-answering within documents — think Bing search for documents beyond “exact match” — and predictive text to help complete sentences.

Such productivity-boosting experiences are only possible with deep learning and neural networks. For example, unlike services built on traditional rules-based logic, when it comes to correcting grammar, Editor in Word for the web is able to understand the context of a sentence and suggest the appropriate word choices.

And these deep learning models, which can involve hundreds of millions of parameters, must be scalable and provide real-time inference for an optimal user experience. Microsoft Editor’s AI model for grammar checking in Word for the web alone is expected to handle more than 500 billion queries a year.

Deployment at this scale could blow up deep learning budgets. Thankfully, NVIDIA Triton’s dynamic batching and concurrent model execution features, accessible through Azure Machine Learning, slashed the cost by about 70 percent and achieved a throughput of 450 queries per second on a single NVIDIA V100 Tensor Core GPU, with less than 200-millisecond response time. Azure Machine Learning provided the required scale and capabilities to manage the model lifecycle such as versioning and monitoring.

High Performance Inference with Triton on Azure Machine Learning

Machine learning models have expanded in size, and GPUs have become necessary during model training and deployment. For AI deployment in production, organizations are looking for scalable inference serving solutions, support for multiple framework backends, optimal GPU and CPU utilization and machine learning lifecycle management.

The NVIDIA Triton and ONNX Runtime stack in Azure Machine Learning deliver scalable high-performance inferencing. Azure Machine Learning customers can take advantage of Triton’s support for multiple frameworks, real time, batch and streaming inferencing, dynamic batching and concurrent execution.
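
As an illustration, here is a minimal sketch of a client querying such a deployment with the tritonclient Python package; the server address, model name, tensor names and shapes are invented and depend entirely on the deployed model’s configuration.

```python
# A minimal sketch of a Triton inference request over HTTP. The model
# name, input/output tensor names and shapes are illustrative.
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

# A dummy batch of token IDs for a transformer-style grammar model.
token_ids = np.zeros((1, 128), dtype=np.int32)

inputs = [httpclient.InferInput("input_ids", list(token_ids.shape), "INT32")]
inputs[0].set_data_from_numpy(token_ids)

# The server batches concurrent requests like this one dynamically and can
# run several model instances side by side on one GPU.
result = client.infer(model_name="grammar_model", inputs=inputs)
print(result.as_numpy("logits").shape)
```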

Writing with AI in Word

Author and poet Robert Graves was quoted as saying, “There is no good writing, only good rewriting.” In other words, write, and then edit and improve.

Editor in Word for the web lets you do both simultaneously. And while Editor is the first feature in Word to gain the speed and breadth of advances enabled by Triton and ONNX Runtime, it is likely just the start of more to come.

It’s not too late to get access to hundreds of live and on-demand talks at GTC. Register now through Oct. 9 using promo code CMB4KN to get 20 percent off.

To 3D and Beyond: Pixar’s USD Coming to an Industry Near You

It was the kind of career moment developers dream of but rarely experience. To whoops and cheers from the crowd at SIGGRAPH 2016, Dirk Van Gelder of Pixar Animation Studios launched Universal Scene Description.

USD would become the open-source glue filmmakers used to bind their favorite tools together so they could collaborate with colleagues around the world, radically simplifying the job of creating animated movies. At its birth, it had backing from three seminal partners—Autodesk, Foundry and SideFX.

Today, more than a dozen companies from Apple to Unity support USD. The standard is on the cusp of becoming the solder that fuses all sorts of virtual and physical worlds into environments where everything from skyscrapers to sports cars and smart cities will be designed and tested in simulation.

What’s more, it’s helping spawn machinima, an emerging form of digital storytelling based on game content.

How USD Found an Audience

The 2016 debut “was pretty exciting” for Van Gelder, who spent more than 20 years developing Pixar’s tools.

“We had talked to people about USD, but we weren’t sure they’d embrace it,” he said. “I did a live demo on a laptop of a scene from Finding Dory so they could see USD’s scalability and performance and what we at Pixar could do with it, and they really got the message.”

One of those in the crowd was Rev Lebaredian, vice president of simulation technology at NVIDIA.

“Dirk’s presentation of USD live and in real time inspired us. It triggered a series of ideas and events that led to what is NVIDIA Omniverse today, with USD as its soul. So, it was fate that Dirk would end up on the Omniverse team,” said Lebaredian of the 3D graphics platform, now in open beta, that aims to carry the USD vision forward.

Developers Layer Effects on 3D Graphics

Adobe’s developers were among many others who welcomed USD and now support it in their products.

“USD has a whole world of features that are incredibly powerful,” said Davide Pesare, who worked on USD at Pixar and is now a senior R&D manager at Adobe.

“For example, with USD layering, artists can work in the same scene without stepping on each other’s toes. Each artist has his or her own layer, so you can let the modeler work while someone else is building the shading,” he said.

“Today USD has spread beyond the film industry where it is pervasive in animation and special effects. Game developers are looking at it, Apple’s products can read it, we have partners in architecture using it and the number of products compatible with USD is only going to grow,” Pesare said.

Thinking on a grand scale: NVIDIA and partner Esri, a specialist in mapping software, are both building virtual worlds using USD.

Building a Virtual 3D Home for Architects

Although it got its start in the movies, USD can play many roles.

Millions of architects, engineers and designers need a way to quickly review progress on construction projects with owners and real-estate developers. Each stakeholder wants different programs often running on different computers, tablets or even handsets. It’s a script for an IT horror film where USD can write a happy ending.

Companies such as Autodesk, Bentley Systems, McNeel & Associates and Trimble Inc. are already exploring what USD can do for this community. NVIDIA used Omniverse to create a video showing some of the possibilities, such as previewing how the sun will play on the glassy interior of a skyscraper through the day.

Product Design Comes Alive with USD

It’s a similar story with a change of scene in the manufacturing industry. Here, companies have a cast of thousands of complex products they want to quickly design and test, ranging from voice-controlled gadgets to autonomous trucks.

The process requires iterations using programs in the hands of many kinds of specialists who demand photorealistic 3D models. Beyond de rigueur design reviews, they dream of the possibilities like putting visualizations in the hands of online customers.

Showing the shape of things to come, the Omniverse team produced a video for the debut of the NVIDIA DGX A100 system with exploded views of how its 30,000 components snap into a million drill holes. More recently, it generated a video of NVIDIA’s GeForce RTX 30 Series graphics card (below), complete with a virtual tour of its new cooling subsystem, thanks to USD in Omniverse.

“These days my team spends a lot of time working on real-time physics and other extensions of USD for autonomous vehicles and robotics for the NVIDIA Isaac and DRIVE platforms,” Van Gelder said.

To show what’s possible today, engineers used USD to import into Omniverse an accurately modeled luxury car and details of a 17-mile stretch of highway around NVIDIA’s Silicon Valley headquarters. The simulation, to be shown this week at GTC, demonstrates the potential for environments detailed enough to test both vehicles and their automated driving capabilities.

Another team imported Kaya, a robotic car for consumers, so users could program the digital model and test its behavior in an Omniverse simulation before building or buying a physical robot.

The simulation was accurate despite the fact “the wheels are insanely complex because they can drive forward, backward or sideways,” said Mike Skolones, manager of the team behind NVIDIA Isaac Sim.

Lights! Camera! USD!

In gaming, Epic’s Unreal Engine supports USD, and Unity and Blender are working to support it as well. Their work is accelerating the rise of machinima, a movie-like spinoff from gaming demonstrated in a video for NVIDIA Omniverse Machinima.

Meanwhile, back in Hollywood, studios are well along in adopting USD.

Pixar produced Finding Dory using USD. DreamWorks Animation described its process of adopting USD to create the 2019 feature How to Train Your Dragon: The Hidden World. Disney Animation Studios blended USD into its pipeline for animated features, too.

Steering USD into the Omniverse

NVIDIA and partners hope to take USD into all these fields and more with Omniverse, an environment one team member describes as “like Google Docs for 3D graphics.”

Omniverse plugs the power of NVIDIA RTX real-time ray-tracing graphics into USD’s collaborative, layered editing. The recent “Marbles at Night” video (below) showcased that blend, created by a dozen artists scattered across the U.S., Australia, Poland, Russia and the U.K.

That’s getting developers like Pesare of Adobe excited.

“All industries are going to want to author everything with real time texturing, modeling, shading and animation,” said Pesare.

That will pave the way for a revolution in people consuming real-time media with AR and VR glasses linked over 5G networks for immersive, interactive experiences anywhere, he added.

He’s one of more than 400 developers who’ve gone hands-on with Omniverse so far. Others come from companies like Ericsson, Foster + Partners and Industrial Light & Magic.

USD Gives Lunar Explorers a Hand

The Frontier Development Lab (FDL), a NASA partner, recently approached NVIDIA for help simulating light on the surface of the moon.

Using data from a lunar satellite, the Omniverse team generated images FDL used to create a video for a public talk, explaining its search for water ice on the moon and a landing site for a lunar rover.

Back on Earth, challenges ahead include using USD’s Hydra renderer to deliver content at 30 frames per second that might blend images from a dozen sources for a filmmaker, an architect or a product designer.

“It’s a Herculean effort to get this in the hands of the first customers for production work,” said Richard Kerris, general manager of NVIDIA’s media and entertainment group and former chief technologist at Lucasfilm. “We’re effectively building an operating system for creatives across multiple markets, so support for USD is incredibly important,” he said.

Kerris called on anyone with an RTX-enabled system to get their hands on the open beta of Omniverse and drive the promise of USD forward.

“We can’t wait to see what you will build,” he said.

It’s not too late to get access to hundreds of live and on-demand talks at GTC. Register now through Oct. 9 using promo code CMB4KN to get 20 percent off.

NVIDIA Jarvis and Merlin Announced in Open Beta, Enabling Conversational AI and Democratizing Recommenders

We’ve all been there: on a road trip and hungry. Wouldn’t it be amazing to ask your car’s driving assistant and get recommendations to nearby food, personalized to your taste?

Now, it’s possible for any business to build and deploy such experiences and many more with NVIDIA GPU systems and software libraries. That’s because NVIDIA Jarvis for conversational AI services and NVIDIA Merlin for recommender systems have entered open beta. Speaking today at the GPU Technology Conference, NVIDIA CEO Jensen Huang announced the news.

While AI for voice services and recommender systems has never been more needed in our digital worlds, development tools have lagged. And the need for better voice AI services is rising sharply.

More people are working from home and remotely learning, shopping, visiting doctors and more, putting strains on services and revealing shortcomings in user experiences. Some call centers report a 34 percent increase in hold times and a 68 percent increase in call escalations, according to a report from Harvard Business Review.

Meanwhile, current recommenders personalize the internet but often come up short. Retail recommenders suggest items a shopper has already purchased or chase people with annoying retargeted promos. Media and entertainment recommendations often serve up more of the same, with little diversity. These systems tend to be fairly crude because they rely only on past behavior or simple similarity.

NVIDIA Jarvis and NVIDIA Merlin allow companies to explore larger deep learning models, and develop more nuanced and intelligent recommendation systems. Conversational AI services built on Jarvis and recommender systems built on Merlin offer the fast track forward to better services from businesses.

Early Access Jarvis Adopter Advances

Some companies in the NVIDIA Developer program have already begun work on conversational AI services with NVIDIA Jarvis. Early adopters include Voca, an AI agent for call center support; Kensho, for automatic voice transcriptions for finance and business; and Square, offering a virtual assistant for scheduling appointments.

London-based Intelligent Voice, which offers high-performance speech recognition services, is always looking for more, said its CTO, Nigel Cannings.

“Jarvis takes a multimodal approach that fuses key elements of automatic speech recognition with entity and intent matching to address new use cases where high-throughput and low latency are required,” he said. “The Jarvis API is very easy to use, integrate and customize to our customers’ workflows for optimized performance.”

Jarvis has allowed Intelligent Voice to pivot quickly during the COVID crisis and bring a completely new product, Myna, to market in record time, enabling accurate and useful meeting recall.

Better Conversational AI Needed

In the U.S., call center assistants handle 200 million calls per day, and telemedicine services enable 2.4 million daily physician visits, demanding transcriptions with high accuracy.

Traditional voice systems leave room for improvement. With processing constrained by CPUs, their lower-quality models result in laggy, robotic-sounding voice products. Jarvis includes Megatron-BERT models, the largest of their kind today, to offer the highest accuracy and lowest latency.

Deploying real-time conversational AI for natural interactions requires model computations in under 300 milliseconds — versus 600 milliseconds on CPU-powered models.

Jarvis provides more natural interactions through sensor fusion — the integration of video cameras and microphones. Its ability to handle multiple data streams in real time enables the delivery of improved services.

Complex Model Pipelines, Easier Solutions

Model pipelines in conversational AI can be complex and require coordination across multiple services.

Running at scale requires microservices for automatic speech recognition, natural language understanding, text-to-speech and domain-specific apps. These super-specialized tasks, sped up by parallel processing on GPUs, gain a 3x cost advantage over a competing CPU-only server.

NVIDIA Jarvis is a comprehensive framework, offering software libraries for building conversational AI applications and including GPU-optimized services for ASR, NLU, TTS and computer vision that use the latest deep learning models.

Developers can meld these multiple skills within their applications, and quickly help our hungry vacationer find just the right place.
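
To make the flow concrete, here is a deliberately simplified, hypothetical sketch of how such skills chain together in an application. The three functions are stand-ins for GPU-accelerated services, not the actual Jarvis API.

```python
# A hypothetical sketch of a conversational pipeline: ASR -> NLU -> TTS.
# Each function stands in for a call to a GPU-accelerated microservice.
import numpy as np

def transcribe(audio: np.ndarray) -> str:
    """Stand-in for an automatic speech recognition service."""
    return "find a restaurant near me"

def interpret(text: str) -> dict:
    """Stand-in for an NLU service extracting intent and entities."""
    return {"intent": "find_poi", "entities": {"category": "restaurant"}}

def synthesize(text: str) -> np.ndarray:
    """Stand-in for a text-to-speech service returning audio samples."""
    return np.zeros(16000, dtype=np.float32)

# One turn of the hungry vacationer's exchange with a driving assistant.
audio_in = np.zeros(16000, dtype=np.float32)  # one second of microphone audio
request = interpret(transcribe(audio_in))
reply = f"Here are some {request['entities']['category']}s nearby."
audio_out = synthesize(reply)
```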

Merlin Creates a More Relevant Internet

Recommender systems are the engine of the personalized internet and they’re everywhere online. They suggest food you might like, offer items related to your purchases and can capture your interest in the moment with retargeted advertising for product offers as you bounce from site to site.

But when recommenders don’t do their best, people may walk away empty-handed and businesses leave money on the table.

On some of the world’s largest online commerce sites, recommender systems account for as much as 30 percent of revenue. Just a 1 percent improvement in the relevance of recommendations can translate into billions of dollars in revenue.

Recommenders at Scale on GPUs

At Tencent, recommender systems support videos, news, music and apps. Using NVIDIA Merlin, the company reduced its recommender training time from 20 hours to three.

“With the use of the Merlin HugeCTR advertising recommendation acceleration framework, our advertising business model can be trained faster and more accurately, which is expected to improve the effect of online advertising,” said Ivan Kong, AI technical leader at Tencent TEG.

Merlin Democratizes Access to Recommenders

Now everyone has access to the NVIDIA Merlin application framework, which allows businesses of all kinds to build recommenders accelerated by NVIDIA GPUs.

Merlin’s collection of libraries includes tools for building deep learning-based systems that provide better predictions than traditional methods and increase click-through rates. Each stage of the pipeline is optimized to support hundreds of terabytes of data, all accessible through easy-to-use APIs.
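
As an illustration, here is a minimal sketch of a preprocessing pipeline built with Merlin’s NVTabular library, assuming a recent NVTabular release; the Parquet file and column names are invented.

```python
# A minimal NVTabular sketch: declare GPU-accelerated feature transforms,
# fit their statistics, and transform a dataset chunk by chunk.
import nvtabular as nvt
from nvtabular import ops

cat_features = ["user_id", "item_id"] >> ops.Categorify()
cont_features = ["price", "age"] >> ops.FillMissing() >> ops.Normalize()

workflow = nvt.Workflow(cat_features + cont_features + ["clicked"])

# Datasets stream through in chunks, so they can far exceed GPU memory.
dataset = nvt.Dataset("interactions.parquet")
workflow.fit_transform(dataset).to_parquet("processed/")
```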

Merlin is used at one of the world’s largest media companies and is in testing with hundreds of companies worldwide. Social media giants in the U.S. are experimenting with its ability to share related news. Streaming media services are testing it for suggestions on next views and listens. And major retailers are looking at it for suggestions on next items to purchase.

Those who are interested can learn more about the technology advances behind Merlin since its initial launch, including NVTabular, multi-GPU support, HugeCTR and NVIDIA Triton Inference Server.

Businesses can sign up for the NVIDIA Jarvis beta for access to the latest developments in conversational AI, and get started with the NVIDIA Merlin beta for the fastest way to upload terabytes of training data and deploy recommenders at scale.

It’s not too late to get access to hundreds of live and on-demand talks at GTC. Register now through Oct. 9 using promo code CMB4KN to get 20 percent off.

AI Can See Clearly Now: GANs Take the Jitters Out of Video Calls

Ming-Yu Liu and Arun Mallya were on a video call when one of them started to break up, then freeze.

It’s an irksome reality of life in the pandemic that most of us have shared. But unlike most of us, Liu and Mallya could do something about it.

They are AI researchers at NVIDIA and specialists in computer vision. Working with colleague Ting-Chun Wang, they realized they could use a neural network in place of the software called a video codec typically used to compress and decompress video for transmission over the net.

Their work enables a video call with one-tenth the network bandwidth users typically need. It promises to reduce bandwidth consumption by orders of magnitude in the future.

“We want to provide a better experience for video communications with AI so even people who only have access to extremely low bandwidth can still upgrade from voice to video calls,” said Mallya.

Better Connections Thanks to GANs

The technique works even when callers are wearing a hat, glasses, headphones or a mask. And just for fun, they spiced up their demo with a couple of bells and whistles so users can change their hairstyles or clothes digitally or create an avatar.

A more serious feature in the works (shown at top) uses the neural network to align the position of users’ faces for a more natural experience. Callers watch their video feeds, but they appear to be looking directly at their cameras, enhancing the feeling of a face-to-face connection.

“With computer vision techniques, we can locate a person’s head over a wide range of angles, and we think this will help people have more natural conversations,” said Wang.

Say hello to the latest way AI is making virtual life more real.

How AI-Assisted Video Calls Work

The mechanism behind AI-assisted video calls is simple.

A sender first transmits a reference image of the caller, just like today’s systems that typically use a compressed video stream. Then, rather than sending a fat stream of pixel-packed images, it sends data on the locations of a few key points around the user’s eyes, nose and mouth.

A generative adversarial network on the receiver’s side uses the initial image and the facial key points to reconstruct subsequent images on a local GPU. As a result, much less data is sent over the network.
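
In schematic form, the exchange looks something like the sketch below. The two functions are stand-ins for the trained keypoint-detection and generator networks, and the payload sizes are illustrative rather than measured.

```python
# A schematic sketch of keypoint-based video compression. The two
# functions below are stand-ins for trained neural networks.
import numpy as np

def detect_keypoints(frame: np.ndarray) -> np.ndarray:
    """Stand-in: locate a few facial key points (x, y) in a frame."""
    return np.zeros((10, 2), dtype=np.float32)

def generate_frame(reference: np.ndarray, keypoints: np.ndarray) -> np.ndarray:
    """Stand-in for the receiver-side GAN that re-renders the caller's
    face from the reference image, posed to match the key points."""
    return reference.copy()

# Sender: transmit one reference frame up front, then only key points.
reference = np.zeros((720, 1280, 3), dtype=np.uint8)
frame = np.zeros((720, 1280, 3), dtype=np.uint8)  # a new camera frame
payload = detect_keypoints(frame)
print(f"per-frame payload: {payload.nbytes} bytes "
      f"vs {frame.nbytes} bytes of raw pixels")

# Receiver: reconstruct the frame locally from reference + key points.
reconstructed = generate_frame(reference, payload)
```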

Liu’s work in GANs hit the spotlight last year with GauGAN, an AI tool that turns anyone’s doodles into photorealistic works of art. GauGAN has already been used to create more than a million images and is available at the AI Playground.

“The pandemic motivated us because everyone is doing video conferencing now, so we explored how we can ease the bandwidth bottlenecks so providers can serve more people at the same time,” said Liu.

GPUs Bust Bandwidth Bottlenecks

The approach is part of an industry trend of shifting network bottlenecks into computational tasks that can be more easily tackled with local or cloud resources.

“These days lots of companies want to turn bandwidth problems into compute problems because it’s often hard to add more bandwidth and easier to add more compute,” said Andrew Page, a director of advanced products in NVIDIA’s media group.

NVIDIA Maxine bundles a suite of tools for video conferencing and streaming services.

AI Instruments Tune Video Services

GAN video compression is one of several capabilities coming to NVIDIA Maxine, a cloud-AI video-streaming platform to enhance video conferencing and calls. It packs audio, video and conversational AI features in a single toolkit that supports a broad range of devices.

Announced this week at GTC, Maxine lets service providers deliver video at super resolution with real-time translation, background noise removal and context-aware closed captioning. Users can enjoy features such as face alignment, support for virtual assistants and realistic animation of avatars.

“Video conferencing is going through a renaissance,” said Page. “Through the pandemic, we’ve all lived through its warts, but video is here to stay now as a part of our lives going forward because we are visual creatures.”

Maxine harnesses the power of NVIDIA GPUs with Tensor Cores running software such as NVIDIA Jarvis, an SDK for conversational AI that delivers a suite of speech and text capabilities. Together, they deliver AI capabilities that are useful today and serve as building blocks for tomorrow’s video products and services.

Learn more about NVIDIA Research.

It’s not too late to get access to hundreds of live and on-demand talks at GTC. Register now through Oct. 9 using promo code CMB4KN to get 20 percent off.
