The Future’s So Bright: NVIDIA DRIVE Shines at Auto Shanghai

NVIDIA DRIVE-powered cars electrified the atmosphere this week at Auto Shanghai.

The global auto show is the oldest in China and has become the stage to debut the latest vehicles. And this year, automakers, suppliers and startups developing on NVIDIA DRIVE brought a new energy to the event with a wave of intelligent electric vehicles and self-driving systems.

The automotive industry is transforming into a technology industry — next-generation lineups will be completely programmable and connected to a network, supported by software engineers who will invent new software and services for the life of the car.

Just as the battery capacity of an electric vehicle provides miles of range, the computing capacity of these new vehicles will give years of new delight.

EVs for Everyday

Automakers have been introducing electric vehicle technology with one or two specialized models. Now, these lineups are becoming diversified, with an EV for every taste.

The all-new Mercedes-Benz EQB.

Joining the recently launched EQS flagship sedan and EQA SUV on the showfloor, the Mercedes-Benz EQB adds a new flavor to the all-electric EQ family. The compact SUV brings smart electromobility in a family size, with seven seats and AI features.

The latest generation MBUX AI cockpit, featured in the Mercedes-Benz EQB.

Like its EQA sibling, the EQB features the latest generation MBUX AI cockpit, powered by NVIDIA DRIVE. The high-performance system includes an augmented reality head-up display, AI voice assistant and rich interactive graphics to enable the driver to enjoy personalized, intelligent features.

EV maker Xpeng is bringing its new energy technology to the masses with the P5 sedan. It joins the P7 sports sedan in offering intelligent mobility with NVIDIA DRIVE.

The Xpeng P5.

The P5 will be the first to bring Xpeng’s Navigation Guided Pilot (NGP) capabilities to public roads. The automated driving system leverages the automaker’s full-stack XPILOT 3.5, powered by NVIDIA DRIVE AGX Xavier. The new architecture processes data from 32 sensors — including two lidars, 12 ultrasonic sensors, five millimeter-wave radars and 13 high-definition cameras — integrated into 360-degree dual-perception fusion to handle challenging and complex road conditions.

Also making its auto show debut was the NIO ET7, which was first unveiled during a company event in January. The ET7 is the first vehicle that features NIO’s Adam supercomputer, which leverages four NVIDIA DRIVE Orin processors to achieve more than 1,000 trillion operations per second (TOPS).

The NIO ET7.

The flagship vehicle leapfrogs current model capabilities, with more than 600 miles of battery range and advanced autonomous driving. With Adam, the ET7 can perform point-to-point autonomy, using 33 sensors and high-performance compute to continuously expand the domains in which it operates — from urban to highway driving to battery swap stations.

Elsewhere on the showfloor, SAIC’s R Auto exhibited the intelligent ES33. This smart, futuristic vehicle equipped with R-Tech leverages the high performance of NVIDIA DRIVE Orin to deliver automated driving features for a safer, more convenient ride.

The R Auto ES33.

SAIC- and Alibaba-backed IM Motors — which stands for intelligence in motion — also made its auto show debut with the electric L7 sedan and SUV, powered by NVIDIA DRIVE. These first two vehicles will have autonomous parking and other automated driving features, as well as a 93 kWh battery that comes standard.

The IM Motors L7.

Improving Intelligence

In addition to automaker reveals, suppliers and self-driving startups showcased their latest technology built on NVIDIA DRIVE.

The scalable ZF ProAI Supercomputer.

Global supplier ZF continued to push the bounds of autonomous driving performance with the latest iteration of its ProAI Supercomputer. With NVIDIA DRIVE Orin at its core, the scalable autonomous driving compute platform supports systems with level 2 capabilities all the way to full self-driving, with up to 1,000 TOPS of performance.

A Momenta test vehicle with MPilot automated driving system.

Autonomous driving startup Momenta demonstrated the newest capabilities of MPilot, its autopilot and valet parking system. The software, which is designed for mass production vehicles, leverages DRIVE Orin, which enhances production efficiency for a more streamlined time to market.

From advanced self-driving systems to smart, electric vehicles of all sizes, the NVIDIA DRIVE ecosystem stole the show this week at Auto Shanghai.

Hanging in the Balance: More Research Coordination, Collaboration Needed for AI to Reach Its Potential, Experts Say

As AI is increasingly established as a world-changing field, the U.S. has an opportunity not only to demonstrate global leadership, but to establish a solid economic foundation for the future of the technology.

A panel of experts convened last week at GTC to shed light on this topic, with the co-chairs of the Congressional AI Caucus, U.S. Reps. Jerry McNerney (D-CA) and Anthony Gonzalez (R-OH), leading a discussion that reflects Washington’s growing interest in the topic.

The panel also included Hodan Omaar, AI policy lead at the Center for Data Innovation; Russell Wald, director of policy at Stanford University’s Institute for Human-Centered AI; and Damon Woodard, director of AI partnerships at the University of Florida’s AI Initiative.

“AI is getting increased interest among my colleagues on both sides of the aisle, and this is going to continue for some time,” McNerney said. Given that momentum, Gonzalez said the U.S. should be on the bleeding edge of AI development “for both economic and geopolitical reasons.”

Along those lines, the first thing the pair wanted to learn was how panelists viewed the importance of legislative efforts to fund and support AI research and development.

Wald expressed enthusiasm over legislation Congress passed last year as part of the National Defense Authorization Act, which he said would have an expansive effect on the market for AI.

Wald also said he was surprised at the findings of Stanford’s “Government by Algorithm” report, which detailed the federal government’s use of AI to do things such as track suicide risk among veterans, support SEC insider trading investigations and identify Medicare fraud.

Woodard suggested that continued leadership and innovation coming from Washington is critical if AI is to deliver on its promise.

“AI can play a big role in the economy,” said Woodard. “Having this kind of input from the government is important before we can have the kind of advancements that we need.”

The Role of Universities

Woodard and UF are already doing their part. Woodard’s role at the school includes helping transform it into a so-called “AI university.” In response to a question from Gonzalez about what that transition looks like, he said it required establishing a world-class AI infrastructure, performing cutting-edge AI research and incorporating AI throughout the curriculum.

“We want to make sure every student has some exposure to AI as it relates to their field of study,” said Woodard.

He said the school has more than 200 faculty members engaged in AI-related research, and that it’s committed to hiring 100 more. And while Woodard believes the university’s efforts will lead to more qualified AI professionals and AI innovation around its campus in Gainesville, he also said that partnerships, especially those that encourage diversity, are critical to encouraging more widespread industry development.

Along those lines, UF has joined an engineering consortium and will provide 15 historically Black colleges and two Hispanic-serving schools with access to its prodigious AI resources.

Omaar said such efforts are especially important when considering how unequally the high performance computing resources needed to conduct AI research are distributed.

In response to a question from McNerney about a recent National Science Foundation report, Omaar noted the finding that the U.S. Department of Energy is only providing support to about a third of the researchers seeking access to HPC resources.

“Many universities are conducting AI research without the tools they need,” she said.

Omaar said she’d like to see the NSF focus its funding on supporting efforts in states where HPC resources are scarce but AI research activity is high.

McNerney announced that he would soon introduce legislation requiring NSF to determine what AI resources are necessary for significant research output.

Moving Toward National AI Research Resources

The myriad challenges point to the benefits that could come from a more coordinated national effort. To that end, Gonzalez asked about the potential of the National AI Research Resource Task Force Act, and the national AI research cloud that would result from it.

Wald called the legislation a “game-changing AI initiative,” noting that the limited number of universities with AI research computing resources has pushed AI research into the private sector, where the objectives are driven by shorter-term financial goals rather than long-term societal benefits.

“What we see is an imbalance in the AI research ecosystem,” Wald said. The federal legislation would establish a pathway for a national AI research hub, which “has the potential to unleash American AI innovation,” he said.

The way Omaar sees it, the nationwide collaboration that would likely result — among politicians, industry and academia — is necessary for AI to reach its potential.

“Since AI will impact us all,” she said, “it’s going to need everyone’s contribution.”

NVIDIA Partners with Boys & Girls Clubs of Western Pennsylvania on AI Pathways Program

Meet Paige Frank: Avid hoopster, Python coder and robotics enthusiast.

Still in high school, the Pittsburgh sophomore is so hooked on AI and robotics, she’s already a mentor to other curious teens.

“Honestly, I never was that interested in STEM. I wanted to be a hair stylist as a kid, which is also cool, but AI is clearly important for our future!” said Paige. “Everything changed in my freshman year, when I heard about the AI Pathways Institute.”

The initiative, known as AIPI for short, began in 2019 as a three-week pilot program offered by Boys & Girls Clubs of Western Pennsylvania (BGCWPA). Paige was in the first cohort of 40 youth to attend AIPI, which also included Tomi Oalore (left) and Makiyah Carrington (right), shown above.

Paige Frank, robotics enthusiast and BGCWPA teen mentor

Building on the success of that program, NVIDIA and BGCWPA this week have entered into a three-year partnership with the goal of expanding access to AI education to more students, particularly those from underserved and underrepresented communities.

Core to the collaboration is the creation of an AI Pathways Toolkit to make it easy for Boys & Girls Clubs nationwide and other education-focused organizations to deliver the curriculum to their participants.

“At first it was hard. But once we understood AI fundamentals from the AIPI coursework that the staff at BGCWPA taught and by using the NVIDIA Jetson online videos, it all began to come together,” said Paige. “Learning robotics hands-on with the Jetson Nano made it much easier. And it was exciting to actually see our programming in action as the Jetbot robot navigated the maze we created for the project.”

New AI Pathways to the Future

AI is spreading rapidly. But a major challenge to developing AI skills is access to hands-on learning and adequate computing resources. The AI Pathways Toolkit aims to make AI and robotics curriculum accessible for all students, even those without previous coding experience. It’s meant to prepare — and inspire — more students, like Paige, to see themselves as builders of our AI future.

Another obstacle to AI skills development can be perception. “I wasn’t that excited at first — there’s this thing that it’s too nerdy,” commented Paige, who says most of her friends felt similarly. “But once you get into coding and see how things work on the Jetbot, it’s real fun.”

She sees this transformation in action at her new internship as a mentor with the BGCWPA, where she helps kids get started with AI and coding. “Even kids who aren’t that involved at first really get into it. It’s so inspiring,” she said.

Boys & Girls on an AI Mission

Comprising 14 clubhouses and two Career Works Centers, BGCWPA offers programs, services and outreach that serve more than 12,000 youth ages 4-18 across the region. The AIPI is a part of its effort to provide young people with the tools needed to activate and advance their potential.

With support from NVIDIA, BGCWPA developed the initial three-week AIPI summer camp to introduce local high school students to AI and machine learning. Its curriculum was developed by BGCWPA Director of STEM Education Christine Nguyen and representatives from Carnegie Mellon University using NVIDIA’s educational materials, including the Jetson AI Specialist certification program.

The pilot in 2019 included two local summer camps with a focus on historically underrepresented communities encompassing six school districts. The camp attendees also created a hands-on project using the award-winning Jetson Nano developer kit and Jetbot robotics toolkit.

“We know how important it is to provide all students with opportunities to impact the future of technology,” said Nguyen. “We’re excited to utilize the NVIDIA Jetson AI certification materials with our students as they work toward being leaders in the fields of AI and robotics.”

Students earned a stipend in a work-based learning experience, and all of the participants demonstrated knowledge gained in the “Five Big Ideas in AI,” a framework created by AI4K12, a group working to develop guidelines for K-12 AI education. They also got to visit companies and see AI in action, learn human-centered design and present a capstone project that focused on a social problem they wanted to solve with AI.

“With the support of NVIDIA, we’re helping students from historically underrepresented communities build confidence and skills in the fields of AI, ML and robotics,” said Lisa Abel-Palmieri, Ph.D., president and CEO of BGCWPA. “Students are encouraged to develop personal and professional connections with a diverse group of peers who share similar passions. We also equip participants with the vital knowledge and tools to implement technology that addresses bias in AI and benefits society as a whole.”

From Summer Camp to Yearlong Program

Helping youth get started on a pathway to careers in AI and robotics has become an urgent need. Moreover, learning to develop AI applications requires real-world skills and resources that are often scarce in underserved and underrepresented communities.

NVIDIA’s partnership with BGCWPA includes a funding grant and access to technical resources, enabling the group to continue to develop a new AI Pathways Toolkit and open-source curriculum supported by staff tools and training.

The curriculum scales the summer camp model into a yearlong program that creates a pathway for students to gain AI literacy through hands-on development with the NVIDIA Jetson Nano and Jetbot kits. And the tools and training will make it easy for educators, including the Boys & Girls Clubs’ Youth Development Professionals, to deliver the curriculum to their students.

The toolkit, when completed, will be made available to the network of Boys & Girls Clubs across the U.S., with the goal of implementing the program at 80 clubs by the middle of 2024. The open-source curriculum will also be available to other organizations interested in implementing AI education programs around the world.

As for Paige’s future plans: “I want to strengthen my coding skills and become a Python pro. I also would like to start a robotics club at my high school. And I definitely want to pursue computer science in college. I have a lot of goals,” she said.

Boys & Girls Club Joins AI Educators at GTC21

Abel-Palmieri was a featured panelist at a special event at GTC21 last week. With a record 1,600+ sessions this year, GTC offers a wealth of content — from getting started with AI for those new to the field, to advanced sessions for developers deploying real-world robotics applications. Register for free to view on-demand replays.

Joining Abel-Palmieri on the panel, “Are You Smarter Than a Fifth Grader Who Knows AI?” (SE2802), were Babak Mostaghimi, assistant superintendent of Curriculum, Instructional Support and Innovation for Gwinnett County Public Schools of Suwanee, Georgia; Jim Gibbs, CEO of Meter Feeder; Justin “Mr. Fascinate” Shaifer; and Maynard Okereke (a.k.a. Hip Hop MD) from STEM Success Summit.

Free GTC sessions to help students learn the basics of AI or brush up on robotics skills include:

  • Jetson 101: Learning Edge AI Fundamentals (S32700)
  • Build Edge AI Projects with the Jetson Community (S32750)
  • Optimizing for Edge AI on Jetson (S32354)

Many GTC hands-on sessions are designed to help educators learn and teach AI, including: “Duckietown on NVIDIA Jetson: Hands-On AI in the Classroom” with ETH Zurich (S32637) and “Hands-On Deep Learning Robotics Curriculum in High Schools with Jetson Nano” with CAVEDU Education (S32702).

NVIDIA has also launched the Jetson Nano 2GB Developer Kit Grant Program with a goal to further democratize AI and robotics. The new program offers limited quantities of Jetson Developer Kits to professors, educators and trainers across the globe.

Asia’s Rising Star: VinAI Advances Innovation with Vietnam’s Most Powerful AI Supercomputer

A rising technology star in Southeast Asia just put a sparkle in its AI.

Vingroup, Vietnam’s largest conglomerate, is installing the most powerful AI supercomputer in the region. The NVIDIA DGX SuperPOD will power VinAI Research, Vingroup’s machine-learning lab, in global initiatives that span autonomous vehicles, healthcare and consumer services.

One of the lab’s most important missions is to develop the AI smarts for an upcoming fleet of autonomous electric cars from VinFast, the group’s automotive division, driving its way to global markets.

New Hub on the AI Map

It’s a world-class challenge for the team led by Hung Bui. As a top-tier AI researcher and alum of Google’s DeepMind unit with nearly 6,000 citations from more than 200 papers and the winner of an International Math Olympiad in his youth, he’s up for a heady challenge.

In barely two years, Hung’s built a team that now includes 200 researchers. Last year, as a warm-up, they published as many as 20 papers at top conferences, pushing the boundaries of AI while driving new capabilities into the sprawling group’s many products.

“By July, a fleet of cars will start sending us their data from operating 24/7 in real traffic conditions over millions of miles on roads in the U.S. and Europe, and that’s just the start — the volume of data will only increase,” said Hung.

The team will harness the data to design and refine at least a dozen AI models to enable level 3 autonomous driving capabilities for VinFast’s cars.

DGX SuperPOD Behind the Wheel

Hung foresees a need to retrain those models daily as new data arrives. He believes the DGX SuperPOD can accelerate the AI work of the NVIDIA DGX A100 system VinAI currently uses by at least 10x, letting engineers update their models every 24 hours.

“That’s the goal, it will save a lot of engineering time, but we will need a lot of help from NVIDIA,” said Hung, who hopes to have the new cluster of 20 DGX A100 systems, linked together by an NVIDIA Mellanox HDR 200Gb/s InfiniBand network, up and running in May.

Developing World-Class Talent

With a DGX SuperPOD in place, Hung hopes to attract and develop more world-class AI talent in Vietnam. It’s a goal shared widely at Vingroup.

In October, the company hosted a ceremony to mark the end of the initial year of studies for the first 260 students at its VinUniversity. Vietnam’s first private, nonprofit college — founded and funded by Vingroup — it so far offers programs in business, engineering, computer science and health sciences.

It’s a kind of beacon pointing to a better future, like Landmark 81 (pictured above), the 81-story skyscraper, the country’s largest, which the group built and operates on the banks of the Saigon River.

“AI technology is a way to move the company forward, and it can make a lot of impact on the lives of people in Vietnam,” he said, noting other group divisions use DGX systems to advance medical imaging and diagnosis.

VinAI researchers synch up on a project.

Making Life Better with AI

Hung has seen AI’s impact firsthand. His early work in the field at SRI International, in Silicon Valley, helped spawn the technology that powers the Siri assistant in Apple’s iPhone.

More recently, VinAI developed an AI model that lets users of VinSmart handsets unlock their phones using facial recognition — even if they’re wearing a COVID mask. At the same time, core AI researchers on his team developed Pho-BERT, a version for Vietnamese of the giant Transformer model used for natural-language processing.

It’s the kind of world-class work that two years ago Vingroup’s chairman and Vietnam’s first billionaire, Pham Nhat Vuong, wanted from VinAI Research. He personally convinced Hung to leave a position as research scientist in the DeepMind team and join Vingroup.

Navigating the AI Future

Last year, to help power its efforts, VinAI became the first company in Southeast Asia to install a DGX A100 system.

“We’ve been using the latest hardware and software from NVIDIA quite successfully in speech recognition, NLP and computer vision, and now we’re taking our work to the next level with a perception system for driving,” he said.

It’s a challenge Hung gets to gauge daily amid a rising tide of pedestrians, bicycles, scooters and cars on his way to his office in Hanoi.

“When I came back to Vietnam, I had to relearn how to drive here — the traffic conditions are very different from the U.S.,” he said.

“After a while I got the hang of it, but it got me thinking a machine probably will do an even better job — Vietnam’s driving conditions provide the ultimate challenge for systems trying to reach level 5 autonomy,” he added.

Universal Scene Description Key to Shared Metaverse, GTC Panelists Say 

Artists and engineers, architects, and automakers are coming together around a new standard — born in the digital animation industry — that promises to weave all our virtual worlds together.

That’s the conclusion of a group of panelists from a wide range of industries who gathered at NVIDIA GTC21 this week to talk about Pixar’s Universal Scene Description standard, or USD.

“You have people from automotive, advertising, engineering, gaming, and software and we’re all having this rich conversation about USD,” said Perry Nightingale, head of creative at WPP, one of the world’s largest advertising and communications companies. “We’re basically experiencing this live in our work.”

Born at Pixar

Conceived at Pixar more than a decade ago and released as open-source in 2016, USD provides a rich, common language for defining, packaging, assembling and editing 3D data for a growing array of industries and applications.

For more, see “Plumbing the Metaverse with USD” by Michael Kass

Most recently, the technology has been adopted by NVIDIA to build Omniverse — a platform that creates a real-time, shared 3D world to speed collaboration among far-flung workers and even train robots, so they can more safely and efficiently work alongside humans.

The panel — moderated by veteran gaming journalist Dean Takahashi — included Martha Tsigkari, a partner at architects Foster + Partners; Mattias Wikenmalm, a senior visualization expert at Volvo Cars; WPP’s Nightingale; Lori Hufford, vice president of applications integration at engineering software company Bentley Systems; Susanna Holt, vice president at 3D software company Autodesk; and Ivar Dahlberg, a technical artist with Stockholm-based gaming studio Embark Studios.

It also featured two of the engineers who helped create the USD standard at Pixar — F. Sebastian Grassia, project lead for USD at Pixar, and Guido Quaroni, now senior director of engineering of 3D and immersive at Adobe.

Joining them was NVIDIA Distinguished Engineer Michael Kass, who, along with NVIDIA’s Rev Lebaredian, helped lead the effort to build NVIDIA Omniverse.

A Sci-Fi Metaverse Come to Life

Omniverse was made to create and experience shared virtual 3D worlds, ones not unlike the science-fiction metaverse described by Neal Stephenson in his early 1990s novel “Snow Crash.” Of course, the full vision of the fictional metaverse remains in the future, but judging by the panel, it’s a future that’s rapidly approaching.

A central goal of Omniverse was to seamlessly connect together as many tools, applications and technologies as possible. To do this, Kass and Lebaredian knew they needed to represent the data using a powerful, expressive and battle-tested open standard. USD exactly fit the bill.

“The fact that you’ve built something so general and extensible that it addresses very nicely the needs of all the participants on this call — that’s an extraordinary achievement,” Kass told USD pioneers Grassia and Quaroni.

One of NVIDIA’s key additions to the USD ecosystem is a replication system. An application programmer can use the standard USD API to query a scene and alter it at will. With no special effort on the part of the programmer, the system keeps track of everything that changes.

In real time, the changes can be published to NVIDIA’s Omniverse Nucleus server, which sends them along to all subscribers. As a result, different teams in different places using different tools can work together and see each other’s changes without noticeable delay.
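
To make that concrete, here is a minimal, hypothetical sketch of what "query a scene and alter it" looks like with the open-source USD Python API (the pxr module). The file name and prim path are made up for illustration, and Omniverse's change tracking and Nucleus publishing are not shown.

```python
# Minimal sketch: query and alter a USD stage with the standard USD Python API.
# The file name and prim path are hypothetical. In Omniverse, edits like these
# are tracked automatically and published to the Nucleus server for subscribers.
from pxr import Usd, UsdGeom, Gf

stage = Usd.Stage.Open("factory_scene.usd")

# Query the scene: collect every mesh prim on the stage.
meshes = [prim for prim in stage.Traverse() if prim.IsA(UsdGeom.Mesh)]
print(f"Scene contains {len(meshes)} meshes")

# Alter the scene: translate one prim.
robot = UsdGeom.Xformable(stage.GetPrimAtPath("/World/Robot"))
robot.AddTranslateOp().Set(Gf.Vec3d(1.0, 0.0, 0.5))

stage.GetRootLayer().Save()
```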

That technology has become invaluable in architecture, engineering and construction, where large teams from many different disciplines can now collaborate far more easily.

“You need a way for the creative people to do things that can be passed directly to the engineers and consultants in a seamless way,” Tsigkari said. “The structural engineer doesn’t care about my windows, doesn’t care about my doors.”

USD allows the structural engineer to see what they do care about.

USD and NVIDIA Omniverse provide a way to link a wide variety of specialized tools — for creatives, engineers and others — in real time.

“We do see the different industries converging and that’s not going to work if they can’t talk to one another,” said Autodesk’s Holt.

One valuable application is the ability to create product mockups in real time. For too long, Nightingale said, creative teams would have to present clients with 2D mockups of their designs because the tools used by the design teams were incompatible with those of the marketing team. Now those mockups can be in 3D and updated instantly as the design team makes changes.

Virtual Worlds Where AI, Robots and Autonomous Vehicles Can Learn 

Capabilities like these aren’t just critical for humans. USD also promises to be the foundation for virtual worlds where new products can be simulated and rigorously tested.

USD and Omniverse are at the center of NVIDIA’s DRIVE simulation platform, Kass explained, which gives automakers a sandbox where they can test new autonomous vehicles. Nothing should go out into the real world until it’s thoroughly tested in simulation, he said.

“We want all of our mistakes to happen in the virtual world, and we based that entire virtual world on USD,” Kass said.

There’s also potential for technologies like USD to allow participants in the kind of virtual worlds game makers are building to play a larger role in shaping those worlds in real time.

“One of the interesting things we’re seeing is how players can be part of creating a world,” Dahlberg said.

“Now there are a lot more opportunities where you create something together with the inhabitants of that world,” he added.

The first steps, however, have already been taken — thanks to USD — making it easier to exchange data about shared 3D worlds.

“If we can actually get that out of the way, when that’s easy to do, we can start building a proper metaverse,” Volvo’s Wikenmalm said.

For more, see “Plumbing the Metaverse with USD,” by Michael Kass

NVIDIA Unveils 50+ New, Updated AI Tools and Trainings for Developers

To help developers hone their craft, NVIDIA this week introduced more than 50 new and updated tools and training materials for data scientists, researchers, students and developers of all kinds.

The offerings range from software development kits for conversational AI and ray tracing, to hands-on courses from the NVIDIA Deep Learning Institute (DLI).

They’re available to all members of the NVIDIA Developer Program, a free-to-join global community of over 2.5 million technology innovators who are revolutionizing industries through accelerated computing.

Training for Success

Learning new and advanced software development skills is vital to staying ahead in a competitive job market. DLI offers a comprehensive learning experience on a wide range of important topics in AI, data science and accelerated computing. Courses include hands-on exercises and are available in both self-paced and instructor-led formats.

The five courses cover topics such as deep learning, data science, autonomous driving and conversational AI. All include hands-on exercises that accelerate learning and mastery of the material. DLI workshops are led by NVIDIA-certified instructors and include access to fully configured GPU-accelerated servers in the cloud for each participant.

New self-paced courses, which are available now:

New full-day, instructor-led workshops for live virtual classroom delivery (coming soon):

These instructor-led workshops will be available to enterprise customers and the general public. DLI recently launched public workshops for its popular instructor-led courses, increasing accessibility to individual developers, data scientists, researchers and students.

To extend training further, DLI is releasing a new book, “Learning Deep Learning,” that provides a complete guide to deep learning theory and practical applications. Authored by NVIDIA Engineer Magnus Ekman, it explores how deep neural networks are applied to solve complex and challenging problems. Pre-orders are available now through Amazon.

New and Accelerated SDKs, Plus Updated Technical Tools

SDKs are a key component that can make or break an application’s performance. Dozens of new and updated kits for high performance computing, computer vision, data science, conversational AI, recommender systems and real-time graphics are available so developers can meet virtually any challenge. Updated tools are also in place to help developers accelerate application development.

Updated tools available now:

  • NGC is a GPU-optimized hub for AI and HPC software with a catalog of hundreds of SDKs, AI, ML and HPC containers, pre-trained models and Helm charts that simplify and accelerate workflows from end to end. Pre-trained models help developers jump-start their AI projects for a variety of use cases, including computer vision and speech.

New SDK (coming soon):

  • TAO (Train, Adapt, Optimize) is a GUI-based, workflow-driven framework that simplifies and accelerates the creation of enterprise AI applications and services. Enterprises can fine-tune pre-trained models using transfer learning or federated learning to produce domain specific models in hours rather than months, eliminating the need for large training runs and deep AI expertise. Learn more about TAO.

New and updated SDKs and frameworks available now:

  • Jarvis, a fully accelerated application framework for building multimodal conversational AI services. It includes state-of-the-art models pre-trained for thousands of hours on NVIDIA DGX systems, the Transfer Learning Toolkit for adapting those models to domains with zero coding, and optimized end-to-end speech, vision and language pipelines that run in real time. Learn more.
  • Maxine, a GPU-accelerated SDK with state-of-the-art AI features for developers to build virtual collaboration and content creation applications such as video conferencing and live streaming. Maxine’s AI SDKs — video effects, audio effects and augmented reality — are highly optimized and include modular features that can be chained into end-to-end pipelines to deliver the highest performance possible on GPUs, both on PCs and in data centers. Learn more.
  • Merlin, an application framework, currently in open beta, enables the development of deep learning recommender systems — from data preprocessing to model training and inference — all accelerated on NVIDIA GPUs. Read more about Merlin.
  • DeepStream, an AI streaming analytics toolkit for building high-performance, low-latency, complex video analytics apps and services.
  • Triton Inference Server, which lets teams deploy trained AI models from any framework, from local storage or cloud platform on any GPU- or CPU-based infrastructure.
  • TensorRT, for high-performance deep learning inference, includes an optimizer and runtime that deliver low latency and high throughput for inference applications. TensorRT 8 is 2x faster for transformer-based models and adds new techniques that achieve accuracy similar to FP32 while using high-performance INT8 precision.
  • RTX technology, which helps developers bring realism to their games:
    • DLSS is a deep learning neural network that helps graphics developers boost frame rates and generate beautiful, sharp images for their projects. It provides performance headroom to maximize ray-tracing settings and increase output resolution. Unity has announced that DLSS will be natively supported in Unity Engine 2021.2.
    • RTX Direct Illumination (RTXDI) makes it possible to render, in real time, scenes with millions of dynamic lights without worrying about performance or resource constraints.
    • RTX Global Illumination (RTXGI) leverages the power of ray tracing to scalably compute multi-bounce indirect lighting without bake times, light leaks or expensive per-frame costs.
    • Real-Time Denoisers (NRD) is a spatio-temporal API-agnostic denoising library that’s designed to work with low ray-per-pixel signals.

Joining the NVIDIA Developer Program is easy; check it out today.

AI and 5G to Fuel Next Wave of IoT Services, Says GTC Panel of Telecom Experts

The rollout of 5G for edge AI services promises to fuel a magic carpet ride into the future for everything from autonomous vehicles to supply chains and education.

That was a key takeaway from a panel of five 5G experts speaking at NVIDIA’s GPU Technology Conference this week.

With speeds up to 10x those of 4G, 5G will bring game-changing features to cellular networks, such as low latency, improved reliability and built-in security. It will also radically improve AI services, from online gaming to those provided by autonomous vehicles and logistics robots. In addition, AI on 5G could help deliver services like online learning and micro banking to remote regions of the developing world today.

Executives from Verizon, Wind River, Mavenir, Google and NVIDIA shared their views on the wide-ranging impact 5G will have on edge AI services. And if just half of their predictions appear within the next decade, the future promises exciting times.

Enhance Human Experience

The next generation of applications is going to enhance the human experience and create new opportunities, said Ganesh Harinath, VP and CTO of 5G MEC and AI platforms at Verizon. But he said the networking requirements for the future call for edge computing.

“The inferencing aspect of machine learning has to be moved closer and closer to where the signals are generated,” said Harinath.

Propel Digital World

Nermin Mohamed, head of telco solutions at embedded systems software provider Wind River, said that 5G, AI and edge computing are “the three magic words that will propel the digital connected world.”

She said that companies are looking at 5G as an accelerator for their revenue and that the rollout of 5G grew four times faster than 4G over the past 18 months.

Bridge Digital Divide

The availability of 5G will usher in digital services to remote places, bridging the digital divide, said Pardeep Kohli, president and CEO of telecom software company Mavenir.

With 5G “you can have low latency and a good experience where this type of connectivity can be used for having an education” where it might otherwise not be available, said Kohli.

Reshape Telecom, Edge

Open ecosystems are key to encouraging developers to build applications, said Shailesh Shukla, vice president and general manager for Networking and Telecom at Google Cloud.

“With the advent of 5G and AI, there is an opportunity now to reshape the broader telecom infrastructure and the edge industry by doing something very similar to what was done with Google and Android,” Shukla said.

‘Heady Mix Ahead’

A lot of the applications — autonomous vehicles, augmented and virtual reality — have been restrained by network limitations, said Ronnie Vasishta, NVIDIA senior vice president for Telecoms. NVIDIA has been investing in GPU and DPU platforms for accelerated compute to support the ecosystem of edge AI applications and telecom partners, he said.

“Sometimes we underestimate the impact that 5G will have on our lives,” he said. “We’re really in for a heady mix ahead of us with the combination of AI and 5G.”

The panel discussion, “Is AI at the Edge the Killer App for 5G?,” is available for replay. 

EV Technology Goes into Hyperdrive with Mercedes-Benz EQS

Mercedes-Benz is calling on its long heritage of luxury to accelerate electric vehicle technology with the new EQS sedan.

The premium automaker lifted the wraps off the long-awaited flagship EV during a digital event today. The focal point of the revolutionary vehicle is the MBUX Hyperscreen, a truly intuitive and personalized AI cockpit, powered by NVIDIA.

The EQS is the first Mercedes-Benz to feature the “one bow” design, resembling a high-speed bullet train to increase efficiency as well as provide a quiet, comfortable interior experience.

The cabin is further transformed by the MBUX Hyperscreen — a single, 55-inch surface extending from the cockpit to the passenger seat. It delivers both safety and convenience by displaying all necessary functions at once.

Like the MBUX system recently unveiled with the new Mercedes-Benz S-Class, this extended-screen system runs on the high-performance, energy-efficient NVIDIA DRIVE platform for instantaneous AI processing and sharp graphics.

“The EQS is high tech in a true luxury shell,” said Ola Källenius, chairman of the Mercedes-Benz Board of Management.

With NVIDIA’s high-performance, energy-efficient compute, Mercedes-Benz was able to consolidate the varied and distributed cockpit components into one AI platform — with three separate screens under one glass surface — to simplify the architecture while creating more space to add new features.

Intelligence in Many Flavors

The MBUX Hyperscreen makes it easy to focus on the road ahead, yet delivers beautiful graphics for when attention to driving isn’t necessary.

Leveraging a “zero layer” design concept, the display features 90 percent of functions drivers and passengers need right on the surface, reducing the driver’s reliance on buttons or voice commands. An augmented reality heads-up display provides clear, 3D, turn-by-turn navigation, keeping drivers focused.

The deep neural networks powering the system process datasets such as vehicle position, cabin temperature and time of day to prioritize certain features — like entertainment or points of interest recommendations — while always keeping navigation at the center of the display.

The EQS will be capable of level 3 automated driving with Mercedes-Benz DRIVE PILOT. For times when the driver’s attention doesn’t need to be on the road, the MBUX Hyperscreen provides crystal-clear graphics as well as an intelligent voice assistant for the utmost convenience.

The map feature allows drivers to view their route in 3D, down to the tiniest detail. It can also factor battery capacity, weather conditions and topography into route planning, suggesting charging points along the way if needed. Front-seat passengers also get a dedicated screen for entertainment and ride information that doesn’t interfere with the driver’s display. It also enables the front seat passenger to share content with others in the car.

“The MBUX Hyperscreen surprises with intelligence in many flavors,” said Sajjad Khan, executive vice president at Mercedes-Benz.

And with high-performance NVIDIA compute at MBUX Hyperscreen’s core, users can seamlessly experience these flavors, toggling between features without experiencing any lag or delay.

Ahead of the Curve

Equipped with the most powerful battery in the industry, providing an estimated 478 miles of range and 516 horsepower, the EQS was designed to lead its class in every metric.

The sedan’s sleek design optimizes aerodynamics for lightning-quick acceleration — it can bolt from 0 to 60 mph in 4 seconds — while maintaining battery efficiency and reducing cabin noise.

Taking cues from its internal combustion engine sibling, the Mercedes-Benz S-Class, the EQS boasts the largest interior of any electric sedan on the market. The vehicle can recognize the driver using either facial recognition or a fingerprint scanner, and adjust seating and climate settings to personal preferences. It also features customizable ambient lighting to suit any mood.

The EQS is slated to arrive at U.S. dealerships this summer, ushering in a new generation of intelligent, electric luxury vehicles.

Knight Rider Rides a GAN: Bringing KITT to Life with AI, NVIDIA Omniverse

Fasten your seatbelts. NVIDIA Research is revving up a new deep learning engine that creates 3D object models from standard 2D images — and can bring iconic cars like the Knight Rider’s AI-powered KITT to life — in NVIDIA Omniverse.

Developed by the NVIDIA AI Research Lab in Toronto, the GANverse3D application inflates flat images into realistic 3D models that can be visualized and controlled in virtual environments. This capability could help architects, creators, game developers and designers easily add new objects to their mockups without needing expertise in 3D modeling, or a large budget to spend on renderings.

A single photo of a car, for example, could be turned into a 3D model that can drive around a virtual scene, complete with realistic headlights, tail lights and blinkers.

To generate a dataset for training, the researchers harnessed a generative adversarial network, or GAN, to synthesize images depicting the same object from multiple viewpoints — like a photographer who walks around a parked vehicle, taking shots from different angles. These multi-view images were plugged into a rendering framework for inverse graphics, the process of inferring 3D mesh models from 2D images.

Once trained on multi-view images, GANverse3D needs only a single 2D image to predict a 3D mesh model. This model can be used with a 3D neural renderer that gives developers control to customize objects and swap out backgrounds.

When imported as an extension in the NVIDIA Omniverse platform and run on NVIDIA RTX GPUs, GANverse3D can be used to recreate any 2D image into 3D — like the beloved crime-fighting car KITT, from the popular 1980s Knight Rider TV show.

Previous models for inverse graphics have relied on 3D shapes as training data.

Instead, with no aid from 3D assets, “We turned a GAN model into a very efficient data generator so we can create 3D objects from any 2D image on the web,” said Wenzheng Chen, research scientist at NVIDIA and lead author on the project.

“Because we trained on real images instead of the typical pipeline, which relies on synthetic data, the AI model generalizes better to real-world applications,” said NVIDIA researcher Jun Gao, an author on the project.

The research behind GANverse3D will be presented at two upcoming conferences: the International Conference on Learning Representations in May, and the Conference on Computer Vision and Pattern Recognition, in June.

From Flat Tire to Racing KITT 

Creators in gaming, architecture and design rely on virtual environments like the NVIDIA Omniverse simulation and collaboration platform to test out new ideas and visualize prototypes before creating their final products. With Omniverse Connectors, developers can use their preferred 3D applications in Omniverse to simulate complex virtual worlds with real-time ray tracing.

But not every creator has the time and resources to create 3D models of every object they sketch. The cost of capturing the number of multi-view images necessary to render a showroom’s worth of cars, or a street’s worth of buildings, can be prohibitive.

That’s where a trained GANverse3D application can be used to convert standard images of a car, a building or even a horse into a 3D figure that can be customized and animated in Omniverse.

To recreate KITT, the researchers simply fed the trained model an image of the car, letting GANverse3D predict a corresponding 3D textured mesh, as well as different parts of the vehicle such as wheels and headlights. They then used NVIDIA Omniverse Kit and NVIDIA PhysX tools to convert the predicted texture into high-quality materials that give KITT a more realistic look and feel, and placed it in a dynamic driving sequence.

“Omniverse allows researchers to bring exciting, cutting-edge research directly to creators and end users,” said Jean-Francois Lafleche, deep learning engineer at NVIDIA. “Offering GANverse3D as an extension in Omniverse will help artists create richer virtual worlds for game development, city planning or even training new machine learning models.”

GANs Power a Dimensional Shift

Because real-world datasets that capture the same object from different angles are rare, most AI tools that convert images from 2D to 3D are trained using synthetic 3D datasets like ShapeNet.

To obtain multi-view images from real-world data — like images of cars available publicly on the web — the NVIDIA researchers instead turned to a GAN model, manipulating its neural network layers to turn it into a data generator.

The team found that opening the first four layers of the neural network and freezing the remaining 12 caused the GAN to render images of the same object from different viewpoints.

Keeping the first four layers frozen and the other 12 layers variable caused the neural network to generate different images from the same viewpoint. By manually assigning standard viewpoints, with vehicles pictured at a specific elevation and camera distance, the researchers could rapidly generate a multi-view dataset from individual 2D images.
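
One way to picture this per-layer control is as style mixing in a StyleGAN-like generator. The sketch below is illustrative only and assumes a hypothetical 16-layer generator G with 512-dimensional per-layer codes, not the actual GANverse3D implementation: the codes feeding the first four layers are varied to change the viewpoint, while the codes for the remaining 12 layers are held fixed so the object identity stays the same.

```python
# Illustrative sketch (not the GANverse3D code): build per-layer latent codes for
# a StyleGAN-like generator so the first 4 layers vary (viewpoint) while the
# remaining 12 stay fixed (object identity).
import torch

NUM_LAYERS, LATENT_DIM, NUM_VIEWS = 16, 512, 8

def multi_view_codes(identity_code: torch.Tensor, view_codes: torch.Tensor) -> torch.Tensor:
    """Return [NUM_VIEWS, NUM_LAYERS, LATENT_DIM] codes: same object, new viewpoints."""
    codes = identity_code.expand(NUM_VIEWS, -1, -1).clone()  # copy the identity to every view
    codes[:, :4, :] = view_codes[:, :4, :]                   # swap in viewpoint-controlling layers
    return codes

identity_code = torch.randn(1, NUM_LAYERS, LATENT_DIM)       # one car identity
view_codes = torch.randn(NUM_VIEWS, NUM_LAYERS, LATENT_DIM)  # candidate viewpoints

codes = multi_view_codes(identity_code, view_codes)
# images = G.synthesis(codes)  # hypothetical generator call: one rendering per viewpoint
```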

The final model, trained on 55,000 car images generated by the GAN, outperformed an inverse graphics network trained on the popular Pascal3D dataset.

Read the full ICLR paper, authored by Wenzheng Chen, fellow NVIDIA researchers Jun Gao and Huan Ling, Sanja Fidler, director of NVIDIA’s Toronto research lab, University of Waterloo student Yuxuan Zhang, Stanford student Yinan Zhang and MIT professor Antonio Torralba. Additional collaborators on the CVPR paper include Jean-Francois Lafleche, NVIDIA researcher Kangxue Yin and Adela Barriuso.

The NVIDIA Research team consists of more than 200 scientists around the globe, focusing on areas such as AI, computer vision, self-driving cars, robotics and graphics. Learn more about the company’s latest research and industry breakthroughs in NVIDIA CEO Jensen Huang’s keynote address at this week’s GPU Technology Conference.

GTC registration is free, and open through April 23. Attendees will have access to on-demand content through May 11.

Knight Rider content courtesy of Universal Studios Licensing LLC. 

NVIDIA RTX Lights Up the Night in Stunning Demos at GTC

NVIDIA is putting complex night scenes in a good light.

A demo at GTC21 this week showcased how NVIDIA RTX Direct Illumination (RTXDI) technology is paving the way for realistic lighting in graphics. The clip shows thousands of dynamic lights as they move, turn on and off, change color, show reflections and cast shadows.

People can also experience the latest technologies in graphics with the new RTX Technology Showcase, a playable demo that allows developers to explore an attic scene and interact with elements while seeing the visual impact of real-time ray tracing.

Hero Lighting Gets a Boost with RTXDI

Running on an NVIDIA GeForce RTX 3090 GPU, the RTXDI demo shows how dynamic, animated lights can be rendered in real time.

Creating realistic night scenes in computer graphics requires lights to be simulated all at once. RTXDI does this by allowing developers and artists to create cinematic visuals with realistic lighting, incredible reflections and accurate shadows through real-time ray tracing.

Traditionally, creating realistic lighting required complex baking solutions and was limited to a small number of “hero” lights. RTXDI removes such barriers by combining ray tracing with a sampling algorithm called spatio-temporal importance resampling (ReSTIR) to create realistic dynamic lighting.

Developers and artists can now easily integrate animated and color-changing lights into their scenes, without baking or relying on just a handful of hero lights.

Based on NVIDIA research, RTXDI enables direct lighting from millions of moving light sources, without requiring any complex data structures to be built. From fireworks in the sky to billboards in New York Times Square, all of that complex lighting can now be captured in real time with RTXDI.
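
At the core of ReSTIR is weighted reservoir sampling: each pixel streams over candidate lights and keeps a single sample chosen in proportion to its estimated contribution, so millions of lights never need to be stored or sorted per pixel. The sketch below shows only that selection step with a made-up weighting function; it is not RTXDI code, and it omits the spatial and temporal reuse that gives ReSTIR its name.

```python
# Illustrative sketch of single-slot weighted reservoir sampling, the core
# selection step in ReSTIR-style direct lighting (not the RTXDI implementation).
import random

def reservoir_sample_light(lights, weight_fn):
    """Pick one light from a stream with probability proportional to weight_fn(light)."""
    chosen, total_weight = None, 0.0
    for light in lights:
        w = weight_fn(light)
        total_weight += w
        # Keep the new candidate with probability w / total_weight; over the whole
        # stream, each light ends up selected in proportion to its weight.
        if total_weight > 0.0 and random.random() < w / total_weight:
            chosen = light
    return chosen, total_weight

# Hypothetical usage: weight each light by intensity falling off with distance.
lights = [{"x": float(i), "intensity": 5.0} for i in range(1000)]
pick, _ = reservoir_sample_light(lights, lambda L: L["intensity"] / (1.0 + L["x"] ** 2))
```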

And RTXDI works even better when combined with other NVIDIA rendering technologies.

Learn more and check out RTXDI, which is now available.

Hit the Light Spots in RTX Technology Showcase

The RTX Technology Showcase features discrete ray-tracing capabilities, so users can choose to turn on specific technologies and immediately view their effects within the attic scene.

Watch the RTX Technology Showcase in action:

Developers can download the demo to discover the latest and greatest in ray-tracing innovations with RTX Technology Showcase.

Check out other GTC demos that highlight the latest technologies in graphics, and a full track for game developers here. Watch a replay of the GTC keynote address by NVIDIA CEO Jensen Huang to catch up on the latest graphics announcements.
