Out of This World Graphics: ‘Gods of Mars’ Come Alive with NVIDIA RTX Real-Time Rendering

The journey to making the upcoming film Gods of Mars changed course dramatically once real-time rendering entered the picture.

The movie, currently in production, features a mix of cinematic visual effects and live-action elements. The film crew had planned to make the movie primarily with physical miniature models, but switched gears once they experienced the power of real-time NVIDIA RTX graphics and Unreal Engine.

Director Peter Hyoguchi and producer Joan Webb used an Epic MegaGrant from Epic Games to bring together VFX professionals and game developers to create the film. The virtual production started with scanning the miniature models and animating them in Unreal Engine.

“I’ve been working as a CGI and VFX supervisor for 20 years, and I never wanna go back to older workflows,” said Hyoguchi. “This is a total pivot point for the next 100 years of cinema — everyone is going to use this technology for their effects.”

Hyoguchi and his team produced photorealistic worlds in 4K, creating rich, intergalactic scenes using a combination of NVIDIA Quadro RTX 6000 GPU-powered Lenovo ThinkStation P920 workstations, ASUS ProArt Display PA32UCX-P monitors, Blackmagic Design cameras and DaVinci Resolve, and the Wacom Cintiq Pro 24.

Stepping Outside the Ozone: Technology Makes Way for More Creativity

Gods of Mars tells the tale of a fighter pilot who leads a team against rebels in a battle on Mars. The live-action elements of the film are supported by LED walls displaying real-time graphics rendered in Unreal Engine. Actors are filmed on set, with a virtual background projected behind them.

To keep the set minimal, the team only builds what actors will physically interact with, and then uses the projected environment from Unreal Engine for the rest of the scenes.

One big advantage of working with digital environments and assets is real-time lighting. Previously, when working with CGI, Hyoguchi and his team would previsualize everything in a grayscale environment, then wait hours for a single frame to render before seeing a preview of what an image or scene would look like.

With Unreal Engine, Hyoguchi sees scenes rendered with ray-traced lights, shadows and colors immediately. He can move around the environment and see how everything will look in the scene, saving weeks of pre-planning.

Real-time rendering also saves money and resources. Hyoguchi doesn’t need to spend thousands of dollars for render farms, or wait weeks for one shot to complete rendering. The RTX-powered ThinkStation P920 renders everything in real time, which leads to more iterations, making way for a much more efficient, flexible and faster creative workflow.

“Ray tracing is what makes this movie possible,” said Hyoguchi. “With NVIDIA RTX and the ability to do real-time ray tracing, we can make a movie with low cost and less people, and yet I still have the flexibility to make more creative choices than I’ve ever had in my life.”

Hyoguchi and his team are shooting the film with Blackmagic Design’s new URSA Mini Pro 12K camera. Capturing such high-resolution footage provides more options in post-production. They can crop images or zoom in for a close-up shot of an actor without worrying about losing resolution.

They can also color and edit scenes in real time using Blackmagic DaVinci Resolve Studio, which uses NVIDIA GPUs to accelerate editing workflows. With the 32-inch ASUS ProArt Display PA32UCX-P monitors, the team calibrated their screens so all the artists can see the same rendered color and details, even while working in different locations across the country.

The Wacom Cintiq Pro 24 pen displays speed up the 3D artists’ workflow and provide a natural connection between artist and Unreal editor, both when moving scene elements around to build the 3D environment and when keyframing actors for animation.

Learn more about Gods of Mars and NVIDIA RTX.

New Year, New Energy: Leading EV Makers Kick Off 2021 with NVIDIA DRIVE

Electric vehicle upstarts have gained a foothold in the industry and are using NVIDIA DRIVE to keep that momentum going.

Nowhere is the trend of electric vehicles more apparent than in China, the world’s largest automotive market, where electric vehicle startups have exploded in popularity. NIO, Li Auto and Xpeng are bolstering the initial growth in new energy vehicles with models that push the limits of everyday driving with extended battery range and AI-powered features.

All three companies doubled their sales in 2020, with a combined volume of more than 103,000 vehicles.

Along with more efficient powertrains, these fleets are also introducing new and intelligent features to daily commutes with NVIDIA DRIVE.

NIO Unveils a Supercharged Compute Platform

Last week, NIO announced a supercomputer to power its automated and autonomous driving features, with NVIDIA DRIVE Orin at its core.

The computer, known as Adam, achieves over 1,000 trillion operations per second (TOPS) of performance with the redundancy and diversity necessary for safe autonomous driving. It also enables personalization in the vehicle, learning from individual driving habits and preferences while continuously improving from fleet data.

The Orin-powered supercomputer will debut in the flagship ET7 sedan, scheduled for production in 2022, and will be in every NIO model to follow.

The NIO ET7, powered by NVIDIA DRIVE Orin.

The ET7 leapfrogs current model capabilities, with more than 600 miles of battery range and advanced autonomous driving. As the first vehicle equipped with Adam, the EV can perform point-to-point autonomy, leveraging 33 sensors and high-performance compute to continuously expand the domains in which it operates  — from urban to highway driving to battery swap stations.

With this centralized, software-defined computing architecture, NIO’s future fleet of EVs will feature the latest AI-enabled capabilities designed to make its vehicles perpetually upgradable.

Li Auto Powers Ahead

In September, standout EV maker Li Auto said it would develop its next generation of electric vehicles using NVIDIA DRIVE AGX Orin.

These new vehicles are being developed in collaboration with tier 1 supplier Desay SV and will offer advanced autonomous driving capabilities, as well as extended battery range, for truly intelligent mobility.

This high-performance platform will enable Li Auto to deploy an independent, advanced autonomous driving system with its next-generation fleet.

The automaker began rolling out its first vehicle, the Li Auto One SUV, in November 2019. Since then, sales have skyrocketed, with a 530 percent increase in volume in December, year-over-year, and a total of 32,624 vehicles in 2020.

The Li Auto One

Li Auto plans to continue this momentum with its upcoming models, packed with even more intelligent features enabled by NVIDIA DRIVE.

Cruising on Xpeng XPilot

Xpeng has been building on NVIDIA DRIVE since 2018, developing a Level 3 autopilot system in collaboration with Desay SV.

The technology debuted last April with the Xpeng P7, an all-electric sports sedan developed from the ground up for an intelligent driving future.

The Xpeng P7

The XPilot 3.0 Level 3 autonomous driving system leverages NVIDIA DRIVE AGX Xavier, as well as a redundant and diverse halo of sensors, for automated highway driving and valet parking. XPilot was born in the data center, with NVIDIA’s AI infrastructure for training and testing self-driving deep neural networks.

With high-performance data center GPUs and advanced AI learning tools, this scalable infrastructure allows developers to manage massive amounts of data and train autonomous driving DNNs.

The burgeoning EV market is driving the next decade of personal transportation. And with NVIDIA DRIVE at the core, these vehicles have the intelligence and performance to go the distance.

Adam and EV: NIO Selects NVIDIA for Intelligent, Electric Vehicles

Chinese electric automaker NIO will use NVIDIA DRIVE for advanced automated driving technology in its future fleets, marking the genesis of truly intelligent and personalized NIO vehicles.

During a global reveal event, the EV maker took the wraps off its latest ET7 sedan, which starts shipping in 2022 and features a new NVIDIA-powered supercomputer, called Adam, that uses NVIDIA DRIVE Orin to deploy advanced automated driving technology.

“The cooperation between NIO and NVIDIA will accelerate the development of autonomous driving on smart vehicles,” said NIO CEO William Li. “NIO’s in-house developed autonomous driving algorithms will be running on four industry-leading NVIDIA Orin processors, delivering an unprecedented 1,000+ trillion operations per second in production cars.”

The announcement marks a major step toward the widespread adoption of intelligent, high-performance electric vehicles, improving standards for both the environment and road users.

NIO has been a pioneer in China’s premium smart electric vehicle market. Since 2014, the automaker has been leveraging NVIDIA for its seamless infotainment experience. And now, with NVIDIA DRIVE powering automated driving features in its future vehicles, NIO is set to redefine mobility with continuous improvement and personalization.

“Autonomy and electrification are the key forces transforming the automotive industry,” said Jensen Huang, NVIDIA founder and CEO. “We are delighted to partner with NIO, a leader in the new energy vehicle revolution—leveraging the power of AI to create the software-defined EV fleets of the future.”

An Intelligent Creation

Software-defined and intelligent vehicles require a centralized, high-performance compute architecture to power AI features and continuously receive upgrades over the air.

The new NIO Adam supercomputer is one of the most powerful compute platforms ever deployed in a vehicle. With four NVIDIA DRIVE Orin processors, Adam achieves more than 1,000 TOPS of performance.

Orin is the world’s highest-performance, most advanced AV and robotics processor. This supercomputer on a chip delivers up to 254 TOPS (four of them together account for Adam’s 1,000-plus TOPS), handling the large number of applications and deep neural networks that run simultaneously in autonomous vehicles and robots while achieving systematic safety standards such as ISO 26262 ASIL-D.

By using multiple SoCs, Adam integrates the redundancy and diversity necessary for safe autonomous operation. The first two SoCs process the 8 gigabytes of data produced by the vehicle’s sensor set every second. The third Orin serves as a backup to ensure the system can still operate safely in any situation, while the fourth enables local training, improving the vehicle with fleet learning as well as personalizing the driving experience based on individual user preferences.

With high-performance compute at its core, Adam is a major achievement in the creation of automotive intelligence and autonomous driving.

Meet the ET7

NIO took the wraps off its much-anticipated ET7 sedan — the production version of its original EVE concept first shown in 2017.

The flagship vehicle leapfrogs current model capabilities, with more than 600 miles of battery range and advanced autonomous driving. As the first vehicle equipped with Adam, the ET7 can perform point-to-point autonomy, leveraging 33 sensors and high-performance compute to continuously expand the domains in which it operates  — from urban to highway driving to battery swap stations.

The intelligent sedan ensures a seamless experience from the moment the driver approaches the car. With a highly accurate digital key and soft-closing doors, users can open the car with a gentle touch. Enhanced driver monitoring and voice recognition enable easy interaction with the vehicle. And sensors on the bottom of the ET7 detect the road surface so the vehicle can automatically adjust the suspension for a smoother ride.

With AI now at the center of the NIO driving experience, the ET7 and upcoming NVIDIA-powered models are heralding the new generation of intelligent transportation.

Mercedes-Benz Transforms Vehicle Cockpit with NVIDIA-Powered AI

The AI cockpit has reached galactic proportions with the new Mercedes-Benz MBUX Hyperscreen.

During a digital event, the luxury automaker unveiled the newest iteration of its intelligent infotainment system — a single surface extending from the cockpit to the passenger seat displaying all necessary functions at once. Dubbed the MBUX Hyperscreen, the system is powered by NVIDIA technology and shows how AI can create a truly intuitive and personalized experience for both the driver and passengers.

“The MBUX Hyperscreen reinvents how we interact with the car,” said Sajjad Khan, executive vice president at Mercedes-Benz. “It’s the nerve center that connects everyone in the car with the world.”

Like the MBUX system recently unveiled with the new Mercedes-Benz S-Class, this extended-screen system runs on high-performance, energy-efficient NVIDIA GPUs for instantaneous AI processing and sharp graphics.

A vehicle’s cockpit typically requires a collection of electronic control units and switches to perform basic functions, such as powering entertainment or adjusting the temperature. Using NVIDIA technology, Mercedes-Benz consolidated these components into one AI platform — with three separate screens under one glass surface — to simplify the architecture while creating more space to add new features.

“Zero Layer” User Interface

The driving principle behind the MBUX Hyperscreen is that of the “zero layer” — every necessary driving feature is delivered with a single touch.

However, developing the largest screen ever mounted in a series-built Mercedes-Benz was not enough to achieve this groundbreaking capability. The automaker also leveraged AI to promote commonly used features at relevant times while pushing those not needed to the background.

The deep neural networks powering the system process datasets such as vehicle position, cabin temperature and time of day to prioritize certain features — like entertainment or points of interest recommendations — while always keeping navigation at the center of the display.
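
In production these decisions come from deep neural networks, but the behavior itself, ranking features by context while navigation stays pinned to the center, can be illustrated with a toy, hand-written sketch. Every signal, weight and feature name below is a hypothetical placeholder, not Mercedes-Benz code:

```python
# Toy illustration of context-based feature ranking (hypothetical, hand-tuned
# rules standing in for the DNNs the production MBUX system uses).
from dataclasses import dataclass

@dataclass
class Context:
    hour: int            # time of day, 0-23
    cabin_temp_c: float  # cabin temperature in Celsius
    near_poi: bool       # close to a known point of interest

def rank_features(ctx: Context) -> list[str]:
    """Return cockpit tiles in display order; navigation always leads."""
    scores = {
        "entertainment": 1.0 if 18 <= ctx.hour <= 23 else 0.2,
        "poi_suggestions": 1.0 if ctx.near_poi else 0.1,
        "climate": 1.0 if not 19 <= ctx.cabin_temp_c <= 23 else 0.3,
    }
    ranked = sorted(scores, key=scores.get, reverse=True)
    return ["navigation"] + ranked  # "zero layer": top needs surface first

print(rank_features(Context(hour=20, cabin_temp_c=26.0, near_poi=True)))
# -> ['navigation', 'entertainment', 'poi_suggestions', 'climate']
```

Swap the hand-tuned scores for DNN outputs trained on far richer signals and you have the gist of the zero-layer behavior.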

“The system always knows what you want and need based on emotional intelligence,” Khan explained.

And these features aren’t just for the driver. Front-seat passengers get a dedicated screen for entertainment and ride information that doesn’t interfere with the driver’s display. It also enables the front seat passenger to share content with others in the car.

Experience Intelligence

This revolutionary AI cockpit experience isn’t a mere concept — it’s real technology that will be available in production vehicles this year.

The MBUX Hyperscreen will debut with the all-electric Mercedes-Benz EQS, combining electric mobility and artificial intelligence. With the first-generation MBUX now in 1.8 million cars, the next iteration coming in the redesigned S-Class, and now the MBUX Hyperscreen in the EQS, customers will have a range of AI cockpit options.

And with the entire MBUX family powered by NVIDIA, these systems will constantly deliver new, surprising and intelligent features with high performance and a seamless experience.

In a Quarantine Slump? How One High School Student Used AI to Stay on Track

Canadian high schooler Ana DuCristea has a clever solution for the quarantine slump.

Using AI and natural language processing, she programmed an app capable of setting customizable reminders so you won’t miss any important activities, like baking banana bread or whipping up Dalgona coffee.

The project’s emblematic of how a new generation — with access to powerful technology and training — approaches the once exotic domain of AI.

A decade ago, deep learning was the stuff of elite research labs with big budgets.

Now it’s the kind of thing a smart, motivated high school student can knock out to solve a tangible problem.

DuCristea’s been interested in coding from childhood, and spends her spare time teaching herself new skills and taking online AI courses. After winning a Jetson Nano Developer Kit this summer at AI4ALL, an AI camp, she set to work remedying one of her pet peeves — the limited functionality of reminder applications.

She’d long envisioned a more useful app that could snooze for more specific lengths of time and set reminders for specific tasks, dates and times. Using the Nano and her background in Python, DuCristea spent her after-school hours creating an app that does just that.

With the app, users can message a bot on Discord requesting a reminder for a specific task, date and time. DuCristea has shared the app’s code on GitHub and plans to keep training it to improve its accuracy and capabilities.

Key Points From This Episode:

  • Her first hands-on experience with the Jetson Nano has only strengthened her intent to pursue software or computer engineering in college, where she’ll continue to explore which area of STEM she’d like to focus on.
  • DuCristea’s interest in programming and electronics started at age nine, when her father gifted her a book on Python and she found it so interesting that she worked through it in a week. Since then, she’s taken courses on coding and shares her most recent projects on GitHub.
  • Programming the app took some creativity, as DuCristea didn’t have a large dataset to train on. After trying neural networks and vectorization, she eventually found that template searches worked best for her limited list of examples (see the sketch below).
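
DuCristea’s actual code is on her GitHub. Purely as an illustration of the approach described in this episode, a template-matched Discord reminder bot might be sketched as below; the phrasings, patterns and token are hypothetical placeholders, and the scheduling is deliberately naive:

```python
# Hypothetical sketch, not DuCristea's project (her code is on GitHub):
# match a reminder request against a few known phrasings, a "template
# search", then schedule the reminder with a simple sleep.
import asyncio
import re
from datetime import datetime

import discord

# A small list of known phrasings stands in for a trained language model.
TEMPLATES = [
    re.compile(r"remind me to (?P<task>.+) in (?P<mins>\d+) minutes?"),
    re.compile(r"remind me to (?P<task>.+) at (?P<time>\d{1,2}:\d{2})"),
]

def parse_reminder(text):
    """Return (task, delay_in_seconds), or None if no template matches."""
    for template in TEMPLATES:
        match = template.search(text.lower())
        if not match:
            continue
        fields = match.groupdict()
        if "mins" in fields:
            return fields["task"], int(fields["mins"]) * 60
        target = datetime.strptime(fields["time"], "%H:%M").time()
        delay = (datetime.combine(datetime.now().date(), target)
                 - datetime.now()).total_seconds()
        if delay > 0:  # only schedule times later today
            return fields["task"], delay
    return None

intents = discord.Intents.default()
intents.message_content = True  # needed to read message text
client = discord.Client(intents=intents)

@client.event
async def on_message(message):
    if message.author.bot:  # ignore the bot's own messages
        return
    parsed = parse_reminder(message.content)
    if parsed is None:
        return
    task, delay = parsed
    await message.channel.send(f"Got it. I'll remind you to {task}.")
    await asyncio.sleep(delay)  # naive scheduling: one sleep per reminder
    await message.channel.send(f"{message.author.mention} Reminder: {task}")

client.run("YOUR_DISCORD_BOT_TOKEN")  # placeholder token
```

A production version would persist reminders across restarts; the point here is how far a handful of templates can go when the space of phrasings is small.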

Tweetables:

“There’s so many programs, even exclusively for girls now in STEM — I would say go for them.” — Ana DuCristea [14:55]

“The Jetson Nano is a lot more accessible than most things in AI right now.” — Ana DuCristea [18:51]

You Might Also Like:

AI4Good: Canadian Lab Empowers Women in Computer Science

Doina Precup, associate professor at McGill University and research team lead at AI startup DeepMind, speaks about her personal experiences, along with the AI4Good Lab she co-founded to give women more access to machine learning training.

Jetson Interns Assemble! Interns Discuss Amazing AI Robots They’re Building

NVIDIA’s Jetson interns, recruited at top robotics competitions, discuss what they’re building with NVIDIA Jetson, including a delivery robot, a trash-disposing robot and a remote control car to aid in rescue missions.

A Man, a GAN and a 1080 Ti: How Jason Antic Created ‘De-Oldify’

Jason Antic explains how he created his popular app, De-Oldify, with just an NVIDIA GeForce 1080 Ti and a generative adversarial network. The tool colors old black-and-white shots for a more modern look.

Tune in to the AI Podcast

Get the AI Podcast through iTunes, Google Podcasts, Google Play, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn. If your favorite isn’t listed here, drop us a note.

Make the AI Podcast Better

Have a few minutes to spare? Fill out this listener survey. Your answers will help us make a better podcast.

AI on the Aisles: Startup’s Jetson-powered Inventory Management Boosts Revenue

Penn State University pals Brad Bogolea and Mirza Shah were living in Silicon Valley when they pitched Jeff Gee on their robotics concepts. Fortunately for them, the star designer was working at the soon-to-shutter Willow Garage robotics lab.

So the three of them — Shah was also a software engineer at Willow — joined together and in 2014 founded Simbe Robotics.

The startup’s NVIDIA Jetson-powered bot, dubbed Tally, has since rolled into more than a dozen of the world’s largest retailers. The multitasking robot can navigate stores, scan barcodes and track as many as 30,000 items an hour.

Running on Jetson makes Tally more efficient: it can process data from several cameras and run deep computer vision algorithms onboard. This powerful edge AI capability enhances Tally’s data capture and processing, providing Simbe’s customers with inventory and shelf information more quickly and seamlessly while minimizing costs.
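
Simbe hasn’t published Tally’s pipeline, but the general pattern, decoding shelf barcodes from camera frames directly on an edge device, can be sketched in a few lines of Python. OpenCV and the pyzbar library here are stand-in assumptions, not Simbe’s stack:

```python
# Illustrative only: count distinct barcodes seen in a short camera capture.
# Tally's real pipeline is proprietary; this just shows on-device decoding.
from collections import Counter

import cv2
from pyzbar import pyzbar

def scan_shelf(camera_index=0, frames=300):
    """Decode barcodes from a camera stream and count sightings per code."""
    seen = Counter()
    cap = cv2.VideoCapture(camera_index)
    try:
        for _ in range(frames):
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            for code in pyzbar.decode(gray):  # all barcodes found in the frame
                seen[code.data.decode("utf-8")] += 1
    finally:
        cap.release()
    return seen

if __name__ == "__main__":
    for sku, hits in scan_shelf().most_common(10):
        print(f"{sku}: seen in {hits} frames")
```

The real robot runs this kind of work across several camera streams at once, alongside its deep learning models.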

Tally makes rounds to scan store inventory up to three times a day, increasing product availability and boosting sales for retailers through fewer out-of-stocks, according to the company.

“We’re providing critical information on what products are not on the shelf, which products might be misplaced or mispriced and up-to-date location and availability,” said Bogolea, Simbe’s CEO.

Forecasting Magic

Using Tally, retail stores can better understand what’s happening on their shelves, recognizing missed sales opportunities and the benefits of improved inventory management, said Bogolea.

Tally’s inventory data enables its retail partners to offer better visibility to store employees and customers about what’s on store shelves — even before they enter a store.

At Schnuck Markets, for example, where Tally is deployed in 62 stores across the Midwest, the retailer integrates Tally’s product location and availability data into the store’s loyalty app. This allows customers and Instacart shoppers to check whether products are available at a store and find their precise locations while shopping.

This data has also helped address the surge in online shopping under COVID-19, enabling faster order picking through services like Instacart so orders are fulfilled more quickly.

“Those that leverage technology and data in retail are really going to separate themselves from the rest of the pack,” said Bogolea.

There’s an added benefit for store employees, too: workers who were previously busy taking inventory can now focus on other tasks like improving customer service.

In addition to Schnucks, the startup has deployments with Carrefour, Decathlon Sporting Goods, Groupe Casino and Giant Eagle.

Cloud-to-Edge AI 

AI is the key technology enabling the Tally robots to navigate autonomously in a dynamic environment, analyze the vast amount of information collected by its sensors and report a wide range of metrics such as inventory levels, pricing errors and misplaced stock.

Simbe is using NVIDIA GPUs from the cloud to the edge, both to train and to run inference with a variety of AI models that detect the different products on shelves, read barcodes and price labels, and spot obstacles.

By analyzing the vast amounts of 2D and 3D sensor data collected by the robot, NVIDIA Jetson has enabled extreme optimization of the Tally data capture system and has also helped with localization, according to the company.

Running Jetson on Tally, Simbe processes data locally in real time from lidar as well as 2D and 3D cameras to aid in both product identification and navigation. Jetson has also reduced the robot’s reliance on processing in the cloud.

“We’re capturing at a far greater frequency and fidelity than has really ever been seen before,” said Bogolea.

“One of the benefits of leveraging NVIDIA Jetson is it gives us a lot of flexibility to start moving more to the edge, reducing our cloud costs.”

Learn more about NVIDIA Jetson, which is used by enterprise customers, developers and DIY enthusiasts for creating AI applications, as well as students and educators for learning and teaching AI.

Hey, Mr. DJ: Super Hi-Fi’s AI Applies Smarts to Sound

Brendon Cassidy, CTO and chief scientist at Super Hi-Fi, uses AI to give everyone the experience of a radio station tailored to their unique tastes.

Super Hi-Fi, an AI startup and member of the NVIDIA Inception program, develops technology that produces smooth transitions, intersperses content meaningfully and adjusts volume and crossfade. Started three years ago, Super Hi-Fi first partnered with iHeartRadio and is now also used by companies such as Peloton and Sonos.

Results show that users like this personalized approach. Cassidy notes that the company tested MagicStitch, a tool that eliminates the gap between songs, and found that customers listening with it turned on spent 10 percent more time streaming music.

Cassidy’s a veteran of the music industry — from Virgin Digital to the Wilshire Media Group — and recognizes this music experience is finally possible due to GPU acceleration, accessible cloud resources and AI powerful enough to process and learn from music and audio content from around the world.

Key Points From This Episode:

  • Cassidy, a radio DJ during his undergraduate and graduate years, notes how difficult it is to “hit the post” — to stop speaking just as the singing of the next song begins. Super Hi-Fi’s AI technology uses deep learning to understand and achieve that timing.
  • Super Hi-Fi’s technology is integrated into the iHeartRadio app, as well as Sonos Radio stations. Cassidy especially recommends the “Encyclopedia of Brittany” station, which is curated by Alabama Shakes’ musician Brittany Howard and integrates commentary and music.

Tweetables:

“This AI is trying to create a form of art in the listening experience.” — Brendon Cassidy [14:28]

“I hope we’re improving the enjoyment that listeners are getting from all of the musical experiences that we have.” — Brendon Cassidy [28:55]

You Might Also Like:

How Yahoo Uses AI to Create Instant eSports Highlight Reels

Like any sports fan, eSports followers want highlight reels of their kills and thrills as soon as possible, whether it’s StarCraft II, League of Legends or Heroes of the Storm. Yale Song, senior research scientist at Yahoo! Research, explains how AI can make instant eSports highlight reels.

Pierre Barreau Explains How Aiva Uses Deep Learning to Make Music

AI systems have been trained to take photos and transform them into the style of great artists, but now they’re learning about music. Pierre Barreau, head of Luxembourg-based startup Aiva Technologies, talks about the soaring music composed by an AI system — and used as the theme song of the AI Podcast.

How Tattoodo Uses AI to Help You Find Your Next Tattoo

What do you do when you’re at a tattoo parlor but none of the images on the wall strike your fancy? Use Tattoodo, an app that uses deep learning to help create a personalized tattoo.

Tune in to the AI Podcast

Get the AI Podcast through iTunes, Google Podcasts, Google Play, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn. If your favorite isn’t listed here, drop us a note.

Make the AI Podcast Better

Have a few minutes to spare? Fill out this listener survey. Your answers will help us make a better podcast.

Sparkles in the Rough: NVIDIA’s Video Gems from a Hardscrabble 2020

Much of 2020 may look best in the rearview mirror, but the year also held many moments of outstanding work, gems worth hitting the rewind button to see again.

So, here’s a countdown — roughly in order of ascending popularity — of 10 favorite NVIDIA videos that hit YouTube in 2020. With two exceptions for videos that deserve a wide audience, all got at least 200,000 views and most, but not all, can be found on the NVIDIA YouTube channel.

#10 Coronavirus Gets a Close-Up

The pandemic was clearly the story of the year.

We celebrated the work of many healthcare providers and researchers pushing science forward to combat it, including the team that won a prestigious Gordon Bell award for using high performance computing and AI to see how the coronavirus works, something they explained in detail in their own video here.

In another one of the many responses to COVID-19, the Folding@Home project received donations of time on more than 200,000 NVIDIA GPUs to study the coronavirus. Using NVIDIA Omniverse, we created a visualization (described below) of data they amassed on their virtual exascale computer.

#9 Cruising into a Ray-Traced Future

Despite the challenging times, many companies continued to deliver top-notch work. For example, Autodesk VRED 2021 showed the shape of things to come in automotive design.

The demo below displays the power of ray tracing and AI to deliver realistic 3D visualizations in real time using RTX technology, snagging nearly a quarter million views. (Note: There’s no audio on this one, just amazing images.)

#8 A Test Drive in the Latest Mercedes

Just for fun — yes, even 2020 included fun — we look back at NVIDIA CEO Jensen Huang taking a spin in the latest Mercedes-Benz S-Class as part of the world premiere of the flagship sedan. He shared the honors with Grammy award-winning Alicia Keys and Formula One champ Lewis Hamilton.

The S-Class uses AI to deliver intelligent features like a voice assistant personalized for each driver. An engineer and car enthusiast at heart, Huang gave kudos to the hundreds of engineers who delivered a vehicle that, with over-the-air software updates, will keep getting better and better.

#7 Playing Marbles After Dark

The NVIDIA Omniverse team pointed the way to a future of photorealistic games and simulations rendered in real time. They showed how a distributed team of engineers and artists can integrate multiple tools to play more than a million polygons smoothly with ray-traced lighting at 1440p on a single GeForce RTX 3090.

The mesmerizing video captured the eyeballs of nearly half a million viewers.

#6 An AI Platform for the Rest of Us

Great things sometimes come in small packages. In October, we debuted the DGX Station A100, a supercomputer that plugs into a standard wall socket to let data scientists do world-class work in AI. More than 400,000 folks tuned in.

#5 Seeing Virtual Meetings Through a New AI

With online gatherings the new norm, NVIDIA Maxine attracted a lot of eyeballs. More than 800,000 viewers tuned into this demo of how we’re using generative adversarial networks to lower the bandwidth and turn up the quality of video conferencing.

#4 What’s Jensen Been Cooking?

Our most energy-efficient video of 2020 was a bit of a tease. It lasted less than 30 seconds, but Jensen Huang’s preview of the first NVIDIA Ampere architecture GPU drew nearly a million viewers.

#3 Voila, Jensen Whips Up the First Kitchen Keynote

In the days of the Great Depression, vacuum tubes flickered with fireside chats. The 2020 pandemic spawned a slew of digital events with GTC among the first of them.

In May, Jensen recorded in his California home the first kitchen keynote. In a playlist of nine virtual courses, he served a smorgasbord where the NVIDIA A100 GPU was an entrée surrounded by software side dishes that included frameworks for conversational AI (Jarvis) and recommendation systems (Merlin). The first chapter alone attracted more than 300,000 views.

And we did it all again in October when we featured the first DPU, its DOCA software and a framework to accelerate drug discovery.

#2 Delivering Enterprise AI in a Box

The DGX A100 emerged as one of the favorite dishes from our May kitchen keynote. The 5-petaflops system packs AI training, inference and analytics for any data center.

Some 1.3 million viewers clicked to get a virtual tour of the eight A100 GPUs and 200 Gbit/second InfiniBand links inside it.

#1 Enough of All This Hard Work, Let’s Have Fun!

By September it was high time to break away from a porcupine of a year. With the GeForce RTX 30 Series GPUs, we rolled out engines to create lush new worlds for those whose go-to escape is gaming.

The launch video, viewed more than 1.5 million times, begins with a brief tour of the history of computer games. Good days remembered, good days to come.

For Dessert: Two Bytes of Chocolate

We’ll end 2020, happily, with two special mentions.

Our most watched video of the year was a blistering five-minute clip of DOOM Eternal gameplay running all out on a GeForce RTX 3080 in 4K.

And perhaps our sweetest feel-good moment of 2020 was delivered by an NVIDIA engineer, Bryce Denney, who hacked a way to let choirs sing together safely in the pandemic. Play it again, Bryce!

Inception to the Rule: AI Startups Thrive Amid Tough 2020

2020 served up a global pandemic that roiled the economy. Yet the startup ecosystem has managed to thrive and even flourish amid the tumult. That may be no coincidence.

Crisis breeds opportunity. And nowhere has that been more prevalent than with startups using AI, machine learning and data science to address a worldwide medical emergency and the upending of typical workplace practices.

This is also reflected in NVIDIA Inception, our program to nurture startups transforming industries with AI and data science. Here are a few highlights from a tremendous year for the program and the members it’s designed to propel toward growth and success.

Increased membership:

  • Inception hit a record 7,000 members — that’s up 25 percent on the year.
  • IT services, healthcare, and media and entertainment were the top three segments, reflecting the global pandemic’s impact on remote work, medicine and home-based entertainment.
  • Early-stage and seed-stage startups continue to join NVIDIA Inception at the fastest rate, a consistent trend over recent years.

Startups ramp up: 

  • 100+ Inception startups reached the program’s Premier level, which unlocks increased marketing support, engineering access and exposure to senior customer contacts.
  • Developers from Inception startups enrolled in more than 2,000 sessions with the NVIDIA Deep Learning Institute, which offers hands-on training and workshops.
  • GPU Ventures, the venture capital arm of NVIDIA Inception, made investments in three startup companies — Plotly, Artisight and Rescale.

Deepening partnerships: 

  • NVIDIA Inception added Oracle’s Oracle for Startups program to its list of accelerator partners, which already includes AWS Activate and Microsoft for Startups, as well as a variety of regional programs. These tie-ups open the door for startups to access free cloud credits, new marketing channels, expanded customer networks, and other benefits across programs.
  • The NVIDIA Inception Alliance for Healthcare launched earlier this month, starting with healthcare leaders GE Healthcare and Nuance, to provide a clear go-to-market path for medical imaging startups.

At its core, NVIDIA Inception is about forging connections for prime AI startups, finding new paths for them to pursue success, and providing them with the tools or resources to take their business to the next level.

Read more about NVIDIA Inception partners on our blog and learn more about the program at https://www.nvidia.com/en-us/deep-learning-ai/startups/.

Shifting Paradigms, Not Gears: How the Auto Industry Will Solve the Robotaxi Problem

A giant toaster with windows. That’s the image for many when they hear the term “robotaxi.” But there’s much more to these futuristic, driverless vehicles than meets the eye. They could be, in fact, the next generation of transportation.

Automakers, suppliers and startups have been dedicated to developing fully autonomous vehicles for the past decade, though none has yet deployed a self-driving fleet at scale.

The process is taking longer than anticipated because creating and deploying robotaxis aren’t the same as pushing out next year’s new car model. Instead, they’re complex supercomputers on wheels with no human supervision, requiring a unique end-to-end process to develop, roll out and continually enhance.

The difference between these two types of vehicles is staggering. The amount of sensor data a robotaxi needs to process is 100 times greater than today’s most advanced vehicles. The complexity in software also increases exponentially, with an array of redundant and diverse deep neural networks (DNNs) running simultaneously as part of an integrated software stack.

These autonomous vehicles also must be constantly upgradeable to take advantage of the latest advances in AI algorithms. Traditional cars are at their highest level of capability at the point of sale. With yearslong product development processes and a closed architecture, these vehicles can’t take advantage of features that come about after they leave the factory.

Vehicles That Get Better and Better Over Time

With an open, software-defined architecture, robotaxis will be at their most basic capability when they first hit the road. Powered by DNNs that are continuously improved and updated in the vehicle, self-driving cars will constantly be at the cutting edge.

These new capabilities all require high-performance, centralized compute. Achieving this paradigm shift in personal transportation requires reworking the entire development pipeline from end to end, with a unified architecture from training, to validation, to real-time processing.

NVIDIA is the only company that enables this end-to-end development, which is why virtually every robotaxi maker and supplier — from Zoox and Voyage in the U.S., to DiDi Chuxing in China, to Yandex in Russia — is using its GPU-powered offerings.

Installing New Infrastructure

Current advanced driver assistance systems are built on features that have become more capable over time, but don’t necessarily rely on AI. Autonomous vehicles, however, are born out of the data center. To operate in thousands of conditions around the world requires intensive DNN training using mountains of data. And that data grows exponentially as the number of AVs on the road increases.

To put that in perspective, a fleet of just 50 vehicles driving six hours a day generates about 1.6 petabytes of sensor data daily. If all that data were stored on standard 1GB flash drives, they’d cover more than 100 football fields. This data must then be curated and labeled to train the DNNs that will run in the car, performing a variety of dedicated functions, such as object detection and localization.
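
As a back-of-the-envelope check, those totals imply a per-vehicle sensor rate of roughly 1.5 GB/s. A quick sketch of the arithmetic, derived only from the figures quoted above rather than any published spec:

```python
# Back-of-the-envelope check on the fleet data figure above.
FLEET_SIZE = 50       # vehicles
HOURS_PER_DAY = 6     # driving hours per vehicle per day
DAILY_TOTAL_PB = 1.6  # petabytes of sensor data per day, from the text

vehicle_seconds = FLEET_SIZE * HOURS_PER_DAY * 3600  # 1,080,000 s of driving
rate_gb_per_s = DAILY_TOTAL_PB * 1e15 / vehicle_seconds / 1e9

print(f"Implied sensor rate: {rate_gb_per_s:.2f} GB/s per vehicle")
# -> about 1.48 GB/s per vehicle of raw camera, radar and lidar data
```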

NVIDIA DRIVE infrastructure provides the unified architecture needed to train self-driving DNNs on massive amounts of data.

This data center infrastructure is also used to test and validate DNNs before vehicles operate on public roads. The NVIDIA DRIVE Sim software and NVIDIA DRIVE Constellation autonomous vehicle simulator deliver a scalable, comprehensive and diverse testing environment. DRIVE Sim is an open platform with plug-ins for third-party models from ecosystem partners, allowing users to customize it for their unique use cases.

NVIDIA DRIVE Constellation and NVIDIA DRIVE Sim deliver a virtual proving ground for autonomous vehicles.

This entire development infrastructure is critical to deploying robotaxis at scale and is only possible through the unified, open and high-performance compute delivered by GPU technology.

Re-Thinking the Wheel

The same processing capabilities required to train, test and validate robotaxis are just as necessary in the vehicle itself.

A centralized AI compute architecture makes it possible to run the redundant and diverse DNNs needed to replace the human driver all at once. This architecture must also be open to take advantage of new features and DNNs.

The DRIVE family is built on a single scalable architecture, ranging from one NVIDIA Orin variant that sips just five watts of energy and delivers 10 TOPS, all the way up to the new DRIVE AGX Pegasus, which features the next-generation Orin SoC and NVIDIA Ampere architecture for thousands of TOPS of performance.

With a single scalable architecture, robotaxi makers have the flexibility to develop new types of vehicles on NVIDIA DRIVE AGX.

Such a high level of performance is necessary to replace and perform better than a human driver. Additionally, the open and modular nature of the platform enables robotaxi companies to create custom configurations to accommodate the new designs opened up by removing the human driver (along with steering wheel and pedals).

With the ability to use as many processors as needed to analyze data from the dozens of onboard sensors, developers can ensure safety through diversity and redundancy of systems and algorithms.

This level of performance has taken years of investment and expertise to achieve. And, by using a single scalable architecture, companies can easily transition to the latest platforms without sacrificing valuable software development time.

Continuous Improvement

By combining data center and in-vehicle solutions, robotaxi companies can create a continuous, end-to-end development cycle for constant improvement.

As DNNs undergo improvement and learn new capabilities in the data center, the validated algorithms can be delivered to the car’s compute platform over the air for a vehicle that is forever featuring the latest and greatest technology.

This continuous development cycle extends joy to riders and opens new, transformative business models to the companies building this technology.
