From Content Creation to Collaboration, NVIDIA Omniverse Transforms Entertainment Industry

There are major shifts happening in the media and entertainment industry.

With the rise of streaming services, there’s a growing demand for high-quality programming and an increasing need for fresh content to satisfy hundreds of millions of subscribers.

At the same time, teams are often collaborating on complex assets using multiple applications while working from different geographic locations. New pipelines are emerging and post-production workflows are being integrated earlier into processes, boosting the need for real-time collaboration.

By extending our Omniverse 3D simulation and collaboration platform to run on the NVIDIA EGX AI platform, NVIDIA is making it even easier for artists, designers, technologists and other creative professionals to accelerate workflows for productions — from asset creation to live on-set collaboration.

The EGX platform leverages the power of NVIDIA RTX GPUs, NVIDIA Virtual Data Center Workstation software, and NVIDIA Omniverse to fundamentally transform the collaborative process during digital content creation and virtual production.

Professionals and studios around the world can use this combination to lower costs, boost creativity across applications and teams, and accelerate production workflows.

Driving Real-Time Collaboration, Increased Interactivity

The NVIDIA EGX platform delivers the power of the NVIDIA Ampere architecture on a range of validated servers and devices. A vast ecosystem of partners offers EGX through their products and services. Professional creatives can use these systems to tap the most significant advancements in computer graphics and accelerate their film and television content creation pipelines.

To support third-party digital content creation applications, Omniverse Connect libraries are distributed as plugins that enable client applications to connect to Omniverse Nucleus and to publish and subscribe to individual assets and full worlds. Supported applications for common film and TV content creation pipelines include Epic Games Unreal Engine, Autodesk Maya, Autodesk 3ds Max, SideFX Houdini, Adobe Photoshop, Substance Painter by Adobe, and Unity.
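
Because Omniverse assets are exchanged as Universal Scene Description (USD) files, publishing can be sketched at the USD level. The snippet below, in Python with Pixar’s USD API, creates a stage directly against a Nucleus URL; the server address and paths are hypothetical, and an Omniverse-aware USD resolver is assumed:

```python
from pxr import Usd, UsdGeom

# Create a stage directly on the Nucleus server (hypothetical URL);
# an Omniverse-aware USD resolver handles the omniverse:// scheme.
stage = Usd.Stage.CreateNew("omniverse://nucleus.example.com/Projects/shot_010/hero_prop.usd")

# Author a simple transform with placeholder geometry for the asset.
root = UsdGeom.Xform.Define(stage, "/hero_prop")
UsdGeom.Mesh.Define(stage, "/hero_prop/geo")
stage.SetDefaultPrim(root.GetPrim())

# Saving the root layer publishes the asset; applications subscribed
# through their Connect plugins see the update on the same server.
stage.GetRootLayer().Save()
```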

NVIDIA Virtual Workstation software provides the most powerful virtual workstations from the data center or cloud to any device, anywhere. IT departments can virtualize any application from the data center with a native workstation user experience, while eliminating constrained workflows and flexibly scaling GPU resources.

Studios can optimize their infrastructure by efficiently centralizing applications and data. This dramatically reduces IT operating expenses and allows companies to focus IT resources on managing strategic projects instead of individual workstations — all while enabling a more flexible, remote real-time environment with stronger data security.

With NVIDIA Omniverse, creative teams have the ability to deliver real-time results by creating, iterating and collaborating on the same assets while using a variety of applications. Omniverse powered by the EGX platform and NVIDIA Virtual Workstation allows artists to focus on creating high-quality content without waiting for long render times.

“Real-time ray tracing massive datasets in a remote workstation environment is finally possible with the new RTX A6000, HP ZCentral and NVIDIA’s Omniverse,” said Chris Eckardt, creative director and CG supervisor at Framestore.

Elevating Content Creation Across the World

During content creation, artists need to design and iterate quickly on assets, while collaborating with remote teams and other studios working on the same productions. With Omniverse running on the NVIDIA EGX platform, users can access the power of a high-end virtual workstation to rapidly create, iterate and present compelling renders using their preferred application.

Creative professionals can quickly combine terrain from one shot with characters from another without removing any data, which drives more efficient collaboration. Teams can communicate their designs more effectively by sharing high-fidelity ray-traced models with one click, so colleagues or clients can view the assets on a phone, tablet or in a browser. Along with the ability to mark up models in Omniverse, this accelerates the decision-making process and reduces design review cycles to help keep projects on track.
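
This kind of non-destructive mixing is what USD composition is built for: a new stage can reference terrain from one shot and a character from another while both source files stay untouched. Here’s a minimal sketch with Pixar’s USD Python API, using made-up file names and prim paths:

```python
from pxr import Usd, UsdGeom, Gf

# Assemble a combined shot without modifying either source file.
stage = Usd.Stage.CreateNew("shot_combined.usda")
world = UsdGeom.Xform.Define(stage, "/World")
stage.SetDefaultPrim(world.GetPrim())

# Reference terrain from one shot and a character from another.
terrain = stage.DefinePrim("/World/Terrain")
terrain.GetReferences().AddReference("shot_010_terrain.usd")

hero = stage.DefinePrim("/World/Hero")
hero.GetReferences().AddReference("shot_042_characters.usd", "/Characters/Hero")

# Local tweaks (here, repositioning the character) live only in this
# stage as overrides; the referenced files keep all of their data.
UsdGeom.XformCommonAPI(hero).SetTranslate(Gf.Vec3d(12.0, 0.0, -4.0))

stage.GetRootLayer().Save()
```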

Taking Virtual Productions to the Next Level

With more film and TV projects using new virtual production techniques, studios are under immense pressure to iterate as quickly as possible to keep the cameras rolling. With in-camera VFX, the concept of fixing it in post-production has moved to fixing it all on set.

With the NVIDIA EGX platform and NVIDIA Virtual Workstations running Omniverse, users gain access to secure, up-to-date datasets from any device, ensuring they maintain productivity when working live on set.

Artists get a smooth experience with Unreal Engine, Maya, Substance Painter and other apps to quickly create and iterate on scene files, while the interoperability of these tools in Omniverse improves collaboration. Teams can instantly view photorealistic renderings of their models with the RTX Renderer so they can rapidly assess options for the most compelling images.

Learn more at https://developer.nvidia.com/nvidia-omniverse-platform.

It’s not too late to get access to hundreds of live and on-demand talks at GTC. Register now through Oct. 9 using promo code CMB4KN to get 20 percent off.


AI, 5G Will Energize U.S. Economy, Says FCC Chair at GTC

Ajit Pai recalls a cold February day, standing in a field on the Wind River reservation in central Wyoming with Arapaho Indian leaders, hearing how they used a Connect America Fund grant to link schools and homes to gigabit fiber Internet.

It was one of many technology transformations the chairman of the U.S. Federal Communications Commission witnessed in visits to 49 states.

“Those trips redouble my motivation to do everything we can to close the digital divide because I want to make sure every American can participate in the digital economy,” said Pai in an online talk at NVIDIA’s GTC event.

Technologies like 5G and AI promise to keep that economy vibrant across factories, hospitals, warehouses and farm fields.

“I visited a corn farmer in Idaho who wants his combine to upload data to the cloud as it goes through the field to determine what water and pesticide to apply … AI will be transformative,” Pai said.

“AI is definitely the next industrial revolution, and America can help lead it,” said Soma Velayutham, NVIDIA’s general manager for AI in telecoms and 5G and host of the online talk with Pai.

AI a Fundamental Part of 5G

Shining a light on machine learning and 5G, the FCC has hosted forums on AI and open radio-access networks that included participants from AT&T, Dell, IBM, Hewlett Packard Enterprise, Nokia, NVIDIA, Oracle, Qualcomm and Verizon.

“It was striking to see how many people think AI will be a fundamental part of 5G, making it a much smarter network with optimizations using powerful AI algorithms to look at spectrum allocations, consumer use cases and how networks can address them,” Pai said.

For example, devices can use machine learning to avoid interference and optimize use of unlicensed spectrum the FCC is opening up for Wi-Fi at 6 GHz. “Someone could hire a million people to work that out, but it’s much more powerful to use AI,” he said.

“AI is really good at resource optimization,” said Velayutham. “AI can efficiently manage 5G network resources, optimizing the way we use and monetize spectrum,” he added.

AI Saves Spectrum, 5G Delivers Cool Services

Telecom researchers in Asia, Europe and the U.S. are using NVIDIA technologies to build software-defined radio access networks that can pack more services into less spectrum, enabling new graphics and AI services.

In the U.K., telecom provider BT is working with an NVIDIA partner on edge computing applications such as streaming coverage of sporting events over 5G with CloudXR, a mix of virtual and augmented reality.

In closing, Pai addressed developers in the GTC audience, thanking them and “all the innovators for doing this work. You have a friend at the FCC who recognizes your innovation and wants to be a partner with it,” he said.

To hear more about how AI will transform industries at the edge of the network, watch a portion of the GTC keynote below by NVIDIA CEO Jensen Huang.


AI Artist Pindar Van Arman’s Painting Robots Visit GTC 2020

Pindar Van Arman is a veritable triple threat — he can paint, he can program and he can program robots that paint.

Van Arman first started incorporating robots into his artistic method 15 years ago to save time. He coded a robot — like “a printer that can pick up a brush” — to paint the beginning stages of an art piece.

It wasn’t until Van Arman took part in the DARPA Grand Challenge, a prize competition for autonomous vehicles, that he was inspired to bring AI into his art.

Now, his robots are capable of creating artwork all on their own through the use of deep neural networks and feedback loops. Van Arman is never far away, though, sometimes pausing a robot to adjust its code and provide it some artistic guidance.

Van Arman’s work is on display in the AI Art Gallery at GTC 2020, and he’ll be giving conference attendees a virtual tour of his studio on Oct. 8 at 11 a.m. Pacific time.

Key Points From This Episode:

  • One of Van Arman’s most recent projects is artonomous, an artificially intelligent painting robot that is learning the subtleties of fine art. Anyone can submit their photo to be included in artonomous’ training set.
  • Van Arman predicts that AI will become even more creative, independent of its human creators, and that AI artists will learn to program a variety of coexisting networks that give AI a greater understanding of what defines art.

Tweetables:

“I’m trying to understand myself better by exploring my own creativity — by trying to capture it in code, breaking it down and distilling it” — Pindar Van Arman [4:22]

“I’d say 50% of the paintings are completely autonomous, and 50% of the paintings are directed by me. 100% of them, though, are my art” — Pindar Van Arman [17:20]

You Might Also Like

How Tattoodo Uses AI to Help You Find Your Next Tattoo

Picture this: you find yourself in a tattoo parlor. But none of the dragons, flaming skulls, or gothic font lifestyle mottos you see on the wall seem like something you want on your body. So what do you do? You turn to AI, of course. We spoke to two members of the development team at Tattoodo.com, who created an app that uses deep learning to help you create the tattoo of your dreams.

UC Berkeley’s Pieter Abbeel on How Deep Learning Will Help Robots Learn

Robots can do amazing things. Compare even the most advanced robots to a three-year-old, however, and they can come up short. UC Berkeley Professor Pieter Abbeel has pioneered the idea that deep learning could be the key to bridging that gap: creating robots that can learn how to move through the world more fluidly and naturally.

How AI’s Storming the Fashion Industry

Costa Colbert — who holds degrees ranging from neural science to electrical engineering — is working at MAD Street Den to bring machine learning to fashion. He’ll explain how his team is using generative adversarial networks to create images of models wearing clothes.

Tune in to the AI Podcast

Get the AI Podcast through iTunes, Google Podcasts, Google Play, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn. If your favorite isn’t listed here, drop us a note.

Make the AI Podcast Better

Have a few minutes to spare? Fill out this listener survey. Your answers will help us make a better podcast.


Bada Bing Bada Boom: Microsoft Turns to Turing-NLG, NVIDIA GPUs to Instantly Suggest Full-Phrase Queries

Hate hunting and pecking away at your keyboard every time you have a quick question? You’ll love this.

Microsoft’s Bing search engine has turned to Turing-NLG and NVIDIA GPUs to suggest full sentences for you as you type.

Turing-NLG is a cutting-edge, large-scale unsupervised language model that has achieved strong performance on language modeling benchmarks.

It’s just the latest example of an AI technique called unsupervised learning, which makes sense of vast quantities of data by extracting features and patterns without the need for humans to provide any pre-labeled data.

Microsoft calls this Next Phrase Prediction, and it can feel like magic, making full-phrase suggestions in real time for long search queries.

Turing-NLG is among several innovations — from model compression to state caching and hardware acceleration — that Bing has harnessed with Next Phrase Prediction.

Over the summer, Microsoft worked with engineers at NVIDIA to optimize Turing-NLG for its needs, accelerating the model on NVIDIA GPUs to power the feature for users worldwide.

A key part of this optimization was running this massive AI model extremely fast to power a real-time search experience. With a combination of hardware and model optimizations, Microsoft and NVIDIA achieved an average latency below 10 milliseconds.

By contrast, it takes more than 100 milliseconds to blink your eye.

Learn more about the next wave of AI innovations at Bing.

Before the introduction of Next Phrase Prediction, the approach for handling query suggestions for longer queries was limited to completing the current word being typed by the user.

Now type in “The best way to replace,” and you’ll immediately see three suggestions for completing the phrase: wood, plastic and metal. Type in “how can I replace a battery for,” and you’ll see “iphone, samsung, ipad and kindle” all suggested.
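
Turing-NLG itself isn’t publicly available, but the mechanics of next-phrase suggestion can be sketched with any causal language model. The toy example below uses GPT-2 from the Hugging Face transformers library as a stand-in; the model choice, beam settings and phrase length are illustrative assumptions, not Bing’s production setup:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

def suggest_phrases(prefix, num_suggestions=3, max_new_tokens=4):
    """Return ranked multi-word completions for a partial query."""
    inputs = tokenizer(prefix, return_tensors="pt")
    outputs = model.generate(
        **inputs,
        max_new_tokens=max_new_tokens,        # a short phrase, not just one word
        num_beams=num_suggestions,            # beam search for ranked candidates
        num_return_sequences=num_suggestions,
        early_stopping=True,
        pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no dedicated pad token
    )
    return [tokenizer.decode(o, skip_special_tokens=True) for o in outputs]

for suggestion in suggest_phrases("the best way to replace"):
    print(suggestion)
```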

With Next Phrase Prediction, Bing can now present users with full-phrase suggestions.

The more characters you type, the closer Bing gets to what you probably want to ask.

And because these suggestions are generated instantly, they’re not limited to previously seen data or just the current word being typed.

So, for some queries, Bing won’t just save you a few keystrokes — but multiple words.

As a result of this work, the coverage of autosuggestion completions increases considerably, Microsoft reports, improving the overall user experience “significantly.”


Coronavirus Gets a Close-Up: Folding@home Live in NVIDIA Omniverse

For researchers like Max Zimmerman, it was a welcome pile-on to tackle a global pandemic.

A million citizen scientists donated time on their home systems so the Folding@home consortium could calculate the intricate movements of proteins inside the coronavirus. Then a team of NVIDIA simulation experts combined the best tools from multiple industries to let the researchers see their data in a whole new way.

“I’ve been repeatedly amazed with the unprecedented scale of scientific collaborations,” said Zimmerman, a postdoctoral fellow at the Washington University School of Medicine in St. Louis, which hosts one of eight labs that keep the Folding@home research network humming.

As a result, Zimmerman and colleagues published a paper on bioRxiv, showing images of 17 weak spots in coronavirus proteins that antiviral drug makers can attack. And the high-res simulation of the work continues to educate researchers and the public alike about the bad actor behind the pandemic.

“We are in a position to make serious headway towards understanding the molecular foundations of health and disease,” he added.

An Antiviral Effort Goes Viral

In mid-March, the Folding@home team put many long-running projects on hold to focus on studying key proteins behind COVID-19. They issued a call for help, and by the end of the month the network swelled to become the world’s first exascale supercomputer, fueled in part by more than 280,000 NVIDIA GPUs.

Researchers harnessed that power to search for vulnerable moments in the rapid and intricate dance of the folding proteins, split-second openings drug makers could exploit. Within three months, computers found many promising motions that traditional experiments could not see.

“We’ve simulated nearly the entire proteome of the virus and discovered more than 50 new and novel targets to aid in the design of antivirals. We have also been simulating drug candidates in known targets, screening over 50,000 compounds to identify 300 drug candidates,” Zimmerman said.

The coronavirus uses cunning techniques to avoid human immune responses, like the spike protein keeping its head down in a closed position. With the power of an exaflop at their disposal, researchers simulated the proteins folding for a full tenth of a second, orders of magnitude longer than prior work.

Though the time sampled was relatively short, the dataset to enable it was vast.

The SARS-CoV-2 spike protein alone consists of 442,881 atoms in constant motion. In just 1.2 microseconds, it generates about 300 billion timestamps, freeze frames that researchers must track.
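
A back-of-the-envelope calculation suggests how those figures fit together. Assuming a typical molecular-dynamics timestep of 2 femtoseconds and coordinates saved roughly every thousand steps (common settings, though not stated in the post), the count of atom-position records lands near the reported total:

```python
# Rough bookkeeping check; the timestep and save interval are assumed
# values typical of molecular dynamics, not figures from the post.
atoms = 442_881          # atoms in the spike protein simulation
sim_time_s = 1.2e-6      # 1.2 microseconds of simulated time
timestep_s = 2e-15       # assumed 2 fs integration step
save_every = 1_000       # assumed steps between saved frames

steps = sim_time_s / timestep_s    # 600 million integration steps
frames = steps / save_every        # ~600,000 saved frames
records = frames * atoms           # atom positions to keep track of
print(f"{records:.2e}")            # ~2.66e11, i.e. roughly 300 billion
```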

Combined with the two dozen other coronavirus proteins they studied, Folding@home amassed the largest collection of molecular simulations in history.

Omniverse Simulates a Coronavirus Close Up

The dataset “ended up on my desk when someone asked what we could do with it using more than the typical scientific tools to really make it shine,” said Peter Messmer, who leads a scientific visualization team at NVIDIA.

Using Visual Molecular Dynamics, a standard tool for scientists, he pulled the data into NVIDIA Omniverse, a platform built for collaborative 3D graphics and simulation that will soon be in open beta. Then the magic happened.

The team connected Autodesk’s Maya animation software to Omniverse to visualize a camera path, creating a fly-through of the proteins’ geometric intricacies. The platform’s core technologies such as NVIDIA Material Definition Language (MDL) let the team give tangible surface properties to molecules, creating translucent or glowing regions to help viewers see critical features more clearly.

With Omniverse, “researchers are not confined to scientific visualization tools, they can use the same tools the best artists and movie makers use to deliver a cinematic rendering — we’re bringing these two worlds together,” Messmer said.

Simulation Experts Share Their Story Live

The result was a visually stunning presentation where each spike on a coronavirus protein is represented with more than 1.8 million triangles, rendered by a bank of NVIDIA RTX GPUs.

Zimmerman and Messmer will co-host a live Q&A technical session Oct. 8 at 11 a.m. PDT to discuss how they developed the simulation that packs nearly 150 million triangles to represent a millisecond in a protein’s life.

The work validates the mission behind Omniverse to create a universal virtual environment that spans industries and disciplines. We’re especially proud to see the platform serve science in the fight against the pandemic.

The experience made Zimmerman “incredibly optimistic about the future of science. NVIDIA GPUs have been instrumental in generating our datasets, and now those GPUs running Omniverse are helping us see our work in a new and vivid way,” he said.

Visit NVIDIA’s COVID-19 Research Hub to learn more about how AI and GPU-accelerated technology continues to fight the pandemic. And watch NVIDIA CEO Jensen Huang describe in a portion of his GTC keynote below how Omniverse is playing a role.


Simulation Without Limits: DRIVE Sim Levels Up with NVIDIA Omniverse

The line between the physical and virtual worlds is blurring as autonomous vehicle simulation sharpens with NVIDIA Omniverse, our photorealistic 3D simulation and collaboration platform.

During the GPU Technology Conference keynote, NVIDIA founder and CEO Jensen Huang showcased for the first time NVIDIA DRIVE Sim running on NVIDIA Omniverse. DRIVE Sim leverages the cutting-edge capabilities of the platform for end-to-end, physically accurate autonomous vehicle simulation.

Omniverse was architected from the ground up to support multi-GPU, large-scale, multisensor simulation for autonomous machines. It enables ray-traced, physically accurate, real-time sensor simulation with NVIDIA RTX technology.

The video shows a digital twin of a Mercedes-Benz EQS driving a 17-mile route around a recreated version of the NVIDIA campus in Santa Clara, Calif. It includes Highways 101 and 87 and Interstate 280, with traffic lights, on-ramps, off-ramps and merges as well as changes to the time of day, weather and traffic.

To achieve the real-world replica of the testing loop, the real environment was scanned at 5-cm accuracy and recreated in simulation. The hardware, software, sensors, car displays and human-machine interaction were all implemented in simulation in the exact same way as the real world, enabling bit- and timing-accurate simulation.

Physically Accurate Sensor Simulation 

Autonomous vehicle simulation requires accurate physics and light modeling. This is especially critical for simulating sensors, which requires modeling rays beyond the visible spectrum and accurate timing between the sensor scan and environment changes.

Ray tracing is perfectly suited for this, providing realistic lighting by simulating the physical properties of light. And the Omniverse RTX renderer coupled with NVIDIA RTX GPUs enables ray tracing at real-time frame rates.

The capability to simulate light in real time has significant benefits for autonomous vehicle simulation. In the video, the vehicles show complex reflections of objects in the scene — including those not directly in the frame, just as they would in the real world. This also applies to other reflective surfaces such as wet roadways, reflective signs and buildings.

The Mercedes EQS shows the complexity of reflections enabled with ray tracing, including reflections of objects that are in the scene, but not in the frame.

RTX also enables high-fidelity shadows. Typically in virtual environments, shadows are pre-computed, or pre-baked. However, pre-baking isn’t possible in a dynamic simulation environment, so RTX computes high-fidelity shadows at runtime. In the night parking example from the video, the shadows from the lights are rendered directly instead of being pre-baked, which makes them appear softer and much more accurate.

Nighttime parking scenarios show the benefit of ray tracing for complex shadows generated by dynamic light sources.

Universal Scene Description

DRIVE Sim is based on Universal Scene Description, an open framework developed by Pixar to build and collaborate on 3D content for virtual worlds.

USD provides a high level of abstraction to describe scenes in DRIVE Sim. For instance, USD makes it easy to define the state of the vehicle (position, velocity, acceleration) and trigger changes based on its proximity to other entities such as a landmark in the scene.
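
As a flavor of what that looks like in practice, the sketch below authors time-sampled state attributes on a vehicle prim using Pixar’s USD Python API. The attribute names and values are hypothetical illustrations, not DRIVE Sim’s actual schema:

```python
from pxr import Usd, UsdGeom, Sdf, Gf

stage = Usd.Stage.CreateNew("scenario.usda")
ego = UsdGeom.Xform.Define(stage, "/World/EgoVehicle").GetPrim()

# Custom, namespaced attributes for vehicle state (hypothetical names).
vel = ego.CreateAttribute("state:velocity", Sdf.ValueTypeNames.Vector3f)
accel = ego.CreateAttribute("state:acceleration", Sdf.ValueTypeNames.Vector3f)

# Time samples: the vehicle speeds up over the first 48 frames.
vel.Set(Gf.Vec3f(10.0, 0.0, 0.0), 0)
vel.Set(Gf.Vec3f(15.0, 0.0, 0.0), 48)
accel.Set(Gf.Vec3f(0.1, 0.0, 0.0), 0)

stage.GetRootLayer().Save()
```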

Also, the framework comes with a rich toolset and is supported by most major content creation tools.

Scalability and Repeatability

Most applications for generating virtual environments, such as PC games, are targeted at systems with one or two GPUs. While the timing and latency of such architectures may be good enough for consumer games, designing a repeatable simulator for autonomous vehicles requires a much higher level of precision and performance.

Omniverse enables DRIVE Sim to simultaneously simulate multiple cameras, radars and lidars in real time, supporting sensor configurations from Level 2 assisted driving to Level 4 and Level 5 fully autonomous driving.

Together, these new capabilities brought to life by Omniverse deliver a simulation experience that is virtually indistinguishable from reality.

Watch NVIDIA CEO Jensen Huang recap all the news from GTC:

It’s not too late to get access to hundreds of live and on-demand talks at GTC. Register now through Oct. 9 using promo code CMB4KN to get 20 percent off.


Do the Robot: Free Online Training, AI Certifications Make It Easy to Learn and Teach Robotics

On land, underwater, in the air — even underground and on other planets — new autonomous machines and the applications that run on them are emerging daily.

Robots are working on construction sites to improve safety, they’re on factory floors to enhance logistics and they’re roaming farm rows to pick weeds and harvest crops.

As AI-powered autonomous machines proliferate, a new generation of students and developers will play a critical role in teaching and training these robots how to behave in the real world.

To help people get started, we’ve announced the availability of free online training and AI-certification programs. Aptly timed with World Teachers’ Day, these resources open up the immense potential of AI and robotics teaching and learning.

And there’s no better way to get hands-on learning and experience than with the new Jetson Nano 2GB Developer Kit, priced at just $59. NVIDIA CEO Jensen Huang announced this ultimate starter AI computer during the GPU Technology Conference on Monday. Incredibly affordable, the Jetson Nano 2GB helps make AI accessible to everyone.

New AI Certification Programs for Teachers and Students

NVIDIA offers two AI certification tracks to educators, students and engineers looking to reskill. Both are part of the NVIDIA Deep Learning Institute:

  • NVIDIA Jetson AI Specialist: This certification can be completed by anyone and recognizes competency in Jetson and AI using a hands-on, project-based assessment. This track is meant for engineers looking to reskill and for advanced learners building on their knowledge.
  • NVIDIA Jetson AI Ambassador: This certification is for educators and leaders at robotics institutions. It recognizes competency in teaching AI on Jetson using a project-based assessment and an interview with the NVIDIA team. This track is ideal for educators and instructors preparing to teach AI to students.

Additionally, the Duckietown Foundation is offering a free edX course on AI and robotics based on the new NVIDIA Jetson Nano 2GB Developer Kit.

“NVIDIA’s Jetson AI certification materials thoroughly cover the fundamentals with the added advantage of hands-on project-based learning,” said Jack Silberman, Ph.D., lecturer at UC San Diego, Jacobs School of Engineering, Contextual Robotics Institute. “I believe these benefits provide a great foundation for students to prepare for university robotics courses and compete in robotics competitions.”

“We know how important it is to provide all students with opportunities to impact the future of technology,” added Christine Nguyen, STEM curriculum director at the Boys & Girls Club of Western Pennsylvania. “We’re excited to utilize the NVIDIA Jetson AI Specialist certification materials with our students as they work towards being leaders in the fields of AI and robotics.”

“Acquiring new technical skills with a hands-on approach to AI learning becomes critical as AIoT drives the demand for interconnected devices and increasingly complex industrial applications,” said Matthew Tarascio, vice president of Artificial Intelligence at Lockheed Martin. “We’ve used the NVIDIA Jetson platform as part of our ongoing efforts to train and prepare our global workforce for the AI revolution.”

By making it easy to “teach the teachers” with hands-on AI learning and experimentation, Jetson is enabling a new generation to build a smarter, safer AI-enabled future.

Watch NVIDIA CEO Jensen Huang recap autonomous machines news at GTC:

It’s not too late to get access to hundreds of live and on-demand talks at GTC. Register now through Oct. 9 using promo code CMB4KN to get 20 percent off.


Hands-On AI: Duckietown Foundation Offering Free edX Robotics Course Powered by NVIDIA Jetson Nano 2GB

For many, the portal into AI is robotics. And one of the best ways to get good at robotics is to get hands on.

Roll up your sleeves, because this week at NVIDIA’s GPU Technology Conference, the Duckietown Foundation announced that it’s offering a free edX course on AI and robotics using the Duckiebot hardware platform powered by the new NVIDIA Jetson Nano 2GB Developer Kit.

The Duckietown project, which started as an MIT class in 2016, has evolved into an open-source platform for robotics and AI education, research and outreach. The project is coordinated by the Duckietown Foundation, whose mission is to reach and teach a wide audience of students about robotics and AI.

It does this through hands-on learning activities in which students put AI and robotics components together to address modern autonomy challenges for self-driving cars. Solutions are implemented in the Duckietown robotics ecosystem, where the interplay among theory, algorithms and deployment on real robots is witnessed firsthand in a model urban environment.

The Jetson Nano 2GB Developer Kit has the performance and capability to run a diverse set of AI models and frameworks. This makes it the ultimate AI starter computer for learning and creating AI applications.

The new devkit is the latest offering in the NVIDIA Jetson AI at the Edge platform, which ranges from entry-level AI devices to advanced platforms for fully autonomous machines. To help people get started with robotics, NVIDIA also announced the availability of free online training and AI-certification programs.

“The Duckietown educational platform provides a hands-on, scaled down, accessible version of real world autonomous systems,” said Emilio Frazzoli, professor of Dynamic Systems and Control at ETH Zurich and advisor for the Duckietown Foundation. “Integrating NVIDIA’s Jetson Nano power in Duckietown enables unprecedented, affordable access to state-of-the-art compute solutions for learning autonomy.”

Another highlight of the course is the Duckietown Autolab remote infrastructure, which enables remote evaluation of the robotic agents that learners develop on Duckiebot robots at home, providing feedback on assignments. This lets the course offer a realistic development flow with real hardware evaluation.

Duckiebot powered by Jetson Nano 2GB.

Enrollment is now open for the free edX course, called “Self-Driving Cars with Duckietown,” which starts in February. To find out more about the technical specifications of the new NVIDIA-powered Duckiebot, or to pre-order one, check out the Duckietown store.

The AI Driving Olympics

For more advanced students, or for people who just want to witness the fun, Duckietown has created the “AI Driving Olympics” (AI-DO) competition. It focuses on autonomous vehicles, with the objective of evaluating the state of the art in embodied AI by benchmarking novel machine learning approaches to autonomy in a set of fun challenges.

AI-DO is made up of a series of increasingly complex tasks — from simple lane-following to fleet management. For each, competitors use various resources, such as simulation, logs, code templates, baseline implementations, and standardized physical autonomous Duckiebots operating in Duckietown, a formally defined urban environment.

Submissions are evaluated in simulation on the cloud, physically in remote Duckietown Autolabs, and running on actual Duckiebots at the live finals competition.

Participants can take part remotely at any stage of the competition. They just need to send their source code packaged as a Docker image. Teams can then use the Duckietown Autolabs, facilities that allow remote experimentation in reproducible settings.

The next AI-DO race will be at NeurIPS, Dec. 6-12.

Duckietown classes and labs are offered at 80+ universities, including ETH Zürich and Université de Montréal. Curriculum materials for undergraduate and graduate courses are available open source. This includes weekly lecture plans, open source software, and a modular, do-it-yourself hardware smart city environment with autonomous driving car kits.

Watch NVIDIA CEO Jensen Huang recap all the autonomous machines news announced at GTC:


Locomation and Blackshark.ai Innovate in Real and Virtual Dimensions at GTC

The NVIDIA DRIVE ecosystem is going multidimensional.

During the NVIDIA GPU Technology Conference this week, autonomous trucking startup Locomation and simulation company Blackshark.ai announced technological developments powered by NVIDIA DRIVE.

Locomation, a Pittsburgh-based provider of autonomous trucking technology, said it would integrate NVIDIA DRIVE AGX Orin in the upcoming rollout of its platooning system on public roads in 2022.

Innovating in the virtual world, Blackshark.ai detailed its toolset to create buildings and landscape assets for simulation environments on NVIDIA DRIVE Sim.

Together, these announcements mark milestones in the path toward safer, more efficient autonomous transportation.

Shooting for the Platoon

Locomation recently announced its first commercial system, Autonomous Relay Convoy, which allows one driver to pilot a lead truck while a fully autonomous follower truck operates in tandem.

The ARC system will be deployed with Wilson Logistics, which will operate more than 1,000 Locomation-equipped trucks, powered by NVIDIA DRIVE AGX Orin, starting in 2022.

NVIDIA DRIVE AGX Orin is a highly advanced software-defined platform for autonomous vehicles. The system features the new Orin system-on-a-chip, which delivers more than 200 trillion operations per second — nearly 7x the performance of NVIDIA’s previous-generation Xavier SoC.

In August, Locomation and Wilson Logistics successfully completed the first-ever on-road pilot program transporting commercial freight using ARC. Two Locomation trucks, hauling Wilson Logistics trailers and freight, were deployed on a 420-mile route along I-84 between Portland, Ore., and Nampa, Idaho. This stretch of interstate has some of the most challenging road conditions for truck driving, with sharp curves, inclines and wind gusts.

“We’re moving rapidly toward autonomous trucking commercialization, and NVIDIA DRIVE presents a solution for providing a robust, safety-forward platform for our team to work with,” said Çetin Meriçli, CEO and cofounder of Locomation.

Constructing a New Dimension

While Locomation is deploying autonomous vehicles in the real world, Blackshark.ai is making it easier to create building and landscape assets used to enhance the virtual world on a global scale.

The startup has developed a digital twin platform that uses AI and cloud computing to automatically transform satellite data, aerial images or map and sensor data into building, landscape and infrastructure assets that contribute to a semantic photorealistic 3D environment.

During the opening GTC keynote, NVIDIA founder and CEO Jensen Huang showcased the technology on NVIDIA DRIVE Sim. DRIVE Sim uses high-fidelity simulation to create a safe, scalable and cost-effective way to bring self-driving vehicles to our roads.

It taps into the computing horsepower of NVIDIA RTX GPUs to deliver a powerful, scalable, cloud-based computing platform, one capable of generating billions of qualified miles for autonomous vehicle testing.

In the demo video, Blackshark’s AI automatically generated the trees and buildings used to reconstruct the city of San Jose in simulation for an immersive, authentic environment.

These latest announcements from Locomation and Blackshark.ai demonstrate the breadth of the DRIVE ecosystem, spanning the real and virtual worlds to push autonomous innovation further.

Watch NVIDIA CEO Jensen Huang recap all the news from GTC. It’s not too late to get access to hundreds of live and on-demand talks — register now through Oct. 9 using promo code CMB4KN to get 20 percent off.


Swede-sational: Linköping University to Build Country’s Fastest AI Supercomputer

The land famed for its midsummer festivities and everyone’s favorite flatpack furniture store is about to add another jewel to its crown.

Linköping University, home to 36,000 staff and students, has announced its plans to build Sweden’s fastest AI supercomputer, based on the NVIDIA DGX SuperPOD computing infrastructure.

Carrying the name of renowned Swedish scientist Jacob Berzelius — considered to be one of the founders of modern chemistry — the new BerzeLiUs supercomputer will deliver 300 petaflops of AI performance to power state-of-the-art AI research and deep learning models.

The effort is spearheaded by a 300 million Swedish kronor ($33.6 million) donation from the Knut and Alice Wallenberg Foundation to accelerate Swedish AI research across academia and industry. The foundation heads the Wallenberg Artificial Intelligence, Autonomous Systems and Software Program (WASP) network — the country’s largest private research initiative focused on AI innovation.

“I am extremely happy and proud that Linköping University will, through the National Supercomputer Centre, be host for this infrastructure,” said Jan-Ingvar Jönsson, vice-chancellor of Linköping University. “This gives us confidence that Sweden is not simply maintaining its international position, but also strengthening it.”

A Powerful New AI Resource

Hosting world-class supercomputers is nothing new for the team at Linköping University.

Sweden’s National Supercomputer Centre (NSC) already houses six traditional supercomputers on campus, with a combined total of 6 petaflops of performance. Included among these is Tetralith, which has held the title of the most powerful supercomputer in the Nordics since its installation in 2018.

But with BerzeLiUs the team is making a huge leap.

“BerzeLiUs will be more than twice as fast as Tetralith,” confirmed Niclas Andersson, technical director at NSC. “This is a super-fast AI resource — the fastest computing cluster we have ever installed.”

The powerful new AI resource will boost collaboration between academia and leading Swedish industrial companies, primarily through initiatives financed by the Knut and Alice Wallenberg Foundation, such as WASP, as well as other life science and quantum technology programs.

Full Speed Ahead

Building a leading AI supercomputer usually can take years of planning and development. But by building BerzeLiUs with NVIDIA DGX SuperPOD technology, Linköping will be able to deploy the fully integrated system and start running complex AI models as the new year begins.

The system will be built and installed by Atos. Initially, the supercomputer will consist of 60 NVIDIA DGX A100 systems interconnected across an NVIDIA Mellanox InfiniBand fabric and 1.5 petabytes of high-performance storage from DDN. BerzeLiUs will also feature the Atos Codex AI Suite, enabling researchers to speed up processing times on their complex data.
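
That configuration lines up with the headline number: NVIDIA rates each DGX A100 at 5 petaflops of AI performance, and 60 systems × 5 petaflops = 300 petaflops.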

“This new supercomputer will supercharge AI research in Sweden,” said Jaap Zuiderveld, vice president for EMEA at NVIDIA. “It will position Sweden as a leader in academic research, and it will give Swedish businesses a competitive edge in telecommunications, design, drug development, manufacturing and more industries.”

Join Linköping University at GTC

Dive deeper into the cutting-edge research performed at Linköping University. Join Anders Eklund, associate professor at Linköping University, and Joel Hedlund, data director at AIDA, to explore how AI is powering innovation in radiology and pathology imaging.

It’s not too late to get access to hundreds of live and on-demand talks at GTC. Register for GTC now through Oct. 9 using promo code CMB4KN to get 20 percent off. Academics, students, government, and nonprofit attendees join free when registering with their organization’s email address.
