AI for America: US Lawmakers Encourage ‘Massive Investment’ in Artificial Intelligence

The upcoming election isn’t the only thing on lawmakers’ minds. Several congressional representatives have been grappling with U.S. AI policy for years, and their work is moving closer to becoming law.

The issue is one of America’s greatest challenges and opportunities: What should the U.S. do to harness AI for the public good, to benefit citizens and companies, and to extend the nation’s prosperity?

At the GPU Technology Conference this week, a bipartisan group of key members of Congress joined Axios reporter Erica Pandey for our AI for America panel to explore their strategies on AI. Representatives Robin Kelly of Illinois, Will Hurd of Texas and Jerry McNerney of California discussed the immense opportunities of AI, as well as the challenges they see as policymakers.

The representatives’ varied backgrounds gave each a unique perspective. McNerney, who holds a Ph.D. in mathematics, considers AI from the standpoint of science and technology. Hurd was a CIA officer and views it through the lens of national security. Kelly is concerned about the impact of AI on communities, jobs and income.

All agreed that the federal government, private sector and academia must work together to ensure that the United States continues to lead in AI. They also agreed that AI offers enormous benefits for American companies and citizens.

McNerney summed it up by saying: “AI will affect every part of American life.”

Educate the Public, Educate Ourselves

Each legislator recognized that AI will be a boon for everything from sustainable agriculture to the delivery of citizen services. But these benefits will only become reality with support from the public and elected officials.

Kelly emphasized the importance of education — to overcome fear and give workers new skills. Noting that she didn’t have a technical background, she said she considers the challenge from a different perspective than developers.

“We have to educate people and we have to educate ourselves,” she said. “Each community will be affected differently by AI. Education will allay a lot of fears.”

All three agreed that the U.S. federal government, academia and the private sector must collaborate to create this cultural shift. “We need a massive investment in AI education,” said McNerney, who detailed some of the work done at the University of the Pacific to create AI curricula.

Hurd urged Congress to reevaluate and update existing educational programs, making them flexible enough to build programming and data science skills rather than focusing on a full degree. He said we need to “open up federal data for everyone to utilize and take advantage.”

The panel raised other important needs, such as bringing computer science classes into high schools across the country and training federal workers to build AI into their project planning.

Roadmap to a More Efficient Future

Many Americans may not be aware that AI is already a part of their daily lives. The representatives offered some examples, including how AI is being used to maximize crop yields by crunching data on soil characteristics, weather and water consumption.

Hurd and Kelly have been focused on AI policy for several years. Working with the Bipartisan Policy Center, they developed the National Strategy on AI, a policy framework for accelerating U.S. AI R&D and adoption.

They introduced a resolution, backed by a year of work and four white papers, that calls for making GPUs and other computing resources widely available, strengthening international cooperation, increasing funding for R&D, building out workforce training programs, and developing AI ethically in ways that reduce bias and protect privacy.

Ned Finkle, vice president of government affairs at NVIDIA, voiced support for the resolution, noting that the requirements for AI are steep.

“AI requires staggering amounts of data, specialized training and massive computational resources,” he said. “With this resolution, Representatives Hurd and Kelly are presenting a solid framework for urgently needed investments in computing power, workforce training, AI curriculum development and data resources.”

McNerney is also working to spur AI development and adoption. His AI in Government Act, which would direct federal agencies to develop plans to adopt AI and evaluate resources available to academia, has passed the House of Representatives and is pending in the Senate.

As their legislation moves forward, the representatives encourage industry leaders to provide input and support their efforts. They urged those interested to visit their websites and reach out.

Now’s the Time: NVIDIA CEO Speaks Out on Startups, Holodecks

In a conversation that ranged from the automation of software to holodeck-style working environments, NVIDIA CEO and founder Jensen Huang explained why now is the moment to back a new generation of startups as part of this week’s GPU Technology Conference.

Jeff Herbst, vice president of business development and head of the NVIDIA Inception startup accelerator program, moderated the panel, which included CrowdAI CEO Devaki Raj and Babble Labs CEO Chris Rowen.

“AI is going to create a new set of opportunities, because all of a sudden software that wasn’t writable in the past, or we didn’t know how to write in the past, we now have the ability to write,” Huang said.

The conversation comes after Huang revealed on Monday that NVIDIA Inception, which nurtures startups revolutionizing industries with AI and data science, had grown to include more than 6,500 companies.

Huang also envisioned workplaces transformed by automation, thanks to AI and robots of all kinds. When asked by Rowen about the future of NVIDIA’s own campus, Huang said NVIDIA is building a real-life holodeck.

One day, these holodecks will allow employees from all over the world to work together. “People at home will be in VR, while people at the office will be on the holodeck,” Huang said.

Huang said he sees NVIDIA first building one. “Then I would like to imagine our facilities having 10 to 20 of these new holodecks,” he said.

More broadly, AI, Huang explained, will allow organizations of all kinds to turn their data, and their knowledge base, into powerful AI. NVIDIA will play a role as an enabler, giving companies the tools to transition to a new kind of computing.

He described AI as the “automation of automation” and “software writing software.” This gives the vast majority of the world’s population who aren’t coders new capabilities. “In a lot of ways, AI is the best way to democratize computer science,” Huang said.

For more from Huang, Herbst, Raj and Rowen, register for GTC and watch a replay of the conversation. The talk will be available to the general public in 30 days.

Why Retailers Are Investing in Edge Computing and AI

AI is a retailer’s automated helper, acting as a smart assistant to suggest the perfect position for products in stores, accurately predict consumer demand, automate order fulfillment in warehouses, and much more.

The technology can help retailers grow their top line, potentially improving net profit margins from 2 percent to 6 percent — and adding $1 trillion in profits to the industry globally — according to McKinsey Global Institute analysis.

It can also help them hold on to more of what they already have by reducing shrinkage — the loss of inventory due to theft, shoplifting, ticket switching at self-checkout lanes, etc. — which costs retailers $62 billion annually, according to the National Retail Federation.

For retailers, the ability to deploy, manage and scale AI across their entire distributed edge infrastructure using a single, unified platform is critical. Managing so many devices is no small feat for IT teams, as the process can be time-consuming, expensive and complex.

NVIDIA is working with retailers, software providers and startups to create an ecosystem of AI applications for retail, such as intelligent stores, forecasting, conversational AI and recommendation systems, that help retailers pull real-time insights from their data to provide a better shopping experience for their customers.

Smart Retail Managed Remotely at the Edge

The NVIDIA EGX edge AI platform makes it easy to deploy and continuously update AI applications in distributed stores and warehouses. It combines GPU-accelerated edge servers, a cloud-native software stack and containerized applications in NVIDIA NGC, a software catalog that offers a range of industry-specific AI toolkits and pre-trained models.

To provide customers a unified control plane for managing their AI infrastructure, NVIDIA announced a new hybrid-cloud platform, NVIDIA Fleet Command, this week at the GPU Technology Conference.

Fleet Command centralizes the management of servers spread across vast areas. It offers one-touch provisioning, over-the-air software updates, remote management and detailed monitoring dashboards, reducing the burden on IT and helping operational teams get the most out of their AI applications. Early access to Fleet Command is open now.

KION Group Pursues One-Touch Intelligent Warehouse Deployment

KION Group, a global supply chain solutions provider, is looking to use Fleet Command to securely deploy and update its applications through a unified control plane, from anywhere, at any time. The company is using the NVIDIA EGX AI platform to develop AI applications for its intelligent warehouse systems, increasing throughput and efficiency in more than 6,000 retail distribution centers.

The following demo shows how Fleet Command helps KION Group simplify the deployment and management of AI at the edge — from material handling to autonomous forklifts to pick-and-place robotics.

Everseen Scales Asset Protection & Perpetual Inventory Accuracy with Edge AI

Everseen’s AI platform, deployed in many retail stores and distribution centers, uses advanced machine learning, computer vision and deep learning to bring real-time insights to retailers for asset protection and to streamline distribution system processes.

The platform is optimized on NVIDIA T4 Tensor Core GPUs using NVIDIA TensorRT software, resulting in 10x higher inference compute at the edge. This enables Everseen’s customers to reduce errors and shrinkage in real time for faster customer checkout and to optimize operations in distribution centers.
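
As a rough illustration of that optimization path, the sketch below compiles a trained model, here a hypothetical ONNX export, into a TensorRT engine with FP16 enabled for T4 Tensor Cores. The builder calls follow the TensorRT 7-era Python API and vary across versions; this is a sketch, not Everseen’s actual pipeline.

```python
# Minimal sketch: build an FP16 TensorRT engine from an ONNX model for T4.
# The ONNX filename is hypothetical; exact API calls vary by TensorRT version.
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

# Parse a hypothetical ONNX export of a detection model.
with open("detector.onnx", "rb") as f:
    if not parser.parse(f.read()):
        raise RuntimeError(parser.get_error(0))

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)  # FP16 math on T4 Tensor Cores
engine = builder.build_engine(network, config)  # TRT 7.x-style call

# Serialize the optimized engine for deployment at the edge.
with open("detector.engine", "wb") as f:
    f.write(engine.serialize())
```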

Everseen is using the EGX platform and Fleet Command to simplify and scale the deployment and updating of its AI applications on servers across hundreds of retail stores and distribution centers. As AI algorithms are retrained and improve in accuracy with new metadata, updated applications can be securely deployed over the air to hundreds of servers.

Deep North Delivers Transformative Insights with In-Store Analytics

Retailers use Deep North’s AI platform to digitize their shopping locations, analyze anonymous shopper behavior inside stores and conduct visual analytics. The platform gives retailers the ability to predict and adapt to consumer behavior in their commercial spaces and optimize store layout and staffing in high-traffic aisles.

The company uses NVIDIA EGX to simplify AI deployment, server management and device orchestration. With EGX, AI computations are performed at the edge entirely in stores, delivering real-time notifications to store associates for better inventory management and optimized staffing.

By optimizing its intelligent video analytics applications on NVIDIA T4 GPUs with the NVIDIA Metropolis application framework, Deep North has seen orders-of-magnitude improvement in edge compute performance while delivering real-time insights to customers.

Growing AI Opportunities for Retailers

The NVIDIA EGX platform and Fleet Command deliver accelerated, secure AI computing to the edge for retailers today. And a growing number of them are applying GPU computing, AI, robotics and simulation technologies to reinvent their operations for maximum agility and profitability.

To learn more, check out my session on “Driving Agility in Retail with AI” at GTC. Explore how NVIDIA is leveraging AI in retail through GPU-accelerated containers, deep learning frameworks, software libraries and SDKs. And watch how NVIDIA AI is transforming everyday retail experiences:

Also watch NVIDIA CEO Jensen Huang recap all the news at GTC: 

From Content Creation to Collaboration, NVIDIA Omniverse Transforms Entertainment Industry

There are major shifts happening in the media and entertainment industry.

With the rise of streaming services, there’s a growing demand for high-quality programming and an increasing need for fresh content to satisfy hundreds of millions of subscribers.

At the same time, teams are often collaborating on complex assets using multiple applications while working from different geographic locations. New pipelines are emerging and post-production workflows are being integrated earlier into processes, boosting the need for real-time collaboration.

By extending our Omniverse 3D simulation and collaboration platform to run on the NVIDIA EGX AI platform, NVIDIA is making it even easier for artists, designers, technologists and other creative professionals to accelerate workflows for productions — from asset creation to live on-set collaboration.

The EGX platform leverages the power of NVIDIA RTX GPUs, NVIDIA Virtual Data Center Workstation software, and NVIDIA Omniverse to fundamentally transform the collaborative process during digital content creation and virtual production.

Professionals and studios around the world can use this combination to lower costs, boost creativity across applications and teams, and accelerate production workflows.

Driving Real-Time Collaboration, Increased Interactivity

The NVIDIA EGX platform delivers the power of the NVIDIA Ampere architecture on a range of validated servers and devices, and a vast ecosystem of partners offers EGX through their products and services. Professional creatives can use these to bring the most significant advancements in computer graphics into their film and television content creation pipelines.

To support third-party digital content creation applications, Omniverse Connect libraries are distributed as plugins that enable client applications to connect to Omniverse Nucleus and to publish and subscribe to individual assets and full worlds. Supported applications for common film and TV content creation pipelines include Epic Games Unreal Engine, Autodesk Maya, Autodesk 3ds Max, SideFX Houdini, Adobe Photoshop, Substance Painter by Adobe, and Unity.

NVIDIA Virtual Workstation software provides the most powerful virtual workstations from the data center or cloud to any device, anywhere. IT departments can virtualize any application from the data center with a native workstation user experience, while eliminating constrained workflows and flexibly scaling GPU resources.

Studios can optimize their infrastructure by efficiently centralizing applications and data. This dramatically reduces IT operating expenses and allows companies to focus IT resources on managing strategic projects instead of individual workstations — all while enabling a more flexible, remote real-time environment with stronger data security.

With NVIDIA Omniverse, creative teams have the ability to deliver real-time results by creating, iterating and collaborating on the same assets while using a variety of applications. Omniverse powered by the EGX platform and NVIDIA Virtual Workstation allows artists to focus on creating high-quality content without waiting for long render times.

“Real-time ray tracing massive datasets in a remote workstation environment is finally possible with the new RTX A6000, HP ZCentral and NVIDIA’s Omniverse,” said Chris Eckardt, creative director and CG supervisor at Framestore.

Elevating Content Creation Across the World

During content creation, artists need to design and iterate quickly on assets, while collaborating with remote teams and other studios working on the same productions. With Omniverse running on the NVIDIA EGX platform, users can access the power of a high-end virtual workstation to rapidly create, iterate and present compelling renders using their preferred application.

Creative professionals can quickly combine terrain from one shot with characters from another without removing any data, which drives more efficient collaboration. Teams can communicate their designs more effectively by sharing high-fidelity ray-traced models with one click, so colleagues or clients can view the assets on a phone, tablet or in a browser. Along with the ability to mark up models in Omniverse, this accelerates the decision-making process and reduces design review cycles to help keep projects on track.

Taking Virtual Productions to the Next Level

With more film and TV projects using new virtual production techniques, studios are under immense pressure to iterate as quickly as possible to keep the cameras rolling. With in-camera VFX, the concept of fixing it in post-production has moved to fixing it all on set.

With the NVIDIA EGX platform and NVIDIA Virtual Workstations running Omniverse, users gain access to secure, up-to-date datasets from any device, ensuring they maintain productivity when working live on set.

Artists get a smooth experience with Unreal Engine, Maya, Substance Painter and other apps, quickly creating and iterating on scene files, while the interoperability of these tools in Omniverse improves collaboration. Teams can instantly view photorealistic renderings of their models with the RTX Renderer, so they can rapidly assess options for the most compelling images.

Learn more at https://developer.nvidia.com/nvidia-omniverse-platform.

It’s not too late to get access to hundreds of live and on-demand talks at GTC. Register now through Oct. 9 using promo code CMB4KN to get 20 percent off.

AI, 5G Will Energize U.S. Economy, Says FCC Chair at GTC

Ajit Pai recalls a cold February day, standing in a field at the Wind River reservation in central Wyoming with Arapaho Indian leaders, hearing how they used a Connect America Fund grant to link schools and homes to gigabit fiber Internet.

It was one of many technology transformations the chairman of the U.S. Federal Communications Commission witnessed in visits to 49 states.

“Those trips redouble my motivation to do everything we can to close the digital divide because I want to make sure every American can participate in the digital economy,” said Pai in an online talk at NVIDIA’s GTC event.

Technologies like 5G and AI promise to keep that economy vibrant across factories, hospitals, warehouses and farm fields.

“I visited a corn farmer in Idaho who wants his combine to upload data to the cloud as it goes through the field to determine what water and pesticide to apply … AI will be transformative,” Pai said.

“AI is definitely the next industrial revolution, and America can help lead it,” said Soma Velayutham, NVIDIA’s general manager for AI in telecoms and 5G and host of the online talk with Pai.

AI a Fundamental Part of 5G

Shining a light on machine learning and 5G, the FCC has hosted forums on AI and open radio-access networks that included participants from AT&T, Dell, IBM, Hewlett Packard Enterprise, Nokia, NVIDIA, Oracle, Qualcomm and Verizon.

“It was striking to see how many people think AI will be a fundamental part of 5G, making it a much smarter network with optimizations using powerful AI algorithms to look at spectrum allocations, consumer use cases and how networks can address them,” Pai said.

For example, devices can use machine learning to avoid interference and optimize use of unlicensed spectrum the FCC is opening up for Wi-Fi at 6 GHz. “Someone could hire a million people to work that out, but it’s much more powerful to use AI,” he said.

“AI is really good at resource optimization,” said Velayutham. “AI can efficiently manage 5G network resources, optimizing the way we use and monetize spectrum,” he added.

AI Saves Spectrum, 5G Delivers Cool Services

Telecom researchers in Asia, Europe and the U.S. are using NVIDIA technologies to build software-defined radio access networks that can modulate more services into less spectrum, enabling new graphics and AI services.

In the U.K., telecom provider BT is working with an NVIDIA partner on edge computing applications, such as streaming coverage of sporting events over 5G with CloudXR, a mix of virtual and augmented reality.

In closing, Pai addressed developers in the GTC audience, thanking them and “all the innovators for doing this work. You have a friend at the FCC who recognizes your innovation and wants to be a partner with it,” he said.

To hear more about how AI will transform industries at the edge of the network, watch a portion of the GTC keynote below by NVIDIA CEO Jensen Huang.

AI Artist Pindar Van Arman’s Painting Robots Visit GTC 2020

Pindar Van Arman is a veritable triple threat — he can paint, he can program and he can program robots that paint.

Van Arman first started incorporating robots into his artistic method 15 years ago to save time, coding a robot to paint the beginning stages of an art piece, like “a printer that can pick up a brush.”

It wasn’t until Van Arman took part in the DARPA Grand Challenge, a prize competition for autonomous vehicles, that he was inspired to bring AI into his art.

Now, his robots are capable of creating artwork all on their own through the use of deep neural networks and feedback loops. Van Arman is never far away, though, sometimes pausing a robot to adjust its code and provide it some artistic guidance.

Van Arman’s work is on display in the AI Art Gallery at GTC 2020, and he’ll be giving conference attendees a virtual tour of his studio on Oct. 8 at 11 a.m. Pacific time.

Key Points From This Episode:

  • One of Van Arman’s most recent projects is artonomous, an artificially intelligent painting robot that is learning the subtleties of fine art. Anyone can submit their photo to be included in artonomous’ training set.
  • Van Arman predicts that AI will become even more creative, independent of its human creator. He predicts that AI artists will learn to program a variety of coexisting networks that give AI a greater understanding of what defines art.

Tweetables:

“I’m trying to understand myself better by exploring my own creativity — by trying to capture it in code, breaking it down and distilling it” — Pindar Van Arman [4:22]

“I’d say 50% of the paintings are completely autonomous, and 50% of the paintings are directed by me. 100% of them, though, are my art” — Pindar Van Arman [17:20]

You Might Also Like

How Tattoodo Uses AI to Help You Find Your Next Tattoo

Picture this: you find yourself in a tattoo parlor. But none of the dragons, flaming skulls or gothic-font lifestyle mottos you see on the wall seem like something you want on your body. So what do you do? You turn to AI, of course. We spoke to two members of the development team at Tattoodo.com, who created an app that uses deep learning to help you create the tattoo of your dreams.

UC Berkeley’s Pieter Abbeel on How Deep Learning Will Help Robots Learn

Robots can do amazing things. Compare even the most advanced robots to a three-year-old, however, and they can come up short. UC Berkeley Professor Pieter Abbeel has pioneered the idea that deep learning could be the key to bridging that gap: creating robots that can learn how to move through the world more fluidly and naturally.

How AI’s Storming the Fashion Industry

Costa Colbert — who holds degrees ranging from neural science to electrical engineering — is working at MAD Street Den to bring machine learning to fashion. He’ll explain how his team is using generative adversarial networks to create images of models wearing clothes.

Tune in to the AI Podcast

Get the AI Podcast through iTunes, Google Podcasts, Google Play, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn. If your favorite isn’t listed here, drop us a note.

Make the AI Podcast Better

Have a few minutes to spare? Fill out this listener survey. Your answers will help us make a better podcast.

Bada Bing Bada Boom: Microsoft Turns to Turing-NLG, NVIDIA GPUs to Instantly Suggest Full-Phrase Queries

Hate hunting and pecking away at your keyboard every time you have a quick question? You’ll love this.

Microsoft’s Bing search engine has turned to Turing-NLG and NVIDIA GPUs to suggest full sentences for you as you type.

Turing-NLG is a cutting-edge, large-scale unsupervised language model that has achieved strong performance on language modeling benchmarks.

It’s just the latest example of an AI technique called unsupervised learning, which makes sense of vast quantities of data by extracting features and patterns without the need for humans to provide any pre-labeled data.

Microsoft calls this Next Phrase Prediction, and it can feel like magic, making full-phrase suggestions in real time for long search queries.

Turing-NLG is among several innovations — from model compression to state caching and hardware acceleration — that Bing has harnessed with Next Phrase Prediction.

Over the summer, Microsoft worked with engineers at NVIDIA to optimize Turing-NLG to their needs, accelerating the model on NVIDIA GPUs to power the feature for users worldwide.

A key part of this work was running the massive AI model fast enough to power a real-time search experience. With a combination of hardware and model optimizations, Microsoft and NVIDIA achieved an average latency below 10 milliseconds.

By contrast, it takes more than 100 milliseconds to blink your eye.

Learn more about the next wave of AI innovations at Bing.

Before the introduction of Next Phrase Prediction, the approach for handling query suggestions for longer queries was limited to completing the current word being typed by the user.

Now type in “The best way to replace,” and you’ll immediately see three suggestions for completing the phrase: wood, plastic and metal. Type in “how can I replace a battery for,” and you’ll see “iphone, samsung, ipad and kindle” all suggested.
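
Bing’s production stack is proprietary, but the core mechanic, generating several short continuations of a typed prefix with a language model, can be sketched with a small public model. The snippet below uses GPT-2 from the Hugging Face transformers library as an illustrative stand-in for Turing-NLG:

```python
# Illustrative next-phrase suggestion with a small public language model.
# GPT-2 is a stand-in here; Bing's Turing-NLG is far larger and is served
# with heavy model and hardware optimizations not shown in this sketch.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prefix = "the best way to replace"
input_ids = tokenizer.encode(prefix, return_tensors="pt")

# Sample a few continuations just a handful of tokens long; a production
# system would add ranking, filtering, caching and latency work on top.
outputs = model.generate(
    input_ids,
    max_length=input_ids.shape[1] + 4,
    num_return_sequences=3,
    do_sample=True,
    top_k=50,
    pad_token_id=tokenizer.eos_token_id,
)
for out in outputs:
    print(tokenizer.decode(out, skip_special_tokens=True))
```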

With Next Phrase Prediction, Bing can now present users with full-phrase suggestions.

The more characters you type, the closer Bing gets to what you probably want to ask.

And because these suggestions are generated instantly, they’re not limited to previously seen data or just the current word being typed.

So, for some queries, Bing won’t just save you a few keystrokes — but multiple words.

As a result of this work, the coverage of autosuggestion completions increases considerably, Microsoft reports, improving the overall user experience “significantly.”

Coronavirus Gets a Close-Up: Folding@home Live in NVIDIA Omniverse

For researchers like Max Zimmerman, it was a welcome pile-on to tackle a global pandemic.

A million citizen scientists donated time on their home systems so the Folding@home consortium could calculate the intricate movements of proteins inside the coronavirus. Then a team of NVIDIA simulation experts combined the best tools from multiple industries to let the researchers see their data in a whole new way.

“I’ve been repeatedly amazed with the unprecedented scale of scientific collaborations,” said Zimmerman, a postdoctoral fellow at the Washington University School of Medicine in St. Louis, which hosts one of the eight labs that keep the Folding@home research network humming.

As a result, Zimmerman and colleagues published a paper on bioRxiv showing images of 17 weak spots in coronavirus proteins that antiviral drug makers can attack. And the high-res simulation of the work continues to educate researchers and the public alike about the bad actor behind the pandemic.

“We are in a position to make serious headway towards understanding the molecular foundations of health and disease,” he added.

An Antiviral Effort Goes Viral

In mid-March, the Folding@home team put many long-running projects on hold to focus on studying key proteins behind COVID. They issued a call for help, and by the end of the month the network swelled to become the world’s first exascale supercomputer, fueled in part by more than 280,000 NVIDIA GPUs.

Researchers harnessed that power to search for vulnerable moments in the rapid and intricate dance of the folding proteins, split-second openings drug makers could exploit. Within three months, computers found many promising motions that traditional experiments could not see.

“We’ve simulated nearly the entire proteome of the virus and discovered more than 50 new and novel targets to aid in the design of antivirals. We have also been simulating drug candidates in known targets, screening over 50,000 compounds to identify 300 drug candidates,” Zimmerman said.

The coronavirus uses cunning techniques to avoid human immune responses, like the spike protein keeping its head down in a closed position. With the power of an exaflop at their disposal, researchers simulated the proteins folding for a full tenth of a second, orders of magnitude longer than prior work.

Though the time sampled was relatively short, the dataset to enable it was vast.

The SARS-CoV-2 spike protein alone consists of 442,881 atoms in constant motion. In just 1.2 microseconds of simulated motion, it generates about 300 billion timestamps: freeze frames that researchers must track.
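
A back-of-envelope reading of those figures, under one assumption worth flagging: if each “timestamp” is a per-atom position record (our interpretation; the source doesn’t say), the numbers imply roughly how often a full snapshot of the protein was saved.

```python
# Back-of-envelope check of the spike-protein figures above. Assumes each
# "timestamp" counts one per-atom position record, which is an
# interpretation on our part, not something stated in the source.
atoms = 442_881        # atoms in the SARS-CoV-2 spike protein
records = 300e9        # ~300 billion timestamps
sim_time = 1.2e-6      # 1.2 microseconds of simulated motion

frames = records / atoms              # whole-protein snapshots implied
interval_ps = sim_time / frames * 1e12

print(f"{frames:,.0f} snapshots")             # ~677,000
print(f"~{interval_ps:.1f} ps per snapshot")  # ~1.8 picoseconds
```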

Combined with the two dozen other coronavirus proteins they studied, Folding@home amassed the largest collection of molecular simulations in history.

Omniverse Simulates a Coronavirus Close Up

The dataset “ended up on my desk when someone asked what we could do with it using more than the typical scientific tools to really make it shine,” said Peter Messmer, who leads a scientific visualization team at NVIDIA.

Using Visual Molecular Dynamics, a standard tool for scientists, he pulled the data into NVIDIA Omniverse, a platform built for collaborative 3D graphics and simulation that will soon be in open beta. Then the magic happened.

The team connected Autodesk’s Maya animation software to Omniverse to visualize a camera path, creating a fly-through of the proteins’ geometric intricacies. The platform’s core technologies such as NVIDIA Material Definition Language (MDL) let the team give tangible surface properties to molecules, creating translucent or glowing regions to help viewers see critical features more clearly.

With Omniverse, “researchers are not confined to scientific visualization tools, they can use the same tools the best artists and movie makers use to deliver a cinematic rendering — we’re bringing these two worlds together,” Messmer said.

Simulation Experts Share Their Story Live

The result was a visually stunning presentation where each spike on a coronavirus protein is represented with more than 1.8 million triangles, rendered by a bank of NVIDIA RTX GPUs.

Zimmerman and Messmer will co-host a live Q&A technical session Oct. 8 at 11 a.m. PDT to discuss how they developed the simulation, which packs nearly 150 million triangles to represent a millisecond in a protein’s life.

The work validates the mission behind Omniverse to create a universal virtual environment that spans industries and disciplines. We’re especially proud to see the platform serve science in the fight against the pandemic.

The experience made Zimmerman “incredibly optimistic about the future of science. NVIDIA GPUs have been instrumental in generating our datasets, and now those GPUs running Omniverse are helping us see our work in a new and vivid way,” he said.

Visit NVIDIA’s COVID-19 Research Hub to learn more about how AI and GPU-accelerated technology continues to fight the pandemic. And watch NVIDIA CEO Jensen Huang describe in a portion of his GTC keynote below how Omniverse is playing a role.

Simulation Without Limits: DRIVE Sim Levels Up with NVIDIA Omniverse

The line between the physical and virtual worlds is blurring as autonomous vehicle simulation sharpens with NVIDIA Omniverse, our photorealistic 3D simulation and collaboration platform.

During the GPU Technology Conference keynote, NVIDIA founder and CEO Jensen Huang showcased for the first time NVIDIA DRIVE Sim running on NVIDIA Omniverse. DRIVE Sim leverages the cutting-edge capabilities of the platform for end-to-end, physically accurate autonomous vehicle simulation.

Omniverse was architected from the ground up to support multi-GPU, large-scale, multisensor simulation for autonomous machines. It enables ray-traced, physically accurate, real-time sensor simulation with NVIDIA RTX technology.

The video shows a digital twin of a Mercedes-Benz EQS driving a 17-mile route around a recreated version of the NVIDIA campus in Santa Clara, Calif. It includes Highways 101 and 87 and Interstate 280, with traffic lights, on-ramps, off-ramps and merges as well as changes to the time of day, weather and traffic.

To achieve the real-world replica of the testing loop, the real environment was scanned at 5-cm accuracy and recreated in simulation. The hardware, software, sensors, car displays and human-machine interaction were all implemented in simulation in the exact same way as the real world, enabling bit- and timing-accurate simulation.

Physically Accurate Sensor Simulation 

Autonomous vehicle simulation requires accurate physics and light modeling. This is especially critical for simulating sensors, which requires modeling rays beyond the visible spectrum and accurate timing between the sensor scan and environment changes.

Ray tracing is perfectly suited for this, providing realistic lighting by simulating the physical properties of light. And the Omniverse RTX renderer coupled with NVIDIA RTX GPUs enables ray tracing at real-time frame rates.
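
What a physically based ray tracer computes can be written down compactly: the classic rendering equation, which says the light leaving a surface point is its own emission plus all incoming light scattered toward the viewer. This is the standard graphics formulation, not anything DRIVE Sim-specific.

```latex
% Outgoing radiance L_o at point x in direction \omega_o: emission plus
% incoming radiance L_i weighted by the BRDF f_r and the cosine of the
% incidence angle, integrated over the hemisphere \Omega around normal n.
L_o(x, \omega_o) = L_e(x, \omega_o)
  + \int_{\Omega} f_r(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\,
    (\omega_i \cdot n)\, \mathrm{d}\omega_i
```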

The capability to simulate light in real time has significant benefits for autonomous vehicle simulation. In the video, the vehicles show complex reflections of objects in the scene — including those not directly in the frame, just as it would in the real world. This also applies to other reflective surfaces such as wet roadways, reflective signs and buildings.

The Mercedes EQS shows the complexity of reflections enabled with ray tracing, including reflections of objects that are in the scene, but not in the frame.

RTX also enables high-fidelity shadows. In virtual environments, shadows are typically pre-computed, or pre-baked, but pre-baking isn’t possible in a dynamic simulation environment, so RTX computes shadows at runtime. In the night parking example from the video, the shadows from the lights are rendered directly instead of being pre-baked, producing shadows that appear softer and are much more accurate.

Nighttime parking scenarios show the benefit of ray tracing for complex shadows generated by dynamic light sources.

Universal Scene Description

DRIVE Sim is based on Universal Scene Description, an open framework developed by Pixar to build and collaborate on 3D content for virtual worlds.

USD provides a high level of abstraction to describe scenes in DRIVE Sim. For instance, USD makes it easy to define the state of the vehicle (position, velocity, acceleration) and trigger changes based on its proximity to other entities such as a landmark in the scene.

Also, the framework comes with a rich toolset and is supported by most major content creation tools.
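
As a small example of that abstraction, the sketch below builds a scene with Pixar’s open-source pxr Python bindings. The prim path and attribute names are illustrative, not DRIVE Sim’s actual schema:

```python
# Minimal USD sketch: define an ego-vehicle prim and attach state attributes.
# Names here are illustrative; DRIVE Sim's real schemas are not public.
from pxr import Usd, UsdGeom, Sdf, Gf

stage = Usd.Stage.CreateNew("drive_scene.usda")

# An Xform prim carries the vehicle's pose in the world.
vehicle = UsdGeom.Xform.Define(stage, "/World/EgoVehicle")
vehicle.AddTranslateOp().Set(Gf.Vec3d(0.0, 0.0, 0.0))

# Custom attributes for the rest of the vehicle state.
prim = vehicle.GetPrim()
prim.CreateAttribute("velocity", Sdf.ValueTypeNames.Float3).Set(
    Gf.Vec3f(15.0, 0.0, 0.0))
prim.CreateAttribute("acceleration", Sdf.ValueTypeNames.Float3).Set(
    Gf.Vec3f(0.0, 0.0, 0.0))

stage.GetRootLayer().Save()
```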

Scalability and Repeatability

Most applications for generating virtual environments target systems with one or two GPUs, such as PC games. While the timing and latency of such architectures may be good enough for consumer games, a repeatable simulator for autonomous vehicles requires a much higher level of precision and performance.

Omniverse enables DRIVE Sim to simultaneously simulate multiple cameras, radars and lidars in real time, supporting sensor configurations from Level 2 assisted driving to Level 4 and Level 5 fully autonomous driving.

Together, these new capabilities brought to life by Omniverse deliver a simulation experience that is virtually indistinguishable from reality.

Watch NVIDIA CEO Jensen Huang recap all the news from GTC:

It’s not too late to get access to hundreds of live and on-demand talks at GTC. Register now through Oct. 9 using promo code CMB4KN to get 20 percent off.

Do the Robot: Free Online Training, AI Certifications Make It Easy to Learn and Teach Robotics

On land, underwater, in the air — even underground and on other planets — new autonomous machines and the applications that run on them are emerging daily.

Robots are working on construction sites to improve safety, they’re on factory floors to enhance logistics and they’re roaming farm rows to pick weeds and harvest crops.

As AI-powered autonomous machines proliferate, a new generation of students and developers will play a critical role in teaching and training these robots how to behave in the real world.

To help people get started, we’ve announced the availability of free online training and AI certification programs. Aptly timed with World Teachers’ Day, these resources open up the immense potential of AI and robotics teaching and learning.

And there’s no better way to get hands-on experience than with the new Jetson Nano 2GB Developer Kit, priced at just $59. NVIDIA CEO Jensen Huang announced this ultimate starter AI computer during the GPU Technology Conference on Monday. Incredibly affordable, the Jetson Nano 2GB helps make AI accessible to everyone.
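
For a taste of what a first project on the kit looks like, here’s a minimal object-detection loop in the style of the Hello AI World tutorial from NVIDIA’s open-source jetson-inference project. It assumes a MIPI CSI camera and an attached display:

```python
# Minimal object detection on a Jetson Nano, following the style of the
# jetson-inference "Hello AI World" tutorial (github.com/dusty-nv/jetson-inference).
import jetson.inference
import jetson.utils

net = jetson.inference.detectNet("ssd-mobilenet-v2", threshold=0.5)
camera = jetson.utils.videoSource("csi://0")       # MIPI CSI camera
display = jetson.utils.videoOutput("display://0")  # attached screen

while display.IsStreaming():
    img = camera.Capture()
    detections = net.Detect(img)  # inference runs on the Nano's GPU
    display.Render(img)
    display.SetStatus(f"{len(detections)} objects detected")
```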

New AI Certification Programs for Teachers and Students

NVIDIA offers two AI certification tracks to educators, students and engineers looking to reskill. Both are part of the NVIDIA Deep Learning Institute:

  • NVIDIA Jetson AI Specialist: This certification can be completed by anyone and recognizes competency in Jetson and AI using a hands-on, project-based assessment. This track is meant for engineers looking to reskill and advanced learners to build on their knowledge.
  • NVIDIA Jetson AI Ambassador: This certification is for educators and leaders at robotics institutions. It recognizes competency in teaching AI on Jetson using a project-based assessment and an interview with the NVIDIA team. This track is ideal for educators or instructors to get fully prepared to teach AI to students.

Additionally, the Duckietown Foundation is offering a free edX course on AI and robotics based on the new NVIDIA Jetson Nano 2GB Developer Kit.

“NVIDIA’s Jetson AI certification materials thoroughly cover the fundamentals with the added advantage of hands-on project-based learning,” said Jack Silberman, Ph.D., lecturer at UC San Diego, Jacobs School of Engineering, Contextual Robotics Institute. “I believe these benefits provide a great foundation for students to prepare for university robotics courses and compete in robotics competitions.”

“We know how important it is to provide all students with opportunities to impact the future of technology,” added Christine Nguyen, STEM curriculum director at the Boys & Girls Club of Western Pennsylvania. “We’re excited to utilize the NVIDIA Jetson AI Specialist certification materials with our students as they work towards being leaders in the fields of AI and robotics.”

“Acquiring new technical skills with a hands-on approach to AI learning becomes critical as AIoT drives the demand for interconnected devices and increasingly complex industrial applications,” said Matthew Tarascio, vice president of Artificial Intelligence at Lockheed Martin. “We’ve used the NVIDIA Jetson platform as part of our ongoing efforts to train and prepare our global workforce for the AI revolution.”

By making it easy to “teach the teachers” with hands-on AI learning and experimentation, Jetson is enabling a new generation to build a smarter, safer AI-enabled future.

Watch NVIDIA CEO Jensen Huang recap autonomous machines news at GTC:

It’s not too late to get access to hundreds of live and on-demand talks at GTC. Register now through Oct. 9 using promo code CMB4KN to get 20 percent off.
