Data Makes It Beta: Roborace Returns for Second Season with Updateable Self-Driving Vehicles Powered by NVIDIA DRIVE

Amid the COVID-19 pandemic, live sporting events are mostly being held without fans in the stands. At Roborace, they’re removing humans from the field as well, without sacrificing any of the action.

Roborace is envisioning the autonomous racing of the future. Teams compete using standardized cars powered by their own AI algorithms in a series of races testing capabilities such as speed and object detection. Last month, the startup launched its Season Beta, with entirely autonomous races streamed live online for a virtual audience.

This second season features Roborace’s latest vehicle, the Devbot 2.0, a state-of-the-art race car capable of both human and autonomous operation and powered by the NVIDIA DRIVE AGX platform. Devbot was designed by legendary movie designer Daniel Simon, who has envisioned worlds straight out of science fiction for films such as Tron, Thor and Captain America.

Each Season Beta event consists of two races. In the first, teams race their Devbots autonomously on an unobstructed track. Next, the challenge is to navigate the same track populated with virtual objects, some granting time bonuses and others imposing time penalties. The team with the fastest overall time wins.
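
As a toy illustration of that format, here is how an adjusted time could be computed; the lap time and object values below are invented for illustration and are not Roborace’s actual scoring rules.

```python
# Toy illustration of the second-race format: virtual objects adjust the
# clock, and the fastest adjusted time wins. All values here are invented.
lap_time = 95.2                         # seconds to complete the lap
object_adjustments = [-2.0, 1.5, -1.0]  # negatives = bonuses, positives = penalties
adjusted = lap_time + sum(object_adjustments)
print(f"adjusted time: {adjusted:.1f} s")  # 93.7 s
```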

One of the virtual objects a vehicle must navigate in Roborace Season Beta.

These competitions are intended to put self-driving technology to the test in the extreme conditions of performance racing, pushing innovation in both AI and the sport of racing itself. Teams from universities around the world have been able to leverage critical data from each race, developing smarter and faster algorithms for each new event.

From the Starting Line

Season Beta’s inaugural event provided the ideal launching point for iterative AI algorithm development.

The first two races took place on Sept. 24 and 25 at the world-renowned Anglesey National Circuit in Wales. Teams from the Massachusetts Institute of Technology, Carnegie Mellon University, the University of Graz in Austria and the Technical University of Pisa, along with commercial racing team Acronis, all took to the track to put their AV algorithms through their paces.

Racing stars such as Dario Franchitti and commentators Andy McEwan and Matt Roberts helped deliver the electrified atmosphere of high-speed competition to the virtual racing event.

Radio interruptions and other issues kept the teams from completing the race. However, the learnings from Wales are set to make the second installment of Roborace Season Beta a can’t-miss event.

Ready for Round Two

The autonomous racing season continues this week at Thruxton Circuit in Hampshire, U.K. The same set of teams will be joined by a guest team from Warwick Engineering Society and Warwick University for a second chance at AV racing glory.

Sergio Pininfarina, CEO of the legendary performance brand Pininfarina, will join the suite of television presenters to provide color commentary on the races.

The high-performance, energy-efficient NVIDIA DRIVE AGX platform makes it easy to enhance self-driving algorithms and add new deep neural networks for continuous improvement. By leveraging the NVIDIA AI compute platform, Roborace teams can quickly update their vehicles from last month’s race for optimal performance.

Be sure to tune in live from Oct. 28 to Oct. 30 to witness the future of racing in action, catch up on highlights and mark your calendar for the rest of Roborace Season Beta.

SoftBank Group, NVIDIA CEOs on What’s Next for AI

Good news: AI will soon be everywhere. Better news: it will be put to work by everyone.

Sharing a vision of AI enabling humankind, NVIDIA CEO Jensen Huang on Wednesday joined Masayoshi Son, chairman and CEO of SoftBank Group Corp., as a guest for Son’s keynote at the annual SoftBank World conference.

“For the first time, we’re going to democratize software programming,” Huang said. “You don’t have to program the computer; you just have to teach the computer.”

Son is a legendary entrepreneur, investor and philanthropist who pioneered the development of the PC industry, the internet and mobile computing in Japan.

A Technological Jewel

The online conversation comes six weeks after NVIDIA agreed to acquire Arm from SoftBank in a transaction valued at $40 billion. Huang described Arm as “one of the technology world’s great jewels” in his conversation with Son.

“The reason why combining Arm and NVIDIA makes so much sense is because we can then bring NVIDIA’s AI to the most popular edge CPU in the world,” Huang said while seated beside the fireplace of his Silicon Valley home.

Arm has long provided its intellectual property to many chipset vendors, who deploy it on many different applications, in many different systems-on-a-chip, or SoCs, Son explained.

Huang said the combined company would “absolutely” continue this.

An Ecosystem Like No Other

“Of course the CPU is fantastic, energy-efficient and it’s improving all the time, thanks to incredible computer scientists building the best CPU in the world,” Huang said. “But the true value of Arm is in the ecosystem of Arm — the 500 companies that use Arm today.”

That ecosystem is growing fast. Son said it won’t be long until a trillion Arm-based SoCs have been shipped. Making NVIDIA AI available to those trillion chipsets “will be an amazing combination,” Son said.

“Our dream is to bring NVIDIA’s AI to Arm’s ecosystem, and the only way to bring it to the Arm ecosystem is through all of the existing customers, licensees and partners,” Huang said. “We would like to offer the licensees more, even more.”

Arm, Son said, provides toolsets to enable companies to create SoCs for very different applications, from game machines and home appliances to robots that fly or run or swim. These devices will, in turn, communicate with cloud AI “so each of them become smarter.”

“That’s the reason why combining Arm and NVIDIA makes so much sense because we can then bring NVIDIA AI to the most popular edge CPU in the world,” Huang said.

‘Intelligence at Scale’

That will allow even more companies to participate in the AI boom.

“AI is a new kind of computer science; the software is different, the chips are different, the methodology is different,” Huang said.

It’s a huge shift, Son agreed.

First, Son said, computers enabled advancements in calculation; next, came the ability to store massive amounts of data; and “now, finally, computers are the ears and the eyes, so they can recognize voice and speech.”

“It’s intelligence at scale,” Huang responded. “That’s the reason why this age of AI is such an important time.”

Extending Human Capabilities

Son and Huang spoke about how enterprises worldwide — from AstraZeneca and GlaxoSmithKline in drug discovery, to American Express in banking, to Walmart in retail, to Microsoft in software, to Kubota in agriculture — are now adopting NVIDIA AI tools.

Huang cited a new generation of systems, known as recommender systems, that are already helping humans sort through the vast array of choices available online in everything from what clothes they wear to what music they listen to.

Huang and Son described such systems — and AI more broadly — as a way to extend human capabilities.

“Humans will always be in the loop,” Huang said.

“We have a heart, a desire to be nice to other humans,” Son said. “We will utilize AI as a tool, for our happiness, for our joy — humans will choose which recommendations to take.”

‘Perpetually Learning Machines’

Such intelligent systems are being woven into the world around us, through smart, connected systems, or “edge AI,” Son said, which will work hand in hand with powerful cloud AI systems able to aggregate input from devices in the real world.

The result will be a “learning loop,” or “perpetually learning machines,” Huang said.

“The cloud side will aggregate information from edge AI, it will become smarter and smarter,” Son said.

Democratizing AI

One result: computing will finally be democratized, Huang said. Only a small number of people want to pursue a career as a computer programmer, but “everyone can teach,” Huang said.

“You [will] just ask the computer, ‘This is what I want to do, can you give me a solution?,’” Son responded. “Then the computer will give us the solution and the tools to make it happen.”

Such tools will amplify Japan’s strengths in precision engineering and manufacturing.

“This is the time of AI for Japan,” Huang said.

Huang described how, in tools such as NVIDIA Omniverse, a digital factory can be continually optimized.

“This robotic factory will be filled with robots that will build robots in virtual reality,” Huang said. “The whole thing will be simulated … and when you come in in the morning the whole thing will be optimized more than it was when you went to bed.”

Once it’s ready, a physical twin of the digital factory can be built and continually optimized with lessons learned in the virtual one.

“It’s the concept of the metaverse,” Son said, referring to the shared online world imagined in Neal Stephenson’s 1992 cyberpunk classic, “Snow Crash.”

“… and it’s right in front of us now,” Huang added.

Connecting Humans with One Another

AI will do more than extend human capabilities; it will also help humans better connect with one another.

Video conferencing will soon account for the vast majority of the world’s internet traffic, Huang said. Using AI to reconstruct a speaker’s facial expressions can “reduce bandwidth” by a factor of 10.
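
A rough, back-of-the-envelope sketch of where such savings can come from: send facial keypoints and reconstruct the face on the receiving end, instead of sending full encoded frames. Every number below is an assumption for illustration, not an NVIDIA measurement.

```python
# All numbers are illustrative assumptions, not measurements.
KEYPOINTS = 130                # facial landmarks tracked per frame
BYTES_PER_KEYPOINT = 2 * 4     # x and y coordinates as 32-bit floats
ENCODED_FRAME_BYTES = 15_000   # a typical compressed 720p video frame

keypoint_bytes = KEYPOINTS * BYTES_PER_KEYPOINT   # 1,040 bytes per frame
print(f"~{ENCODED_FRAME_BYTES / keypoint_bytes:.0f}x less data per frame")
```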

It can also unleash new capabilities, such as the ability for a speaker to make direct eye contact with 20 different people watching simultaneously, or real-time language translation.

“So you can speak to me in the future in Japanese and I can speak to you in English, and you will hear Japanese and I will hear English,” Huang said.

Enabling Big Dreams

Melding human judgment with AI, adaptive autonomous machines and tightly connected teams of people will give entrepreneurs, philanthropists and others with “big wishes and big dreams” the ability to tackle ever more ambitious challenges, Huang said.

Son said AI is playing a role in the development of technologies that can detect heart attacks before they happen, speed the discovery of new treatments for cancer, and eliminate car accidents, among others.

“It is a big help,” Son said. “So we should be having a big smile, and big excitement, welcoming this revolution in AI.”

Listening to the Siren Call: Virginia Tech Works with NVIDIA to Test AV Interactions with Emergency Vehicles

Move over, self-driving cars.

The Virginia Tech Transportation Institute has received a federal grant from the U.S. Department of Transportation to study how autonomous vehicles interact with emergency vehicles and public safety providers.

VTTI, the second largest transportation research institute in the country, will use vehicles equipped with the NVIDIA DRIVE Hyperion platform to conduct these evaluations on public roads.

Emergencies or unexpected events can change the flow of traffic in a matter of minutes. Human drivers are trained to listen for sirens and watch for police officers directing traffic; however, this behavior may not be as instinctual to autonomous vehicles.

VTTI is working with NVIDIA as well as a consortium of automotive manufacturers organized through Crash Avoidance Metrics Partners (CAMP LLC) to study challenging and dynamic scenarios involving automated driving systems, such as encounters with public safety providers. Participating CAMP LLC members include General Motors, Ford, Nissan and Daimler. The team will also address ways to facilitate communications between these systems and with their supporting physical infrastructure.

The project will identify solutions and build highly automated Level 4 reference vehicles retrofitted with autonomous driving technology, as well as connected infrastructure to support them. In the final phase, VTTI and its partners will hold demonstrations on Washington, D.C., area highways to showcase the technology safely navigating challenging scenarios.

Safety First

Safely maneuvering around emergency vehicles, including ambulances, fire trucks and police vehicles, is a key component of everyday driving.

The consequences of not doing so are serious. Over the past decade, ambulances experienced an average of about 170 crash-related delays per year, costing precious time in responding to and transporting emergency patients.

Additionally, not moving over for emergency vehicles is illegal. Every state has a “move over” law, requiring vehicles passing stopped police cars, ambulances or utility vehicles to vacate the nearest lane and slow down while passing.

Autonomous vehicles must comply with these traffic norms to deploy safely and at scale. AV fleets will need to identify emergency vehicles, recognize whether lights or sirens are active and obey officers directing traffic.
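
To give a sense of the detection problem, here is a minimal, illustrative sketch that flags a possible siren with a spectral-energy heuristic. Production AV stacks use trained models; the frequency band and threshold here are assumptions, and the synthetic sweep merely stands in for a recording.

```python
import numpy as np
from scipy import signal

def siren_likelihood(audio: np.ndarray, sample_rate: int) -> float:
    """Return the fraction of spectral energy inside a typical siren band."""
    freqs, _, spec = signal.spectrogram(audio, fs=sample_rate)
    power = spec.sum(axis=1)                 # total power per frequency bin
    band = (freqs >= 500) & (freqs <= 1800)  # sirens sweep roughly in this range
    return float(power[band].sum() / power.sum())

# Example: a synthetic 600-1200 Hz wail stands in for a recorded siren.
sr = 16_000
t = np.arange(2 * sr) / sr
inst_freq = 900 + 300 * np.sin(2 * np.pi * 0.5 * t)  # Hz, sweeping up and down
wail = np.sin(2 * np.pi * np.cumsum(inst_freq) / sr)
print(siren_likelihood(wail, sr) > 0.3)  # True: most energy sits in-band
```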

Leveling Up with DRIVE Hyperion

VTTI will use Level 4 autonomous test vehicles to study how this technology will behave in emergency scenarios, helping determine what measures must be taken in development and infrastructure to facilitate seamless and safe interactions.

NVIDIA DRIVE Hyperion is an autonomous vehicle data collection and perception platform. It consists of a complete sensor suite and NVIDIA DRIVE AGX Pegasus in-car AI computing platform, along with the full software stack for autonomous driving, driver monitoring and visualization.

The high-performance, energy-efficient DRIVE AGX Pegasus AI computer achieves an unprecedented 320 trillion operations per second. The platform is designed and built for Level 4 and Level 5 autonomous systems, like those being tested in the VTTI pilot.

The DRIVE Hyperion developer kit can be integrated into a test vehicle, letting developers use DRIVE AV software and perform data collection for their autonomous vehicle fleet.

Using this technology, researchers can quickly develop a test fleet without having to build from the ground up. The ability to collect data with DRIVE Hyperion also ensures an efficient pipeline of conducting tests and studying the results.

With the collaboration among NVIDIA, VTTI and its automotive partners, this pilot program is slated to significantly advance research on the safe integration of autonomous driving technology into U.S. roadways.

Government Execs Must Be ‘Brave, Bold and Benevolent’ to Hasten AI Adoption, Experts Say

Hundreds of technology experts from the public and private sectors, as well as academia, came together earlier this month for NVIDIA’s GPU Technology Conference to discuss U.S. federal agency adoption of AI and how industry can help.

Leaders from dozens of organizations, including the U.S. Department of Defense, the Federal Communications Commission, Booz Allen Hamilton, Lockheed Martin, NASA, RAND Corporation, Carnegie Mellon and Stanford Universities, participated in approximately 100 sessions that were part of GTC’s Public Sector Summit.

They talked about the need to accelerate efforts in a number of areas, including education, access to data and computing resources, funding and research. Many encouraged government executives and federal agencies to act with a greater sense of urgency.

“Artificial intelligence is inspiring the greatest technological transformation of our time,” Anthony Robbins, vice president of federal at NVIDIA, said during a panel on “Building an AI Nation” with former federal CIO Suzette Kent and retired Lt. Gen. Jack Shanahan. “The train has left the station,” Robbins said. “In fact, it’s already roaring down the tracks.”

“We’re in a critical period with the United States government,” Shanahan said during the panel. “We have to get it right. This is a really important conversation.”

Just Get Started

These and other speakers cited a common theme: agencies need to get started now. But this requires a cultural shift, which Kent spoke of as one of the most significant challenges she experienced as federal CIO.

“In any kind of transformation the tech is often the easy part,” she said, noting that the only way to get people on board across the U.S. government — one of the largest and most complex institutions in the world — is to focus on return on investment for agency missions.

In a session titled “Why Leaders in Both the Public and Private Sectors Should Embrace Exponential Changes in Data, AI, and Work,” David Bray, a former Senior National Intelligence Service Executive and FCC CIO who is now the founder and inaugural director of the GeoTech Center at the Atlantic Council, tackled the same topic, saying that worker buy-in is important not just for AI adoption but also for its sustainability.

“If you only treat this as a tech endeavor, you might get it right, but it won’t stick,” Bray said. “What you’re doing isn’t an add-on to agencies — this is transforming how the government does business.”

Make Data a Priority

Data strategy came up repeatedly as an important component to the future of federal AI.

Less than an hour before a GTC virtual fireside chat with Robbins and DoD Chief Data Officer David Spirk, the Pentagon released its first enterprise data strategy.

The document positions the DoD to become a data-centric organization, but implementing the strategy won’t be easy, Spirk said. It will require an incredible amount of orchestration among the numerous data pipelines flowing in and out of the Pentagon and its service branches.

“Data is a strategic asset,” he said. “It’s a high-interest commodity that has to be leveraged for both immediate and lasting advantage.”

Kent and Shanahan agreed that data is critical. Kent said agency chief data officers need to think of the federal government as one large enterprise with a huge repository of data rather than silos of information, considering how the government at large can leverage an agency’s data.

Invest in Exponential Change

The next few years will be crucial for the government’s adoption of AI, and experts say more investment will be needed.

To start, the government will have to address the AI talent gap. The exact extent of the talent shortage is difficult to measure, but job website statistics show that demand for workers far exceeds supply, according to a study by Georgetown University’s Center for Security and Emerging Technology.

One way to do that is for the federal government to set aside money to help small and mid-sized universities develop AI programs.

Another is to provide colleges and universities with access to more computing resources and federal datasets, according to John Etchemendy, co-director of the Institute for Human-Centered Artificial Intelligence at Stanford University, who spoke during a session with panelists from academia and think tanks. That access would accelerate R&D and help students become more proficient at data science.

Government investment in AI research will also be key in helping agencies move forward. Without a significant increase, the United States will fall behind, Martijn Rasser, senior fellow at the Center for a New American Security, said during the panel discussion. CNAS recently released a report calling for $25 billion per year in federal AI investment by 2025.

The RAND Corp. last year released a congressionally mandated assessment of the DoD’s AI posture, which recommended that defense agencies create mechanisms for connecting AI researchers, technology developers and operators. By letting operators take part in the process at every stage, they’ll grow more confident in and trusting of the new technology, Danielle Tarraf, senior information scientist at RAND, told the panel. Tarraf highlighted that many of these recommendations are applicable government-wide.

Michael McQuade, vice president of research at Carnegie Mellon University and a member of the Defense Innovation Board, argued that it’s crucial to start delivering solutions now. “Building confidence is key,” he said, to continuing to justify growing support from authorizers and appropriators for crucial national investments in AI.

By framing AI in the context of both broad AI innovations and individual use cases, government can elucidate why it’s so important to “knock down barriers and get the money in the right place,” said Seth Center, a senior advisor to the National Security Commission on AI.

An overarching theme from the Public Sector Summit was that government technology leaders need to heighten their focus on AI, with a sense of urgency.

Kent and Shanahan noted that training and tools are available for the government to make the transition smoothly, and begin using the technology. Both said that by partnering with industry and academia, the federal government can make an AI-equipped America a reality.

Bray, noting the breakneck pace of change from new technologies, said that shifts which once took decades are now possible in a fraction of the time. He urged government executives to take an active role in guiding those changes, encouraging them to be “brave, bold and benevolent.”

Old Clips Become Big Hits: AI-Enhanced Videos Script a Success Story

After his AI-enhanced vintage video went viral, Denis Shiryaev launched a startup to bottle the magic. Soon anyone who wants to dust off their old films may be able to use his neural networks.

The story began with a blog on Telegram by the Russian entrepreneur currently living in Gdańsk, Poland.

“Some years ago I started to blog about machine learning and play with different algorithms to understand it better,” said Shiryaev, who later founded the startup known by its web address, neural.love. “I was generating music with neural nets and staging Turing tests of chatbots — silly, fun stuff.”

Eight months ago, he tried an AI experiment with a short, grainy film he’d found on YouTube of a train in 1896 arriving in a small French town. He used open-source software and AI models to upscale it to 4K resolution and smooth its jerky motion from 15 frames per second to 60 fps.
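
As a rough sketch of the mechanics, here is what upscaling plus frame-rate doubling looks like in plain OpenCV. The bicubic resize and 50/50 blend below are crude stand-ins for the neural restoration and interpolation models the pipeline actually relies on, and the file names are placeholders.

```python
import cv2

cap = cv2.VideoCapture("input.mp4")  # placeholder file name
fps = cap.get(cv2.CAP_PROP_FPS)
out_size = (3840, 2160)              # 4K UHD (width, height)
out = cv2.VideoWriter("enhanced.mp4", cv2.VideoWriter_fourcc(*"mp4v"),
                      fps * 2, out_size)  # double the frame rate

ok, prev = cap.read()
if not ok:
    raise SystemExit("could not read input video")
prev = cv2.resize(prev, out_size, interpolation=cv2.INTER_CUBIC)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    frame = cv2.resize(frame, out_size, interpolation=cv2.INTER_CUBIC)
    out.write(prev)
    # Naive in-between frame: a learned interpolator would synthesize true
    # intermediate motion rather than a simple cross-fade.
    out.write(cv2.addWeighted(prev, 0.5, frame, 0.5, 0))
    prev = frame

out.write(prev)
cap.release()
out.release()
```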

“I posted it one night, and when I woke up the next day, I had a million views and was on the front page of Reddit. My in-box was exploding with messages on Facebook, LinkedIn — everywhere,” he said of the responses to the video.

Not wanting to be a one-hit wonder, he found other vintage videos to work with. He ran them through an expanding workflow of AI models, including DeOldify for adding color and other open-source algorithms for removing visual noise.

His inbox stayed full.

He got requests from a media company in the Netherlands to enhance an old film of Amsterdam. Displays in the Moscow subway played a vintage video he enhanced of the Russian capital. A Polish documentary maker knocked on his door, too.

Even the USA was calling. PBS asked for help with footage for an interactive website for its documentary on women’s suffrage.

“They had a colorist for the still images, but even with advances in digital painting, colorizing film takes a ridiculous amount of time,” said Elizabeth Peck, the business development manager for the five-person team at neural.love.

NVIDIA RTX Speeds AI Work 60x+

Along the way, Shiryaev and his team got an upgrade to the latest NVIDIA RTX 6000 GPU. It could process 60 minutes of video in less time than an earlier graphics card took to handle 90 seconds of footage.

The RTX card also trains the team’s custom AI models in eight hours, a job that used to take a week.

“This card shines, it’s amazing how helpful the right hardware can be,” he said.

AI Film Editor in the Cloud

The bright lights the team sees these days are flashing images of a future consumer service in the public cloud. An online self-serve AI video editor could help anyone with a digital copy of an old VHS tape or Super8 reel in their closet.

“People were sending us really touching footage — the last video of their father, a snippet from a Michael Jackson concert they attended as a teenager. The amount of personal interest people had in what we were doing was striking,” explained Peck.

It’s still early days. Shiryaev expects it will take a few months to get a beta service ready for launch.

Meanwhile, neural.love is steering clear of the VC world. “We don’t want to take money until we are sure there is a market and we have a working product,” he said.

You can hear more of neural.love’s story in a webinar hosted by PNY Technologies, an NVIDIA partner.

What Is Computer Vision?

Computer vision has become so good that the days of managers screaming at umpires in baseball games in disputes over pitches may become a thing of the past.

That’s because developments in image classification, along with parallel processing, make it possible for computers to see a baseball whizzing by at 95 miles per hour. Pair that with image detection to help pinpoint the ball’s location, and you’ve got a potent umpiring tool that’s hard to argue with.

But computer vision doesn’t stop at baseball.

What Is Computer Vision?

Computer vision is a broad term for the work done with deep neural networks to develop human-like vision capabilities for applications, most often run on NVIDIA GPUs. It can include specific training of neural nets for segmentation, classification and detection using images and videos for data.

Major League Baseball is testing AI-assisted calls at the plate using computer vision. Judging balls and strikes on pitches that can take just 0.4 seconds to reach the plate isn’t easy for human eyes. The job is better suited to a camera feed run through image networks on NVIDIA GPUs, which can process split-second decisions at a rate of more than 60 frames per second.
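
A quick back-of-the-envelope check on those numbers, assuming a 95 mph pitch over the regulation 60.5 feet from the pitching rubber to home plate:

```python
MPH_TO_FTPS = 5280 / 3600        # 1 mph = ~1.467 ft/s
speed_ftps = 95 * MPH_TO_FTPS    # ~139.3 ft/s
flight_time = 60.5 / speed_ftps  # ~0.43 s for the ball to reach the plate
frames = flight_time * 60        # frames available to judge the pitch
print(f"{flight_time:.2f} s of flight, ~{frames:.0f} frames at 60 fps")
```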

Hawk-Eye, based in London, is making this a reality in sports. Hawk-Eye’s NVIDIA GPU-powered ball tracking and SMART software is deployed in more than 20 sports, including baseball, basketball, tennis, soccer, cricket, hockey and NASCAR.

Yet computer vision can do much more than just make sports calls.

What Is Computer Vision Beyond Sports?

Computer vision can handle many more tasks. Developed with convolutional neural networks, computer vision can perform segmentation, classification and detection for a myriad of applications.

Computer vision has infinite applications. With industry changes from computer vision spanning sports, automotive, agriculture, retail, banking, construction, insurance and beyond, much is at stake.

3 Things to Know About Computer Vision

  • Segmentation: Image segmentation is about classifying pixels as belonging to a certain category, such as a car, road or pedestrian. It’s widely used in self-driving vehicle applications, including the NVIDIA DRIVE software stack, to show roads, cars and people. Think of it as a visualization technique that makes what computers do easier for humans to understand.
  • Classification: Image classification is used to determine what’s in an image. Neural networks can be trained to identify dogs or cats, for example, or many other things with a high degree of precision given sufficient data.
  • Detection: Image detection allows computers to localize where objects exist. It draws rectangular bounding boxes that fully contain each object. A detector might be trained to see where cars or people are within an image, for instance; see the sketch after this list.
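
As a concrete example of detection in practice, here is a minimal sketch using an off-the-shelf torchvision model. This is generic open-source usage, not the NVIDIA DRIVE stack; the image file name is a placeholder, and the pretrained COCO weights download on first run.

```python
import torch
import torchvision
from torchvision.io import read_image
from torchvision.transforms.functional import convert_image_dtype

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

img = convert_image_dtype(read_image("street.jpg"), torch.float)  # [C,H,W] in [0,1]
with torch.no_grad():
    (pred,) = model([img])  # one result dict per input image

# Each box is [x1, y1, x2, y2]; keep only confident detections.
keep = pred["scores"] > 0.8
print(pred["boxes"][keep])
print(pred["labels"][keep])
```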

What You Need to Know: Segmentation, Classification and Detection

  • Segmentation: good at delineating objects; used in self-driving vehicles.
  • Classification: answers “is it a cat or a dog?”; classifies with precision.
  • Detection: answers “where does it exist in space?”; recognizes things for safety.

NVIDIA’s Deep Learning Institute offers courses such as Getting Started with Image Segmentation and Fundamentals of Deep Learning for Computer Vision.

NVIDIA Xavier Shatters Records, Excels in Back-to-Back Performance Benchmarks

AI-powered vehicles aren’t a future vision; they’re a reality today. And they’re only truly possible on NVIDIA Xavier, our system-on-a-chip for autonomous vehicles.

The key to these cutting-edge vehicles is inference — the process of running AI models in real time to extract insights from enormous amounts of data. And when it comes to in-vehicle inference, NVIDIA Xavier has been proven the best — and the only — platform capable of real-world AI processing, yet again.

NVIDIA GPUs smashed performance records for AI inference in data center and edge computing systems in the latest round of MLPerf benchmarks, the only consortium-based and peer-reviewed inference performance tests. NVIDIA Xavier extended the performance leadership it demonstrated in the first AI inference tests, held last year, while supporting all of the new use cases added for energy-efficient, edge-compute SoCs.

Inference for intelligent vehicles is a full-stack problem. It requires the ability to process sensor data and run the neural networks, operating system and applications all at once. This level of complexity calls for a huge investment, which NVIDIA continues to make.

The new NVIDIA A100 GPU, based on the NVIDIA Ampere architecture, also rose above the competition, outperforming CPUs by up to 237x in data center inference. This level of performance in the data center is critical for training and validating the neural networks that will run in the car at the massive scale necessary for widespread deployment.

Achieving this performance isn’t easy. In fact, most of the companies that have proven the ability to run a full self-driving stack run it on NVIDIA.

The MLPerf tests demonstrate that AI processing capability lies beyond the pure number of trillions of operations per second (TOPS) a platform can achieve. It’s the architecture, flexibility and accompanying tools that define a compute platform’s AI proficiency.

Xavier Stands Alone

The inference tests represent a suite of benchmarks to assess the type of complex workload needed for software-defined vehicles. Many different benchmark tests across multiple scenarios, including edge computing, verify whether a solution can perform exceptionally at not just one task, but many, as would be required in a modern car.

In this year’s tests, NVIDIA Xavier dominated results for energy-efficient, edge compute SoCs — processors necessary for edge computing in vehicles and robots — in both single-stream and multi-stream inference tasks.

Xavier is the current generation SoC powering the brain of the NVIDIA DRIVE AGX computer for both self-driving and cockpit applications. It’s an AI supercomputer, incorporating six different types of processors, including CPU, GPU, deep learning accelerator, programmable vision accelerator, image signal processor and stereo/optical flow accelerator.

Thanks to its architecture, Xavier stands alone when it comes to AI inference. Its programmable deep neural network accelerators optimally support the operations for high-throughput and low-latency DNN processing. Because these algorithms are still in their infancy, we built the Xavier compute platform to be flexible so it could handle new iterations.

Supporting new and diverse neural networks requires processing different types of data, through a wide range of neural nets. Xavier’s tremendous processing performance handles this inference load to deliver a safe automated or autonomous vehicle with an intelligent user interface.

Proven Effective with Industry Adoption

As the industry compares TOPS of performance to determine autonomous capabilities, it’s important to test how these platforms can handle actual AI workloads.

Xavier’s back-to-back leadership in the industry’s leading inference benchmarks demonstrates NVIDIA’s architectural advantage for AI application development. Our SoC really is the only proven platform up to this unprecedented challenge.

The vast majority of automakers, tier 1 suppliers and startups are developing on the DRIVE platform. NVIDIA has gained much experience running real-world AI applications on its partners’ platforms. All these learnings and improvements will further benefit the NVIDIA DRIVE ecosystem.

Raising the Bar Further

It doesn’t stop there. NVIDIA Orin, our next-generation SoC, is coming next year, delivering nearly 7x the performance of Xavier with incredible energy-efficiency.

Xavier is compatible with software tools such as CUDA and TensorRT to support the optimization of DNNs to target hardware. These same tools will be available on Orin, which means developers can seamlessly transfer past software development onto the latest hardware.

NVIDIA has shown time and again that it’s the only solution for real-world AI and will continue to drive transformational technology such as self-driving cars for a safer, more advanced future.

NVIDIA Inference Performance Surges as AI Use Crosses Tipping Point

Inference, the work of using AI in applications, is moving into mainstream uses, and it’s running faster than ever.

NVIDIA GPUs won all tests of AI inference in data center and edge computing systems in the latest round of the industry’s only consortium-based and peer-reviewed benchmarks.

NVIDIA A100 and T4 GPUs swept all data center tests in the October 2020 round of the MLPerf inference benchmarks.

NVIDIA A100 Tensor Core GPUs extended the performance leadership we demonstrated in the first AI inference tests held last year by MLPerf, an industry benchmarking consortium formed in May 2018.

The A100, introduced in May, outperformed CPUs by up to 237x in data center inference, according to the MLPerf Inference 0.7 benchmarks. NVIDIA’s small-form-factor, energy-efficient T4 GPUs beat CPUs by up to 28x in the same tests.

To put this into perspective, a single NVIDIA DGX A100 system with eight A100 GPUs now provides the same performance as nearly 1,000 dual-socket CPU servers on some AI applications.
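
That comparison checks out with simple arithmetic, under the stated assumption that the 237x figure is measured against a single CPU socket:

```python
# Assumption: 237x is A100 vs. one CPU socket; then one 8-GPU DGX A100
# lines up against dual-socket CPU servers as follows.
a100_vs_cpu_socket = 237
gpus_per_dgx = 8
sockets_per_server = 2
equivalent_servers = a100_vs_cpu_socket * gpus_per_dgx / sockets_per_server
print(equivalent_servers)  # 948.0, i.e. "nearly 1,000" dual-socket servers
```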

DGX A100 performance vs. CPU servers: leadership performance enables cost efficiency in taking AI from research to production.

This round of benchmarks also saw increased participation, with 23 organizations submitting — up from 12 in the last round — and with NVIDIA partners using the NVIDIA AI platform to power more than 85 percent of the total submissions.

A100 GPUs, Jetson AGX Xavier Take Performance to the Edge

While A100 is taking AI inference performance to new heights, the benchmarks show that T4 remains a solid inference platform for mainstream enterprise, edge servers and cost-effective cloud instances. In addition, the NVIDIA Jetson AGX Xavier builds on its leadership position in power-constrained, SoC-based edge devices by supporting all of the new use cases.

In the October 2020 MLPerf inference edge tests, Jetson AGX Xavier joined the A100 and T4 GPUs in leadership performance.

The results also point to our vibrant, growing AI ecosystem, which submitted 1,029 results using NVIDIA solutions representing 85 percent of the total submissions in the data center and edge categories. The submissions demonstrated solid performance across systems from partners including Altos, Atos, Cisco, Dell EMC, Dividiti, Fujitsu, Gigabyte, Inspur, Lenovo, Nettrix and QCT.

Expanding Use Cases Bring AI to Daily Life

Backed by broad support from industry and academia, MLPerf benchmarks continue to evolve to represent industry use cases. Organizations that support MLPerf include Arm, Baidu, Facebook, Google, Harvard, Intel, Lenovo, Microsoft, Stanford, the University of Toronto and NVIDIA.

The latest benchmarks introduced four new tests, underscoring the expanding landscape for AI. The suite now scores performance in natural language processing, medical imaging, recommendation systems and speech recognition as well as AI use cases in computer vision.

You need go no further than a search engine to see the impact of natural language processing on daily life.

“The recent AI breakthroughs in natural language understanding are making a growing number of AI services like Bing more natural to interact with, delivering accurate and useful results, answers and recommendations in less than a second,” said Rangan Majumder, vice president of search and artificial intelligence at Microsoft.

“Industry-standard MLPerf benchmarks provide relevant performance data on widely used AI networks and help make informed AI platform buying decisions,” he said.

AI Helps Save Lives in the Pandemic

The impact of AI in medical imaging is even more dramatic. For example, startup Caption Health uses AI to ease the job of taking echocardiograms, a capability that helped save lives in U.S. hospitals in the early days of the COVID-19 pandemic.

That’s why thought leaders in healthcare AI view models like 3D U-Net, used in the latest MLPerf benchmarks, as key enablers.

“We’ve worked closely with NVIDIA to bring innovations like 3D U-Net to the healthcare market,” said Klaus Maier-Hein, head of medical image computing at DKFZ, the German Cancer Research Center.

“Computer vision and imaging are at the core of AI research, driving scientific discovery and representing core components of medical care. And industry-standard MLPerf benchmarks provide relevant performance data that helps IT organizations and developers accelerate their specific projects and applications,” he added.

Commercially, AI use cases like recommendation systems, also part of the latest MLPerf tests, are already making a big impact. Alibaba used recommendation systems last November to transact $38 billion in online sales on Singles Day, its biggest shopping day of the year.

Adoption of NVIDIA AI Inference Passes Tipping Point

AI inference passed a major milestone this year.

NVIDIA GPUs delivered a total of more than 100 exaflops of AI inference performance in the public cloud over the last 12 months, overtaking inference on cloud CPUs for the first time. Total cloud AI inference compute capacity on NVIDIA GPUs has been growing roughly tenfold every two years.
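
Taken at face value, tenfold growth every two years works out to roughly 3.2x per year. A purely illustrative projection from the figure cited above:

```python
capacity = 100.0           # exaflops of cloud AI inference, per the post
annual_growth = 10 ** 0.5  # tenfold every two years ~= 3.16x per year
for year in range(1, 5):
    capacity *= annual_growth
    print(f"year {year}: ~{capacity:,.0f} exaflops")
```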

GPUs in major cloud services now account for more inference performance than CPUs, a tipping point for AI acceleration in the cloud.

With the high performance, usability and availability of NVIDIA GPU computing, a growing set of companies across industries such as automotive, cloud, robotics, healthcare, retail, financial services and manufacturing now rely on NVIDIA GPUs for AI inference. They include American Express, BMW, Capital One, Dominos, Ford, GE Healthcare, Kroger, Microsoft, Samsung and Toyota.

NVIDIA's AI inference customers
Companies across key industry sectors use NVIDIA’s AI platform for inference.

Why AI Inference Is Hard

Use cases for AI are clearly expanding, but AI inference is hard for many reasons.

New kinds of neural networks like generative adversarial networks are constantly being spawned for new use cases and the models are growing exponentially. The best language models for AI now encompass billions of parameters, and research in the field is still young.

These models need to run in the cloud, in enterprise data centers and at the edge of the network. That means the systems that run them must be highly programmable, executing with excellence across many dimensions.

NVIDIA founder and CEO Jensen Huang compressed the complexities in one word: PLASTER. Modern AI inference requires excellence in Programmability, Latency, Accuracy, Size of model, Throughput, Energy efficiency and Rate of learning.

To power excellence across every dimension, we’re focused on constantly evolving our end-to-end AI platform to handle demanding inference jobs.

AI Requires Performance, Usability

An accelerator like the A100, with its third-generation Tensor Cores and the flexibility of its multi-instance GPU architecture, is just the beginning. Delivering leadership results requires a full software stack.

NVIDIA’s AI software begins with a variety of pretrained models ready to run AI inference. Our Transfer Learning Toolkit lets users optimize these models for their particular use cases and datasets.

NVIDIA TensorRT optimizes trained models for inference. With 2,000 optimizations, it’s been downloaded 1.3 million times by 16,000 organizations.

The NVIDIA Triton Inference Server provides a tuned environment to run these AI models supporting multiple GPUs and frameworks. Applications just send the query and the constraints — like the response time they need or throughput to scale to thousands of users — and Triton takes care of the rest.
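
As a minimal sketch of what “just send the query” looks like from the application side, here is the Triton Python HTTP client in its simplest form. The model name and tensor names (“my_model”, “INPUT”, “OUTPUT”) are placeholders that depend on the actual deployment.

```python
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

batch = np.random.rand(1, 3, 224, 224).astype(np.float32)
inp = httpclient.InferInput("INPUT", list(batch.shape), "FP32")
inp.set_data_from_numpy(batch)

# The application only describes the request; Triton handles scheduling,
# batching and GPU placement on the server side.
result = client.infer(model_name="my_model", inputs=[inp])
print(result.as_numpy("OUTPUT").shape)
```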

These elements run on top of CUDA-X AI, a mature set of software libraries based on our popular accelerated computing platform.

Getting a Jump-Start with Applications Frameworks

Finally, our application frameworks jump-start adoption of enterprise AI across different industries and use cases.

Our frameworks include NVIDIA Merlin for recommendation systems, NVIDIA Jarvis for conversational AI, NVIDIA Maxine for video conferencing, NVIDIA Clara for healthcare, and many others available today.

These frameworks, along with our optimizations for the latest MLPerf benchmarks, are available in NGC, our hub for GPU-accelerated software that runs on all NVIDIA-certified OEM systems and cloud services.

In this way, the hard work we’ve done benefits the entire community.

Taking It to the MAX: Adobe Photoshop Gets New NVIDIA AI-Powered Neural Filters

3D artists and video editors have long used real-time AI features to improve their work and speed up how they turn inspiration into finished art. Now, those benefits are extending to Adobe Photoshop users with the introduction of GPU-accelerated neural filters.

These AI-powered tools, leveraging NVIDIA RTX GPUs with the Adobe creative applications, are being showcased at Adobe MAX, which is bringing together creators from around the world virtually through Oct. 22.

Neural filters are a new feature set that lets artists try AI-powered tools to explore creative ideas and make amazing, complex adjustments to images in just seconds. Done manually, these adjustments would take hours of tedious work; AI makes them almost instantaneous.

NVIDIA GPUs accelerate nearly all these new filters. We’ll explain how to get the most out of them at a session at Adobe MAX.

Adobe and NVIDIA are closely collaborating on AI technology to improve creative tools in Creative Cloud and Photoshop. This collaboration includes the new Smart Portrait Filter, which is powered by NVIDIA StyleGAN2 technology and runs best on NVIDIA RTX GPUs.

With Smart Portrait in Photoshop, artists can easily experiment, making edits to facial characteristics, such as gaze direction and lighting angles, simply by dragging a slider. These types of complex corrections and adjustments would typically entail multiple manual steps. But Smart Portrait uses AI — based on a deep neural network developed by NVIDIA Research and trained on numerous portrait images — to achieve breathtaking results in seconds.
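
Conceptually, slider-driven edits like these amount to nudging a portrait’s latent code along a learned direction and regenerating the image. Here is an illustrative numpy sketch of that idea; the random “generator” and edit direction below are stand-ins, not the StyleGAN2 network or Adobe’s implementation.

```python
import numpy as np

LATENT_DIM = 512
rng = np.random.default_rng(0)

def generator(z: np.ndarray) -> np.ndarray:
    """Stand-in for a trained generator: maps a latent vector to a tiny 'image'."""
    w = np.random.default_rng(42).standard_normal((64 * 64 * 3, LATENT_DIM))
    return (w @ z).reshape(64, 64, 3)

z = rng.standard_normal(LATENT_DIM)         # latent code behind one portrait
gaze_dir = rng.standard_normal(LATENT_DIM)  # placeholder "gaze" edit direction

slider = 0.4  # conceptually, what the Photoshop slider controls
edited = generator(z + slider * gaze_dir)
print(edited.shape)  # (64, 64, 3): the portrait, regenerated in one pass
```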

This gives artists greater flexibility with their images long after the photo shoot has ended. And they retain full control over their work with a non-destructive workflow, while the effects blend naturally into the original image.

Video editors in Adobe Premiere Pro also benefit from NVIDIA RTX GPUs, with virtually all GPU-accelerated decoding offloaded to dedicated VRAM, resulting in smoother video playback and sharper responsiveness when scrubbing through footage, especially with ultra-high-resolution and multistream footage. Advanced AI-powered features such as Scene Edit Detection and Auto Reframe automate manual tasks, speeding up final exports and saving editors valuable time.

For the first time, Adobe Premiere Elements adds GPU acceleration, enabling instant playback of popular video effects such as lens flares and animated overlays, video cropping, and overall real-time playback, all without prerendering, which rapidly speeds up the editing process.

AI and GPU-accelerated workflows are the result of the ongoing collaboration between teams at NVIDIA and Adobe. Over the years, we’ve developed tools and helped accelerate workflows in Adobe Photoshop, Lightroom, Premiere Pro, After Effects, Illustrator, Dimension, Substance Alchemist, Substance Painter and Substance Designer. As Adobe continues to build amazing software experiences, NVIDIA will be there to power and accelerate them, giving creators more time for creativity.

Working Smarter: Tapping into AI to Boost Creativity

Adobe is hosting more than 350 sessions across 10 tracks at this year’s MAX conference. Creators looking for new ways to improve their work while cutting down on the tasks that take away precious time can learn how to get the most out of new AI tools across Adobe creative apps.

NVIDIA is hosting an Adobe MAX session where attendees will discover new ways to tap into the power of AI. Whether a graphic artist, video editor, motion graphics professional, Photoshop professional, concept artist or other creator who needs computing speed, you’ll leave with valuable, time-saving tips.

Session attendees will discover:

  • How to improve creations with more precision, clarity and quality
  • How to let AI do the work under the hood, giving you more time to create
  • The NVIDIA Studio ecosystem of tools and products designed to supercharge creativity

Visit the session catalog to learn more and tune in on Wednesday, Oct. 21, from 11-11:30 a.m. Pacific time.

October Studio Driver Ready For Download

Alongside these updates to Adobe Photoshop, Adobe Premiere Pro and Adobe Premiere Elements, there are new releases of Adobe After Effects, Adobe Substance Alchemist, Notch and Daz 3D — all supported in the new October NVIDIA Studio Driver. Studio Drivers are built specifically for creators and tested extensively against top creative apps and workflows.

Download the new Studio Driver (release 456.71) today through GeForce Experience or from the driver download page.

Learn more about NVIDIA Studio hardware and software for creators on the NVIDIA Studio website.

You can also stay up to date on the latest apps through NVIDIA’s Studio YouTube channel, featuring tutorials, tips and tricks by industry-leading artists.

NVIDIA, Zoom CEOs Talk the Future of Work

Amid a pandemic that’s put much of the world’s work, learning, even family reunions online, two of the leaders who have made today’s virtual world possible met Thursday on, where else — Zoom — to talk about what’s next.

NVIDIA CEO Jensen Huang and Zoom CEO Eric Yuan spoke Thursday at the online video conference company’s Zoomtopia user event in a casual, wide-ranging conversation.

“If not for what Zoom has done, the recent pandemic would be unbearable,” Huang said. The present situation, Huang explained, “has accelerated the future, it has brought forward the urgency of a digital future.”

In front of a virtual audience from all over the globe, the two spoke about their entrepreneurial journeys, NVIDIA’s unique company culture, and how NVIDIA is knitting together the virtual and real worlds to help NVIDIA employees collaborate.

Huang’s appearance at Zoomtopia follows NVIDIA’s GPU Technology Conference last week, where Huang outlined NVIDIA’s view of data center computing and introduced new technologies in data centers, edge AI and healthcare.

Yuan playfully wore a leather jacket matching Huang’s trademark attire and briefly displayed a sleek virtual kitchen as his backdrop, paying tribute to the presentations Huang has given from his kitchen this year. He began the conversation by asking Huang about his early life.

“I was fortunate that my parents worked hard and all of the people I was surrounded by worked hard,” Huang said, adding that he was focused on school and sports, especially table tennis. “To me working is living, working is breathing and, to me, it’s not work at all — I enjoy it too much.”

It’s NVIDIA’s mission, Huang said, that continues to motivate him, as the company has gone from inventing the GPU to pioneering new possibilities in robotics and AI.

The common thread: since the beginning, NVIDIA has had a singular focus on accelerated computing.

“We built a time machine,” Huang said, touching on NVIDIA’s work in drug discovery as an example. “So, instead of a particular drug taking 10 years to discover, we would like drugs and therapies and vaccines to be discovered in months.”

Zoom and NVIDIA, Huang said, share a “singular purpose and a sense of destiny,” one that has made the world a better place.

“The fact that Zoom existed and your vision came to reality means we can be together even if we’re not together,” Huang said.

“You can look at your work and imagine the impact on society and the benefits it will bring and somehow it’s your job to do it,” Huang said. “If you don’t do it, no one else will — and that’s thrilling to me, I love that feeling.”

Yuan also asked about NVIDIA’s culture and the future of work, one which Huang believes will increasingly meld the physical and the virtual worlds.

Today, for example, we might report to our colleagues that we’ll be WFH, or working from home.

Office lingo, however, may change to reflect the new reality, where being at the office isn’t necessarily the norm.

“In the future we will say we’re ‘going to the office,’” Huang said. “Today we say ‘WFH,’ in the future we will say ‘GTO.’”

Tools such as Zoom enable colleagues to meet, face to face, from home, from an office, from anywhere in the world.

More and more, work will take place in a hybrid of office and home, physical and virtual reality.

NVIDIA, for example, has created a platform called NVIDIA Omniverse that lets colleagues working in different places and with different tools collaborate in real time.

“The Adobe world can connect to the Catia world and so on,” Huang said. “We can have different designers working with each other at their homes.”

The present moment has “brought forward the urgency of a digital future, it has made us aware that completely physical is not sufficient, that completely digital is not sufficient,” Huang said. “The future is a mixed reality world.”
