Accelerating Research: Texas A&M Launching Grace Supercomputer for up to 20x Boost

Texas A&M University is turbocharging the research of its scientists and engineers with a new supercomputer powered by NVIDIA A100 Tensor Core GPUs.

The Grace supercomputer — named to honor programming pioneer Grace Hopper — handles almost 20 times the processing of its predecessor, Ada.

Texas A&M’s Grace supercomputing cluster arrives as user demand at its High Performance Research Computing unit has doubled since 2016. The unit now supports more than 2,600 researchers seeking to run workloads.

The Grace system promises to enhance A&M’s research capabilities and competitiveness. It will allow A&M researchers to keep pace with current trends across multiple fields enabled by advances in high performance computing.

Researchers at Texas A&M University will have access to the new system in December. Dell Technologies is the primary vendor for the Grace system.

Boosting Research

The new Grace cluster will enable researchers to make leaps in AI and data science with HPC. It also provides a foundation for developing a workforce in exascale computing, which processes a billion billion calculations per second.

The Grace system is set to support the university’s researchers in drug design, materials science, geosciences, fluid dynamics, biomedical applications, biophysics, genetics, quantum computing, population informatics and autonomous vehicles.

“The High Performance Research Computing lab has a mission to infuse computational and data analysis technologies into the research and creative activities of every academic discipline at Texas A&M,” said Honggao Liu, executive director of the facility.

In 2019, research at Texas A&M University brought in $952 million in revenue for a university known for its support of scholarship and scientific discovery.

Petaflops Performance

Like its namesake Grace Hopper — whose work in the 1950s led to the COBOL programming language — the new Grace supercomputing cluster will be focused on fueling innovation and making groundbreaking discoveries.

The system delivers up to 6.2 petaflops of processing power. A one-petaflops computer can perform one quadrillion floating point operations per second (flops).
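
For a sense of scale, here is a rough back-of-the-envelope calculation in Python; the 6.2 petaflops figure is the system’s peak rate, and real workloads run below peak:

```python
# Rough arithmetic only: how long a fixed batch of work would take at Grace's peak rate.
peak_flops = 6.2e15    # 6.2 petaflops = 6.2 quadrillion floating point operations per second
operations = 1e18      # a billion billion operations (an "exa"-sized workload)

seconds = operations / peak_flops
print(f"{seconds:.0f} seconds at peak")  # roughly 161 seconds
```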

In addition to the A100 GPUs, the Grace cluster is powered by single-precision NVIDIA T4 Tensor Core GPUs and NVIDIA RTX 6000 GPUs in combination with more than 900 Dell EMC PowerEdge servers.

The system is interconnected with NVIDIA Mellanox high-speed, low-latency HDR InfiniBand fabric, enabling smart in-network computing engines for accelerated computing. It also includes 5.12PB of usable high-performance DDN storage running the Lustre parallel file system.

NVIDIA A100 Launches on AWS, Marking Dawn of Next Decade in Accelerated Cloud Computing

Amazon Web Services’ first GPU instance debuted 10 years ago, with the NVIDIA M2050. At that time, CUDA-based applications were focused primarily on accelerating scientific simulations, with the rise of AI and deep learning still a ways off.

Since then, AWS has steadily added to its stable of cloud GPU instances, which has included the K80 (p2), K520 (g2), M60 (g3), V100 (p3/p3dn) and T4 (g4).

With its new P4d instance generally available today, AWS is paving the way for another bold decade of accelerated computing powered by the latest NVIDIA A100 Tensor Core GPU.

The P4d instance delivers AWS’s highest performance, most cost-effective GPU-based platform for machine learning training and high performance computing applications. The instances reduce the time to train machine learning models by up to 3x with FP16 and up to 6x with TF32 compared to the default FP32 precision.
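
Those gains come from running eligible operations in reduced precision. As a rough illustration (not AWS- or P4d-specific code), here is how a PyTorch training loop typically opts into TF32 and FP16 mixed precision; the model, shapes and learning rate are placeholders:

```python
import torch

# On A100 ("Ampere") GPUs, TF32 can be used for matmuls and convolutions;
# these flags make that choice explicit (defaults vary by PyTorch version).
torch.backends.cuda.matmul.allow_tf32 = True
torch.backends.cudnn.allow_tf32 = True

model = torch.nn.Linear(1024, 1024).cuda()                  # placeholder model
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
scaler = torch.cuda.amp.GradScaler()                        # loss scaling for FP16

for _ in range(10):
    x = torch.randn(64, 1024, device="cuda")
    target = torch.randn(64, 1024, device="cuda")
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():                         # run eligible ops in FP16
        loss = torch.nn.functional.mse_loss(model(x), target)
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
```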

They also provide exceptional inference performance. NVIDIA A100 GPUs just last month swept the MLPerf Inference benchmarks — providing up to 237x faster performance than CPUs.

Each P4d instance features eight NVIDIA A100 GPUs, and with AWS UltraClusters, customers can get on-demand, scalable access to over 4,000 GPUs at a time using AWS’s Elastic Fabric Adapter (EFA) and scalable, high-performance storage with Amazon FSx. P4d offers 400Gbps networking and uses NVIDIA technologies such as NVLink, NVSwitch, NCCL and GPUDirect RDMA to further accelerate deep learning training workloads. NVIDIA GPUDirect RDMA on EFA ensures low-latency networking by passing data from GPU to GPU between servers without going through the CPU and system memory.
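
Most of that plumbing is transparent to application code. A minimal sketch of what a multi-GPU job looks like from the framework side, assuming PyTorch with the NCCL backend and a launch via torchrun (the EFA and GPUDirect RDMA path is configured at the system level, not in the script):

```python
import os
import torch
import torch.distributed as dist

# NCCL handles GPU-to-GPU communication; on suitably configured systems it uses
# NVLink/NVSwitch within a node and an RDMA-capable fabric across nodes.
dist.init_process_group(backend="nccl")
local_rank = int(os.environ["LOCAL_RANK"])   # set by torchrun
torch.cuda.set_device(local_rank)

x = torch.ones(1, device="cuda")
dist.all_reduce(x)                           # sums the tensor across all ranks
print(f"rank {dist.get_rank()} of {dist.get_world_size()}: {x.item()}")
```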

In addition, the P4d instance is supported in many AWS services, including Amazon Elastic Container Services, Amazon Elastic Kubernetes Service, AWS ParallelCluster and Amazon SageMaker. P4d can also leverage all the optimized, containerized software available from NGC, including HPC applications, AI frameworks, pre-trained models, Helm charts and inference software like TensorRT and Triton Inference Server.
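
Provisioning a P4d instance is an ordinary EC2 request. A minimal boto3 sketch, with a placeholder AMI ID and key pair standing in for a real Deep Learning AMI and your own credentials:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder; use a Deep Learning AMI for your region
    InstanceType="p4d.24xlarge",       # 8x NVIDIA A100 GPUs per instance
    MinCount=1,
    MaxCount=1,
    KeyName="my-key-pair",             # placeholder key pair name
)
print(response["Instances"][0]["InstanceId"])
```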

P4d instances are now available in US East and West, and coming to additional regions soon. The instances can be purchased as On-Demand, with Savings Plans, with Reserved Instances, or as Spot Instances.

The first decade of GPU cloud computing has brought over 100 exaflops of AI compute to the market. With the arrival of the Amazon EC2 P4d instance powered by NVIDIA A100 GPUs, the next decade of GPU cloud computing is off to a great start.

Together, NVIDIA and AWS are making it possible to keep pushing the boundaries of AI across a wide array of applications. We can’t wait to see what customers will do with it.

Visit AWS and get started with P4d instances today.

‘Marbles at Night’ Illuminates Future of Graphics in NVIDIA Omniverse

Reflections have never looked so good.

Artists are using NVIDIA RTX GPUs to take real-time graphics to the next level, creating visuals with rendered surfaces and light reflections to produce incredible photorealistic details.

The Marbles RTX technology demo, first previewed at GTC in March, ran on a single NVIDIA RTX 8000 GPU. It showcased how complex physics can be simulated in a real-time, ray-traced world.

During the GeForce RTX 30 Series launch event in September, NVIDIA CEO Jensen Huang unveiled a more challenging take on the NVIDIA Marbles RTX project: staging the scene to take place at night and illustrate the effect of hundreds of dynamic, animated lights.

Marbles at Night is a physics-based demo created with dynamic, ray-traced lights and over 100 million polygons. Built in NVIDIA Omniverse and running on a single GeForce RTX 3090 GPU, the final result showed hundreds of different light sources at night, with each marble reflecting lights differently and all happening in real time.

Beyond demonstrating the latest technologies for content creation, Marbles at Night showed how creative professionals can now seamlessly collaborate and design simulations with incredible lighting, accurate reflections and real-time ray tracing with path tracing.

Pushing the Limits of Creativity

A team of artists from NVIDIA collaborated and built the project in NVIDIA Omniverse, the real-time graphics and simulation platform based on NVIDIA RTX GPUs and Pixar’s Universal Scene Description.

Working in Omniverse, the artists were able to upload, store and access all the assets in the cloud, allowing them to easily share files across teams. They could send a link, open the file and work on the assets at the same time.
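
That collaboration model builds on USD composition, where a scene references shared assets and layers rather than copying them. A minimal sketch using the open-source pxr Python bindings (the file names are hypothetical, and this is plain USD rather than Omniverse-specific code):

```python
from pxr import Usd, UsdGeom

# Create a stage and reference a shared asset by path instead of duplicating it.
stage = Usd.Stage.CreateNew("marbles_scene.usda")
UsdGeom.Xform.Define(stage, "/World")

marble = stage.DefinePrim("/World/Marble_01")
marble.GetReferences().AddReference("assets/marble_glass.usda")    # hypothetical asset file

# Another artist's edits (lighting, layout) can compose on top as a sublayer.
stage.GetRootLayer().subLayerPaths = ["layers/lighting_overrides.usda"]
stage.GetRootLayer().Save()
```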

Every single asset in Marbles at Night was handmade, modeled and textured from scratch. Marbles RTX Creative Director Gavriil Klimov bought over 200 art supplies and took reference photos of each to capture realistic details, from paint splatter to wear and tear. Texturing — the process of adding surface detail and materials to each model — was done entirely in Substance Painter, with multiple variations created for each asset.

In Omniverse, the artists manually crafted everything in the Marbles project using RTX Renderer and a variety of creative applications like 3ds Max, Maya, Cinema 4D, ZBrush and Blender. The simulation platform enabled the creative team to view all content at the highest possible quality in real time, resulting in shorter cycles and more iterations.

Nearly a dozen people were working on the project remotely from locations as far afield as California, New York, Australia and Russia. Although the team members were located around the world, Omniverse allowed them to work on scenes simultaneously thanks to Omniverse Nucleus. Running on premises or in the cloud, the module enabled the teams to collaborate in real time across vast distances.

The collaboration-based workflow, combined with the fact the project’s assets were stored in the cloud, made it easier for everyone to access the files and edit in real time.

The final technology demo completed in Omniverse resulted in over 500GB worth of texture data, over 100 unique objects, more than 5,000 meshes and about 100 million polygons.

The Research Behind the Project

NVIDIA Research recently released a paper on the reservoir-based spatiotemporal importance resampling (ReSTIR) technique, which details how to render dynamic direct lighting and shadows from millions of area lights in real time. Inspired by this technique, the NVIDIA rendering team, led by distinguished engineer Ignacio Llamas, implemented an algorithm that allowed Klimov and team to place as many lights as they wanted for the Marbles demo, without being constrained by lighting limits.

“Before, we were limited to using less than 10 lights. But today with Omniverse capabilities using RTX, we were able to place as many lights as we wanted,” said Klimov. “That’s the beauty of it — you can creatively decide what the limit is that works for you.”

Traditionally, artists and developers achieved complex lighting using baked solutions. NVIDIA Research, in collaboration with the Visual Computing Lab at Dartmouth College, produced the research paper that dives into how artists can enable direct lighting from millions of moving lights.

The approach requires no complex light structure, no baking and no global scene parameterization. All the lights can cast shadows, everything can move arbitrarily and new emitters can be added dynamically. This technique is implemented using DirectX Ray Tracing accelerated by NVIDIA RTX and NVIDIA RT Cores.
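
At the core of the technique is a small data structure: a reservoir that streams candidate light samples and keeps one with probability proportional to its weight, in constant memory. A minimal Python sketch of that update, for illustration only (the paper’s full algorithm adds spatial and temporal reuse and runs on the GPU):

```python
import random

class Reservoir:
    """Streaming weighted reservoir: keeps one candidate, chosen with
    probability proportional to its weight, without storing the stream."""
    def __init__(self):
        self.sample = None   # currently selected candidate (e.g., a light)
        self.w_sum = 0.0     # running sum of candidate weights

    def update(self, candidate, weight):
        self.w_sum += weight
        # Replace the kept sample with probability weight / w_sum.
        if self.w_sum > 0 and random.random() < weight / self.w_sum:
            self.sample = candidate

# Stream a million candidate lights and keep one, in O(1) memory.
reservoir = Reservoir()
for light_id in range(1_000_000):
    importance = random.random()       # stand-in for an importance estimate
    reservoir.update(light_id, importance)
print(reservoir.sample)
```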

Get more insights into the NVIDIA Research that’s helping professionals simplify complex design workflows, and learn about the latest announcement of Omniverse, now in open beta.

Data Makes It Beta: Roborace Returns for Second Season with Updateable Self-Driving Vehicles Powered by NVIDIA DRIVE

Amid the COVID-19 pandemic, live sporting events are mostly being held without fans in the stands. At Roborace, they’re removing humans from the field as well, without sacrificing any of the action.

Roborace is envisioning autonomous racing for the future. Teams compete using standardized cars powered by their own AI algorithms in a series of races testing capabilities such as speed and object detection. Last month, the startup launched its Season Beta, with entirely autonomous races streamed live online for a virtual audience.

This second season features Roborace’s latest vehicle, the Devbot 2.0, a state-of-the-art race car capable of both human and autonomous operation and powered by the NVIDIA DRIVE AGX platform. Devbot was designed by legendary movie designer Daniel Simon, who has envisioned worlds straight out of science fiction for films such as Tron, Thor and Captain America.

Each Season Beta event consists of two races. In the first, teams race their Devbots autonomously with no obstacles. Next, the challenge is to navigate the same track with virtual objects, some of which are time bonuses and others are time penalties. The team with the fastest overall time wins.

One of the virtual objects a vehicle must navigate in Roborace Season Beta.

These competitions are intended to put self-driving technology to the test in the extreme conditions of performance racing, pushing innovation in both AI and the sport of racing itself. Teams from universities around the world have been able to leverage critical data from each race, developing smarter and faster algorithms for each new event.

From the Starting Line

Season Beta’s inaugural event provided the ideal launching point for iterative AI algorithm development.

The first two races took place on Sept. 24 and 25 at the world-renowned Anglesey National Circuit in Wales. Teams from the Massachusetts Institute of Technology, Carnegie Mellon University, University Graz Austria, Technical University Pisa and commercial racing team Acronis all took to the track to put their AV algorithms through their paces.

Racing stars such as Dario Franchitti and commentators Andy McEwan and Matt Roberts helped deliver the electrified atmosphere of high-speed competition to the virtual racing event.

Radio interruptions and other issues kept the teams from completing the race. However, the learnings from Wales are set to make the second installment of Roborace Season Beta a can’t-miss event.

Ready for Round Two

The autonomous racing season continues this week at Thruxton Circuit in Hampshire, U.K. The same set of teams will be joined by a guest team from Warwick Engineering Society and Warwick University for a second chance at AV racing glory.

Sergio Pininfarina, CEO of the legendary performance brand of the same name, will join the suite of television presenters to provide color commentary on the races.

The high-performance, energy-efficient NVIDIA DRIVE AGX platform makes it easy to enhance self-driving algorithms and add new deep neural networks for continuous improvement. By leveraging the NVIDIA AI compute platform, Roborace teams can quickly update their vehicles from last month’s race for optimal performance.

Be sure to tune in live from Oct. 28 to Oct. 30 to witness the future of racing in action, catch up on highlights and mark your calendar for the rest of Roborace Season Beta.

SoftBank Group, NVIDIA CEOs on What’s Next for AI

Good news: AI will soon be everywhere. Better news: it will be put to work by everyone.

Sharing a vision of AI enabling humankind, NVIDIA CEO Jensen Huang on Wednesday joined Masayoshi Son, chairman and CEO of SoftBank Group Corp., as a guest for Son’s keynote at the annual SoftBank World conference.

“For the first time, we’re going to democratize software programming,” Huang said. “You don’t have to program the computer; you just have to teach the computer.”

Son is a legendary entrepreneur, investor and philanthropist who pioneered the development of the PC industry, the internet and mobile computing in Japan.

A Technological Jewel

The online conversation comes six weeks after NVIDIA agreed to acquire Arm from SoftBank in a transaction valued at $40 billion. Huang described Arm as “one of the technology world’s great jewels” in his conversation with Son.

“The reason why combining Arm and NVIDIA makes so much sense is because we can then bring NVIDIA’s AI to the most popular edge CPU in the world,” Huang said while seated beside the fireplace of his Silicon Valley home.

Arm has long provided its intellectual property to many chipset vendors, who deploy it on many different applications, in many different systems-on-a-chip, or SoCs, Son explained.

Huang said the combined company would “absolutely” continue this.

An Ecosystem Like No Other

“Of course the CPU is fantastic, energy-efficient and it’s improving all the time, thanks to incredible computer scientists building the best CPU in the world,” Huang said. “But the true value of Arm is in the ecosystem of Arm — the 500 companies that use Arm today.”

That ecosystem is growing fast. Son said it won’t be long until a trillion Arm-based SoCs have been shipped. Making NVIDIA AI available to those trillion chipsets “will be an amazing combination,” Son said.

“Our dream is to bring NVIDIA’s AI to Arm’s ecosystem, and the only way to bring it to the Arm ecosystem is through all of the existing customers, licensees and partners,” Huang said. “We would like to offer the licensees more, even more.”

Arm, Son said, provides toolsets to enable companies to create SoCs for very different applications, from game machines and home appliances to robots that fly or run or swim. These devices will, in turn, communicate with cloud AI “so each of them become smarter.”

“That’s the reason why combining Arm and NVIDIA makes so much sense because we can then bring NVIDIA AI to the most popular edge CPU in the world,” Huang said.

‘Intelligence at Scale’

That will allow even more companies to participate in the AI boom.

“AI is a new kind of computer science; the software is different, the chips are different, the methodology is different,” Huang said.

It’s a huge shift, Son agreed.

First, Son said, computers enabled advancements in calculation; next, came the ability to store massive amounts of data; and “now, finally, computers are the ears and the eyes, so they can recognize voice and speech.”

“It’s intelligence at scale,” Huang responded. “That’s the reason why this age of AI is such an important time.”

Extending Human Capabilities

Son and Huang spoke about how enterprises worldwide — from AstraZeneca and GlaxoSmithKline in drug discovery, to American Express in banking, to Walmart in retail, to Microsoft in software, to Kubota in agriculture — are now adopting NVIDIA AI tools.

Huang cited a new generation of systems, called recommender systems, that are already helping humans sort through the vast array of choices available online, in everything from the clothes they wear to the music they listen to.

Huang and Son described such systems — and AI more broadly — as a way to extend human capabilities.

“Humans will always be in the loop,” Huang said.

“We have a heart, a desire to be nice to other humans,” Son said. “We will utilize AI as a tool, for our happiness, for our joy — humans will choose which recommendations to take.”

‘Perpetually Learning Machines’

Such intelligent systems are being woven into the world around us, through smart, connected systems, or “edge AI,” Son said, which will work hand in hand with powerful cloud AI systems able to aggregate input from devices in the real world.

The result will be a “learning loop,” or “perpetually learning machines,” Huang said.

“The cloud side will aggregate information from edge AI, it will become smarter and smarter,” Son said.

Democratizing AI

One result: computing will finally be democratized, Huang said. Only a small number of people want to pursue a career as a computer programmer, but “everyone can teach,” Huang said.

“You [will] just ask the computer, ‘This is what I want to do, can you give me a solution?,’” Son responded. “Then the computer will give us the solution and the tools to make it happen.”

Such tools will amplify Japan’s strengths in precision engineering and manufacturing.

“This is the time of AI for Japan,” Huang said.

Huang described how, in tools such as NVIDIA Omniverse, a digital factory can be continually optimized.

“This robotic factory will be filled with robots that will build robots in virtual reality,” Huang said. “The whole thing will be simulated … and when you come in in the morning the whole thing will be optimized more than it was when you went to bed.”

Once it’s ready, a physical twin of the digital factory can be built and continually optimized with lessons learned in the virtual one.

“It’s the concept of the metaverse,” Son said, referring to the shared online world imagined in Neal Stephenson’s 1992 cyberpunk classic, “Snow Crash.”

“… and it’s right in front of us now,” Huang added.

Connecting Humans with One Another

In addition to extending human capabilities, AI will help humans better connect with one another.

Video conferencing will soon be the vast majority of the world’s internet traffic, Huang said. Using AI to reconstruct a speaker’s facial expressions can “reduce bandwidth” by a factor of 10.

It can also unleash new capabilities, such as the ability for a speaker to make direct eye contact with 20 different people watching simultaneously, or real-time language translation.

“So you can speak to me in the future in Japanese and I can speak to you in English, and you will hear Japanese and I will hear English,” Huang said.

Enabling Big Dreams

Melding human judgment with AI, adaptive autonomous machines and tightly connected teams of people will give entrepreneurs, philanthropists and others with “big wishes and big dreams” the ability to tackle ever more ambitious challenges, Huang said.

Son said AI is playing a role in the development of technologies that can detect heart attacks before they happen, speed the discovery of new treatments for cancer, and eliminate car accidents, among others.

“It is a big help,” Son said. “So we should be having a big smile, and big excitement, welcoming this revolution in AI.”

Listening to the Siren Call: Virginia Tech Works with NVIDIA to Test AV Interactions with Emergency Vehicles

Move over, self-driving cars.

The Virginia Tech Transportation Institute has received a federal grant from the U.S. Department of Transportation to study how autonomous vehicles interact with emergency vehicles and public safety providers.

VTTI, the second largest transportation research institute in the country, will use vehicles equipped with the NVIDIA DRIVE Hyperion platform to conduct these evaluations on public roads.

Emergencies or unexpected events can change the flow of traffic in a matter of minutes. Human drivers are trained to listen for sirens and watch for police officers directing traffic; however, this behavior may not be as instinctual to autonomous vehicles.

VTTI is working with NVIDIA as well as a consortium of automotive manufacturers organized through Crash Avoidance Metrics Partners (CAMP LLC) to study challenging and dynamic scenarios involving automated driving systems, such as encounters with public safety providers. Participating CAMP LLC members include General Motors, Ford, Nissan and Daimler. The team will also address ways to facilitate communications between these systems and with their supporting physical infrastructure.

The project will identify solutions and build highly automated Level 4 reference vehicles retrofitted with autonomous driving technology, as well as connected infrastructure to support them. In the final phase, VTTI and its partners will hold demonstrations on Washington, D.C., area highways to showcase the technology safely navigating challenging scenarios.

Safety First

Safely maneuvering around emergency vehicles, including ambulances, fire trucks and police vehicles, is a key component to everyday driving.

The consequences of not doing so are serious. Over the past decade, ambulances experienced an average of about 170 crash-related delays per year, costing precious time in responding to and transporting emergency patients.

Additionally, failing to move over for emergency vehicles is illegal. Every state has a “move over” law requiring drivers to vacate the nearest lane and slow down when passing stopped police cars, ambulances or utility vehicles.

Autonomous vehicles must comply with these traffic norms to deploy safely and at scale. AV fleets will need to identify emergency vehicles, recognize whether lights or sirens are active and obey officers directing traffic.

Leveling Up with DRIVE Hyperion

VTTI will use Level 4 autonomous test vehicles to study how this technology will behave in emergency scenarios, helping determine what measures must be taken in development and infrastructure to facilitate seamless and safe interactions.

NVIDIA DRIVE Hyperion is an autonomous vehicle data collection and perception platform. It consists of a complete sensor suite and NVIDIA DRIVE AGX Pegasus in-car AI computing platform, along with the full software stack for autonomous driving, driver monitoring and visualization.

The high-performance, energy-efficient DRIVE AGX Pegasus AI computer achieves an unprecedented 320 trillion operations per second. The platform is designed and built for Level 4 and Level 5 autonomous systems, like those being tested in the VTTI pilot.

The DRIVE Hyperion developer kit can be integrated into a test vehicle, letting developers use DRIVE AV software and perform data collection for their autonomous vehicle fleet.

Using this technology, researchers can quickly develop a test fleet without having to build from the ground up. The ability to collect data with DRIVE Hyperion also ensures an efficient pipeline of conducting tests and studying the results.

With the collaboration among NVIDIA, VTTI and its automotive partners, this pilot program is slated to significantly advance research on the safe integration of autonomous driving technology into U.S. roadways.

Government Execs Must Be ‘Brave, Bold and Benevolent’ to Hasten AI Adoption, Experts Say

Hundreds of technology experts from the public and private sectors, as well as academia, came together earlier this month for NVIDIA’s GPU Technology Conference to discuss U.S. federal agency adoption of AI and how industry can help.

Leaders from dozens of organizations, including the U.S. Department of Defense, the Federal Communications Commission, Booz Allen Hamilton, Lockheed Martin, NASA, RAND Corporation, Carnegie Mellon University and Stanford University, participated in approximately 100 sessions that were part of GTC’s Public Sector Summit.

They talked about the need to accelerate efforts in a number of areas, including education, access to data and computing resources, funding and research. Many encouraged government executives and federal agencies to act with a greater sense of urgency.

“Artificial intelligence is inspiring the greatest technological transformation of our time,” Anthony Robbins, vice president of federal at NVIDIA, said in a panel with former Federal CIO Suzette Kent and retired Lt. Gen. Jack Shanahan during one of the talks focused on “Building an AI Nation.” “The train has left the station,” Robbins said. “In fact, it’s already roaring down the tracks.”

“We’re in a critical period with the United States government,” Shanahan said during the panel. “We have to get it right. This is a really important conversation.”

Just Get Started

These and other speakers cited a common theme: agencies need to get started now. But this requires a cultural shift, which Kent spoke of as one of the most significant challenges she experienced as federal CIO.

“In any kind of transformation the tech is often the easy part,” she said, noting that the only way to get people on board across the U.S. government — one of the largest and most complex institutions in the world — is to focus on return on investment for agency missions.

In a session titled “Why Leaders in Both the Public and Private Sectors Should Embrace Exponential Changes in Data, AI, and Work,” David Bray, a former Senior National Intelligence Service Executive and FCC CIO who is now the founder and inaugural director of the GeoTech Center at the Atlantic Council, tackled the same topic, saying that worker buy-in was important not just for AI adoption but also for its sustainability.

“If you only treat this as a tech endeavor, you might get it right, but it won’t stick,” Bray said. “What you’re doing isn’t an add-on to agencies — this is transforming how the government does business.”

Make Data a Priority

Data strategy came up repeatedly as an important component to the future of federal AI.

Less than an hour before a GTC virtual fireside chat with Robbins and DoD Chief Data Officer David Spirk, the Pentagon released its first enterprise data strategy.

The document positions the DoD to become a data-centric organization, but implementing the strategy won’t be easy, Spirk said. It will require an incredible amount of orchestration among the numerous data pipelines flowing in and out of the Pentagon and its service branches.

“Data is a strategic asset,” he said. “It’s a high-interest commodity that has to be leveraged for both immediate and lasting advantage.”

Kent and Shanahan agreed that data is critical. Kent said agency chief data officers need to think of the federal government as one large enterprise with a huge repository of data rather than silos of information, considering how the government at large can leverage an agency’s data.

Invest in Exponential Change

The next few years will be crucial for the government’s adoption of AI, and experts say more investment will be needed.

To start, the government will have to address the AI talent gap. The exact extent of the talent shortage is difficult to measure, but job website statistics show that demand for workers far exceeds supply, according to a study by Georgetown University’s Center for Security and Emerging Technology.

One way to do that is for the federal government to set aside money to help small and mid-sized universities develop AI programs.

Another is to provide colleges and universities with access to more computing resources and federal datasets, according to John Etchemendy, co-director of the Institute for Human-Centered Artificial Intelligence at Stanford University, who spoke during a session with panelists from academia and think tanks. That would accelerate R&D and help students become more proficient at data science.

Government investment in AI research will also be key in helping agencies move forward. Without a significant increase, the United States will fall behind, Martijn Rasser, senior fellow at the Center for a New American Security, said during the panel discussion. CNAS recently released a report calling for $25 billion per year in federal AI investment by 2025.

The RAND Corp. released a congressionally mandated assessment of the DoD’s AI posture last year that recommended defense agencies need to create mechanisms for connecting AI researchers, technology developers and operators. By allowing operators to be part of the process at every stage, they’ll be more confident and trusting of the new technology, Danielle Tarraf, senior information scientist at RAND, told the panel. Tarraf highlighted that many of these recommendations were applicable government-wide.

Michael McQuade, vice president of research at Carnegie Mellon University and a member of the Defense Innovation Board, argued that it’s crucial to start delivering solutions now. “Building confidence is key,” he said, to justifying continued support from authorizers and appropriators for national investments in AI.

By framing AI in the context of both broad AI innovations and individual use cases, government can elucidate why it’s so important to “knock down barriers and get the money in the right place,” said Seth Center, a senior advisor to the National Security Commission on AI.

An overarching theme from the Public Sector Summit was that government technology leaders need to heighten their focus on AI, with a sense of urgency.

Kent and Shanahan noted that training and tools are available for the government to make the transition smoothly, and begin using the technology. Both said that by partnering with industry and academia, the federal government can make an AI-equipped America a reality.

Bray, noting the breakneck pace of change from new technologies, said that it usually takes decades for the kind of shifts that are now possible. He urged government executives to take an active role in guiding those changes, encouraging them to be “brave, bold and benevolent.”

Old Clips Become Big Hits: AI-Enhanced Videos Script a Success Story

After his AI-enhanced vintage video went viral, Denis Shiryaev launched a startup to bottle the magic. Soon anyone who wants to dust off their old films may be able to use his neural networks.

The story began with a blog on Telegram by the Russian entrepreneur currently living in Gdańsk, Poland.

“Some years ago I started to blog about machine learning and play with different algorithms to understand it better,” said Shiryaev, who later founded the startup known by its web address, neural.love. “I was generating music with neural nets and staging Turing tests of chatbots — silly, fun stuff.”

Eight months ago, he tried an AI experiment with a short, grainy film he’d found on YouTube of a train in 1896 arriving in a small French town. He used open-source software and AI models to upscale it to 4K resolution and smooth its jerky motion from 15 frames per second to 60 fps.
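
The building blocks are upscaling and frame interpolation. As a rough stand-in for the neural networks he actually used, the same two steps can be sketched with ffmpeg’s classical filters (the file names are hypothetical):

```python
import subprocess

# Classical stand-ins for the AI models: Lanczos upscaling to 4K, then
# motion-compensated interpolation from the source frame rate up to 60 fps.
subprocess.run([
    "ffmpeg", "-i", "train_1896.mp4",
    "-vf", "scale=3840:2160:flags=lanczos,minterpolate=fps=60",
    "train_4k_60fps.mp4",
], check=True)
```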

“I posted it one night, and when I woke up the next day, I had a million views and was on the front page of Reddit. My inbox was exploding with messages on Facebook, LinkedIn — everywhere,” he said of the response to the video.

Not wanting to be a one-hit wonder, he found other vintage videos to work with. He ran them through an expanding workflow of AI models, including DeOldify for adding color and other open-source algorithms for removing visual noise.

His inbox stayed full.

He got requests from a media company in the Netherlands to enhance an old film of Amsterdam. Displays in the Moscow subway played a vintage video of the Russian capital that he had enhanced. A Polish documentary maker knocked on his door, too.

Even the USA was calling. PBS asked for help with footage for an interactive website for its documentary on women’s suffrage.

“They had a colorist for the still images, but even with advances in digital painting, colorizing film takes a ridiculous amount of time,” said Elizabeth Peck, the business development manager for the five-person team at neural.love.

NVIDIA RTX Speeds AI Work 60x+

Along the way, Shiryaev and team got an upgrade to the latest NVIDIA RTX 6000 GPU. It could process 60 minutes of video in less time than an earlier graphics card took to handle 90 seconds of footage.

The RTX card also trains the team’s custom AI models in eight hours, a job that used to take a week.

“This card shines, it’s amazing how helpful the right hardware can be,” he said.

AI Film Editor in the Cloud

The bright lights the team sees these days are flashing images of a future consumer service in the public cloud. An online self-serve AI video editor could help anyone with a digital copy of an old VHS tape or Super8 reel in their closet.

“People were sending us really touching footage — the last video of their father, a snippet from a Michael Jackson concert they attended as a teenager. The amount of personal interest people had in what we were doing was striking,” explained Peck.

It’s still early days. Shiryaev expects it will take a few months to get a beta service ready for launch.

Meanwhile, neural.love is steering clear of the VC world. “We don’t want to take money until we are sure there is a market and we have a working product,” he said.

You can hear more of neural.love’s story in a webinar hosted by PNY Technologies, an NVIDIA partner.

What Is Computer Vision?

Computer vision has become so good that the days of baseball managers screaming at umpires over disputed pitches may be numbered.

That’s because developments in image classification, along with parallel processing, make it possible for computers to see a baseball whizzing by at 95 miles per hour. Pair that with image detection to locate balls in space, and you’ve got a potent umpiring tool that’s hard to argue with.

But computer vision doesn’t stop at baseball.

What Is Computer Vision?

Computer vision is a broad term for the work done with deep neural networks to develop human-like vision capabilities for applications, most often run on NVIDIA GPUs. It can include specific training of neural nets for segmentation, classification and detection using images and videos as data.

Major League Baseball is testing AI-assisted calls at the plate using computer vision. Judging balls and strikes on pitches that can take just 0.4 seconds to reach the plate isn’t easy for human eyes. The job is better suited to a camera feed processed by image networks on NVIDIA GPUs, which can make split-second decisions at a rate of more than 60 frames per second.

Hawk-Eye, based in London, is making this a reality in sports. Hawk-Eye’s NVIDIA GPU-powered ball tracking and SMART software is deployed in more than 20 sports, including baseball, basketball, tennis, soccer, cricket, hockey and NASCAR.

Yet computer vision can do much more than just make sports calls.

What Is Computer Vision Beyond Sports?

Computer vision can handle many more tasks. Built with convolutional neural networks, it can perform segmentation, classification and detection for myriad applications.

With computer vision driving changes in industries spanning sports, automotive, agriculture, retail, banking, construction, insurance and beyond, much is at stake.

3 Things to Know About Computer Vision

  • Segmentation: Image segmentation classifies pixels as belonging to a certain category, such as a car, road or pedestrian. It’s widely used in self-driving vehicle applications, including the NVIDIA DRIVE software stack, to show roads, cars and people. Think of it as a visualization technique that makes what computers do easier for humans to understand.
  • Classification: Image classification is used to determine what’s in an image. Neural networks can be trained to identify dogs or cats, for example, or many other things with a high degree of precision given sufficient data.
  • Detection: Image detection allows computers to localize where objects exist. It places rectangular bounding boxes that fully contain an object. A detector might be trained to see where cars or people are within an image, for instance (a minimal code sketch appears at the end of this post).

What You Need to Know: Segmentation, Classification and Detection

Segmentation                  | Classification             | Detection
Good at delineating objects   | Is it a cat or a dog?      | Where does it exist in space?
Used in self-driving vehicles | Classifies with precision  | Recognizes things for safety

NVIDIA’s Deep Learning Institute offers courses such as Getting Started with Image Segmentation and Fundamentals of Deep Learning for Computer Vision.
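
For a concrete feel for detection, referenced in the list above, here is a minimal sketch using a pretrained torchvision detector; the image path is a placeholder and the weights argument assumes a recent torchvision release:

```python
import torch
from torchvision import models, transforms
from PIL import Image

# Pretrained detector; each prediction holds bounding boxes, class labels and scores.
model = models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

image = Image.open("street.jpg").convert("RGB")      # placeholder input image
tensor = transforms.functional.to_tensor(image)

with torch.no_grad():
    prediction = model([tensor])[0]

for box, label, score in zip(prediction["boxes"], prediction["labels"], prediction["scores"]):
    if score > 0.8:
        print(label.item(), [round(v) for v in box.tolist()], round(score.item(), 2))
```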

NVIDIA Xavier Shatters Records, Excels in Back-to-Back Performance Benchmarks

AI-powered vehicles aren’t a future vision; they’re a reality today. And they’re only truly possible on NVIDIA Xavier, our system-on-a-chip for autonomous vehicles.

The key to these cutting-edge vehicles is inference — the process of running AI models in real time to extract insights from enormous amounts of data. And when it comes to in-vehicle inference, NVIDIA Xavier has once again proven to be the best — and the only — platform capable of real-world AI processing.

NVIDIA GPUs smashed performance records across AI inference in data center and edge computing systems in the latest round of MLPerf benchmarks, the only consortium-based and peer-reviewed inference performance tests. NVIDIA Xavier extended the performance leadership it demonstrated in the first AI inference tests, held last year, while supporting all of the new use cases added for energy-efficient edge compute SoCs.

Inferencing for intelligent vehicles is a full-stack problem. It requires the ability to process sensors and run the neural networks, operating system and applications all at once. This high level of complexity calls for a huge investment, which NVIDIA continues to make.

The new NVIDIA A100 GPU, based on the NVIDIA Ampere architecture, also rose above the competition, outperforming CPUs by up to 237x in data center inference. This level of performance in the data center is critical for training and validating the neural networks that will run in the car at the massive scale necessary for widespread deployment.

Achieving this performance isn’t easy. In fact, most of the companies that have proven the ability to run a full self-driving stack run it on NVIDIA.

The MLPerf tests demonstrate that AI processing capability lies beyond the pure number of trillions of operations per second (TOPS) a platform can achieve. It’s the architecture, flexibility and accompanying tools that define a compute platform’s AI proficiency.

Xavier Stands Alone

The inference tests represent a suite of benchmarks to assess the type of complex workload needed for software-defined vehicles. Many different benchmark tests across multiple scenarios, including edge computing, verify whether a solution can perform exceptionally at not just one task, but many, as would be required in a modern car.

In this year’s tests, NVIDIA Xavier dominated results for energy-efficient, edge compute SoCs — processors necessary for edge computing in vehicles and robots — in both single-stream and multi-stream inference tasks.

Xavier is the current-generation SoC powering the brain of the NVIDIA DRIVE AGX computer for both self-driving and cockpit applications. It’s an AI supercomputer incorporating six different types of processors: CPU, GPU, deep learning accelerator, programmable vision accelerator, image signal processor and stereo/optical flow accelerator.

NVIDIA DRIVE AGX Xavier

Thanks to its architecture, Xavier stands alone when it comes to AI inference. Its programmable deep neural network accelerators optimally support the operations for high-throughput and low-latency DNN processing. Because these algorithms are still in their infancy, we built the Xavier compute platform to be flexible so it could handle new iterations.

Supporting new and diverse neural networks requires processing different types of data, through a wide range of neural nets. Xavier’s tremendous processing performance handles this inference load to deliver a safe automated or autonomous vehicle with an intelligent user interface.

Proven Effective with Industry Adoption

As the industry compares TOPS of performance to determine autonomous capabilities, it’s important to test how these platforms can handle actual AI workloads.

Xavier’s back-to-back leadership in the industry’s leading inference benchmarks demonstrates NVIDIA’s architectural advantage for AI application development. Our SoC really is the only proven platform up to this unprecedented challenge.

The vast majority of automakers, tier 1 suppliers and startups are developing on the DRIVE platform. NVIDIA has gained much experience running real-world AI applications on its partners’ platforms. All these learnings and improvements will further benefit the NVIDIA DRIVE ecosystem.

Raising the Bar Further

It doesn’t stop there. NVIDIA Orin, our next-generation SoC, is coming next year, delivering nearly 7x the performance of Xavier with incredible energy efficiency.

NVIDIA Orin

Xavier is compatible with software tools such as CUDA and TensorRT to support the optimization of DNNs to target hardware. These same tools will be available on Orin, which means developers can seamlessly transfer past software development onto the latest hardware.
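
As a rough illustration of that tool flow, here is a minimal TensorRT sketch that parses an ONNX model and builds an FP16-capable engine. The file names are placeholders, the exact API surface varies by TensorRT release, and deploying to DRIVE hardware involves additional platform-specific steps:

```python
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

# Parse a trained network exported to ONNX (placeholder file name).
with open("model.onnx", "rb") as f:
    if not parser.parse(f.read()):
        raise RuntimeError(parser.get_error(0))

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)        # allow reduced precision where accuracy permits
engine_bytes = builder.build_serialized_network(network, config)

with open("model.plan", "wb") as f:
    f.write(engine_bytes)
```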

NVIDIA has shown time and again that it’s the only solution for real-world AI and will continue to drive transformational technology such as self-driving cars for a safer, more advanced future.
