UK Biobank Advances Genomics Research with NVIDIA Clara Parabricks

UK Biobank is broadening scientists’ access to high-quality genomic data and analysis by making its massive dataset available in the cloud alongside NVIDIA GPU-accelerated analysis tools.

Used by more than 25,000 registered researchers around the world, UK Biobank is a large-scale biomedical database and research resource with deidentified genetic datasets, along with medical imaging and health record data, from more than 500,000 participants across the U.K.

Regeneron Genetics Center, the high-throughput sequencing center of biotech leader Regeneron, recently teamed up with UK Biobank to sequence and analyze the exomes — all protein-coding portions of the genome — of all the biobank participants.

The Regeneron team used NVIDIA Clara Parabricks, a software suite for secondary genomic analysis of next-generation sequencing data, during the exome sequencing process.

UK Biobank has released 450,000 of these exomes for access by approved researchers, and is now providing scientists six months of free access to Clara Parabricks through its cloud-based Research Analysis Platform. The platform, developed by bioinformatics company DNAnexus, lets scientists run Clara Parabricks on NVIDIA GPUs in the AWS cloud.

“As demonstrated by Regeneron, GPU acceleration with Clara Parabricks achieves the throughputs, speed and reproducibility needed when processing genomic datasets at scale,” said Dr. Mark Effingham, deputy CEO of UK Biobank. “There are a number of research groups in the U.K. who were pushing for these accelerated tools to be available in our platform for use with our extensive dataset.”

Regeneron Exome Research Accelerated by Clara Parabricks

Regeneron’s researchers used the DeepVariant Germline Pipeline from NVIDIA Clara Parabricks to run their analysis with a model specific to the genetic center’s workflow.

The researchers identified 12 million coding variants and hundreds of genes associated with health-related traits — certain genes were associated with increased risk for liver disease and eye disease, while others were linked to lower risk of diabetes and asthma.

The unique set of tools the researchers used for high-quality variant detection is available to UK Biobank registered users through the Research Analysis Platform. This capability will allow scientists to harmonize their own exome data with sequenced exome data from UK Biobank by running the same bioinformatics pipeline used to generate the initial reference dataset.
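For researchers who want to run the same workflow on their own exome data, a minimal sketch of how such a run might be scripted is below. It assumes a Parabricks installation that exposes the `pbrun` CLI; the `deepvariant_germline` subcommand and flag names follow the pattern in NVIDIA's Parabricks documentation, and the file names are placeholders, so check both against your installed version:

```python
import shutil
import subprocess

def build_germline_cmd(ref, fq1, fq2, out_bam, out_vcf):
    """Assemble a Parabricks DeepVariant germline run.

    The subcommand and flag names follow the documented
    `pbrun deepvariant_germline` pattern; verify them against
    the Parabricks version you have installed.
    """
    return [
        "pbrun", "deepvariant_germline",
        "--ref", ref,                  # reference FASTA
        "--in-fq", fq1, fq2,           # paired-end reads
        "--out-bam", out_bam,          # aligned, sorted reads
        "--out-variants", out_vcf,     # DeepVariant calls (VCF)
    ]

# Placeholder file names -- substitute your own data.
cmd = build_germline_cmd(
    "GRCh38.fasta", "sample_R1.fastq.gz", "sample_R2.fastq.gz",
    "sample.bam", "sample.vcf",
)

# Launch only if the Parabricks CLI is actually installed.
if shutil.which("pbrun"):
    subprocess.run(cmd, check=True)
else:
    print("pbrun not found; command would be:", " ".join(cmd))
```

Because the heavy lifting happens inside `pbrun` on the GPU, the script only launches the run when the binary is actually on the PATH.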

Cloud-Based Platform Improves Equity of Access

Researchers deciphering the genetic codes of humans — and of the viruses and bacteria that infect humans — can often be limited by the computational resources available to them.

UK Biobank is democratizing access by making its dataset open to scientists around the world, with a focus on further extending use by early-career researchers and those in low- and middle-income countries. Instead of researchers needing to download this huge dataset to use on their own compute resources, they can instead tap into UK Biobank’s cloud platform through a web browser.

“We were being contacted by researchers and clinicians who wanted to access UK Biobank data, but were struggling with access to the basic compute needed to work with even relatively small-scale data,” said Effingham. “The cloud-based platform provides access to the world-class technology needed for large-scale exome sequencing and whole genome sequencing analysis.”

Researchers using the platform pay only for the computational cost of their analyses and for storage of new data they generate from the biobank’s petabyte-scale dataset, Effingham said.

Using Clara Parabricks on DNAnexus helps reduce both the time and cost of this genomic analysis: a whole-exome analysis that would take nearly an hour of computation on a 32-vCPU machine completes in less than five minutes, while cost drops by approximately 40 percent.
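As a back-of-the-envelope check on those figures (the hourly rates below are illustrative assumptions, not published prices):

```python
cpu_minutes = 60   # ~1 hour on a 32-vCPU machine
gpu_minutes = 5    # < 5 minutes with GPU-accelerated Clara Parabricks

speedup = cpu_minutes / gpu_minutes
print(f"speedup: {speedup:.0f}x")        # prints "speedup: 12x"

# Hypothetical hourly rates (illustrative assumptions, not real
# prices): even if the GPU instance costs ~7x more per hour, the
# 12x speedup still cuts the per-exome cost by ~40 percent.
cpu_rate, gpu_rate = 1.00, 7.20          # $/hour, assumed
cpu_cost = cpu_rate * cpu_minutes / 60
gpu_cost = gpu_rate * gpu_minutes / 60
savings = 1 - gpu_cost / cpu_cost
print(f"cost savings: {savings:.0%}")    # prints "cost savings: 40%"
```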

Exome Sequencing Provides Insights for Precision Medicine

For researchers studying links between genetics and disease, exome sequencing is a critical tool — and the UK Biobank dataset includes nearly half a million participant exomes to work with.

The exome makes up approximately 1.5 percent of the human genome and consists of the protein-coding regions of all known genes. By studying genetic variation in exomes across a large, diverse population, scientists can better understand the population’s structure, helping researchers address evolutionary questions and describe how the genome works.
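In round numbers (assuming a roughly 3.1 billion base-pair human genome), that 1.5 percent works out to:

```python
genome_bp = 3.1e9        # human genome: roughly 3.1 billion base pairs
exome_fraction = 0.015   # exome: ~1.5 percent of the genome
exome_bp = genome_bp * exome_fraction
print(f"~{exome_bp / 1e6:.1f} Mb of protein-coding sequence")  # ~46.5 Mb
```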

With a dataset as large as UK Biobank’s, it is also possible to identify the specific genetic variants associated with inherited diseases, including cardiovascular disease, neurodegenerative conditions and some kinds of cancer.

Exome sequencing can even shed light on potential genetic drivers that might increase or decrease an individual’s risk of severe disease from COVID-19 infection, Effingham said. As the pandemic continues, UK Biobank is adding COVID case data, vaccination status, imaging data and patient outcomes for thousands of participants to its database.

Get started with NVIDIA Clara Parabricks on the DNAnexus-developed UK Biobank Research Analysis Platform. Learn more about the exome sequencing project by registering for this webinar, which takes place Feb. 17 at 8 a.m. Pacific.

Subscribe to NVIDIA healthcare news here

Main image shows the freezer facility at UK Biobank where participant samples are stored. Image courtesy of UK Biobank. 

The post UK Biobank Advances Genomics Research with NVIDIA Clara Parabricks appeared first on The Official NVIDIA Blog.


Animator Lets 3D Characters Get Their Groove on With NVIDIA Omniverse and Reallusion

Editor’s note: This post is a part of our Meet the Omnivore series, which features individual creators and developers who use NVIDIA Omniverse to boost their artistic or engineering processes.


Benjamin Sokomba Dazhi, aka Benny Dee, has learned the ins and outs of the entertainment industry from many angles — first as a rapper, then as a music video director and now as a full-time animator.

After eight years of self-teaching, Dazhi has mastered the art of animation — landing roles as head animator for the film The Legend of Oronpoto, and as creator and director of the Cartoon Network Africa Dance Challenge, a series of dance-along animations that teaches children African-inspired choreography.

Based in north-central Nigeria, Dazhi is building a team for his indie animation studio, JUST ART, which creates animation films focused on action, sci-fi, horror and humor.

Dazhi uses NVIDIA Omniverse — a physically accurate 3D design collaboration platform available with RTX-powered GPUs and part of the NVIDIA Studio suite of tools for creators — with Reallusion’s iClone and Character Creator to supercharge his artistic workflow.

He uses Omniverse Connectors for Reallusion apps for character and prop creation and animation, set dressing and cinematics.

Music, Movies and Masterful Rendering

From animated music videos to clips for action films, Dazhi has a multitude of projects — and accompanying deadlines.

“The main challenges I faced when trying to meet deadlines were long render times and difficulties with software compatibility, but using an Omniverse Connector for Reallusion’s iClone app has been game-changing for my workflow,” he said.

Using Omniverse, Dazhi accomplishes lighting and materials setup, rendering, simulation and post-production processes.

With these tools, it took Dazhi just four minutes to render this clip of a flying car — a task, he said, that would have otherwise taken hours.

“The rendering speed and photorealistic output quality of Omniverse is a breakthrough — and Omniverse apps like Create and Machinima are very user-friendly,” he said.

Such 3D graphics tools are especially important for the development of indie artists, Dazhi added.

“In Nigeria, there are very few animation studios, but we are beginning to grow in number thanks to easy-to-use tools like Reallusion’s iClone, which is the main animation software I use,” he said.

Dazhi plans to soon expand his studio, working with other indie artists via Omniverse’s real-time collaboration feature. Through his films, he hopes to show viewers “that it’s more than possible to make high-end content as an indie artist or small company.”

See Dazhi’s work in the NVIDIA Omniverse Gallery, and hear more about his creative workflow live during a Twitch stream on Jan. 26 at 11 a.m. Pacific.

Creators can download NVIDIA Omniverse for free and get started with step-by-step tutorials on the Omniverse YouTube channel. For additional resources and inspiration, follow Omniverse on Instagram, Twitter and Medium. To chat with the community, check out the Omniverse forums and join our Discord Server.

The post Animator Lets 3D Characters Get Their Groove on With NVIDIA Omniverse and Reallusion appeared first on The Official NVIDIA Blog.


Vulkan Fan? Six Reasons to Run It on NVIDIA

Many different platforms, same great performance. That’s why Vulkan is a very big deal.

With the release Tuesday of Vulkan 1.3, NVIDIA continues its unparalleled record of day one driver support for this cross-platform GPU application programming interface for 3D graphics and computing.

Vulkan has been created by experts from across the industry working together at the Khronos Group, an open standards consortium. From the start, NVIDIA has worked to advance this effort. NVIDIA’s Neil Trevett has been Khronos president since its earliest days.

“NVIDIA has consistently been at the forefront of computer graphics with new, enhanced tools, and technologies for developers to create rich game experiences,” said Jon Peddie, president of Jon Peddie Research.

“Their guidance and support for Vulkan 1.3 development, and release of a new compatible driver on day one across NVIDIA GPUs contributes to the successful cross-platform functionality and performance for games and apps this new API will bring,” he said.

With a simpler, thinner driver and efficient CPU multi-threading capabilities, Vulkan has less latency and overhead than alternatives, such as OpenGL or older versions of Direct3D.

If you use Vulkan, NVIDIA GPUs are a no-brainer. Here’s why:

  1. NVIDIA consistently provides industry leadership to evolve new Vulkan functionality and is often the first to make leading-edge computer graphics techniques available to developers. This ensures cutting-edge titles are supported on Vulkan and, by extension, made available to more gamers.
  2. NVIDIA designs hardware to provide the fastest Vulkan performance for your games and applications. For example, NVIDIA GPUs perform up to 30 percent faster than the nearest competition on games such as Doom Eternal with advanced rendering techniques such as ray tracing.
  3. NVIDIA provides the broadest range of Vulkan functionality to ensure you can run the games and apps that you want and need. NVIDIA’s production drivers support advanced features such as ray-tracing and DLSS AI rendering across multiple platforms, including Windows and popular Linux distributions like Ubuntu, Kylin and RHEL.
  4. NVIDIA works hard to be the platform of choice for Vulkan development with tools that are often the first to support the latest Vulkan functionality, encouraging apps and games to be optimized first for NVIDIA. NVIDIA Nsight, our suite of development tools, has integrated support for Vulkan, including debugging and optimizing of applications using full ray-tracing functionality. NVIDIA also provides extensive Vulkan code samples, tutorials and best practice guidance so developers can get the very best performance from their code.
  5. NVIDIA makes Vulkan available across a wider range of platforms and hardware than anyone else for easier cross-platform portability. NVIDIA ships Vulkan on PCs, embedded platforms, automotive and the data center. And gamers enjoy ongoing support of the latest Vulkan API changes with older GPUs.
  6. NVIDIA aims to bulletproof your games with highly reliable game-ready drivers. NVIDIA treats Vulkan as a first-class citizen API with focused development and support. In fact, developers can download our zero-day Vulkan 1.3 drivers right now at https://developer.nvidia.com/vulkan-driver.

Look for more details about our commitment and leadership in Vulkan on NVIDIA’s Vulkan web page. And if you’re not already a member of NVIDIA’s Developer Program, sign up. Developers can download new tools and drivers from NVIDIA for Vulkan 1.3 today. 

The post Vulkan Fan? Six Reasons to Run It on NVIDIA appeared first on The Official NVIDIA Blog.


Meta Works with NVIDIA to Build Massive AI Research Supercomputer

Meta Platforms gave a big thumbs up to NVIDIA, choosing our technologies for what it believes will be its most powerful research system to date.

The AI Research SuperCluster (RSC), announced today, is already training new models to advance AI.

Once fully deployed, Meta’s RSC is expected to be the largest customer installation of NVIDIA DGX A100 systems.

“We hope RSC will help us build entirely new AI systems that can, for example, power real-time voice translations to large groups of people, each speaking a different language, so they could seamlessly collaborate on a research project or play an AR game together,” the company said in a blog.

Training AI’s Largest Models

When RSC is fully built out, later this year, Meta aims to use it to train AI models with more than a trillion parameters. That could advance fields such as natural-language processing for jobs like identifying harmful content in real time.

In addition to performance at scale, Meta cited extreme reliability, security, privacy and the flexibility to handle “a wide range of AI models” as its key criteria for RSC.

Meta’s AI Research SuperCluster features hundreds of NVIDIA DGX systems linked on an NVIDIA Quantum InfiniBand network to accelerate the work of its AI research teams.

Under the Hood

The new AI supercomputer currently uses 760 NVIDIA DGX A100 systems as its compute nodes. They pack a total of 6,080 NVIDIA A100 GPUs linked on an NVIDIA Quantum 200Gb/s InfiniBand network to deliver 1,895 petaflops of TF32 performance.
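That throughput figure is consistent with the A100's published peak rates. A quick sanity check, assuming the 312-TFLOPS TF32 figure with sparsity from NVIDIA's A100 datasheet:

```python
systems = 760
gpus = systems * 8            # 8 A100 GPUs per DGX A100 -> 6,080 GPUs
tf32_tflops = 312             # A100 TF32 peak with sparsity (assumed figure)
total_pflops = gpus * tf32_tflops / 1000
print(f"{total_pflops:,.0f} petaflops")   # ~1,897, matching the quoted 1,895
```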

Despite challenges from COVID-19, RSC took just 18 months to go from an idea on paper to a working AI supercomputer, thanks in part to the NVIDIA DGX A100 technology at the foundation of Meta RSC.



20x Performance Gains

It’s the second time Meta has picked NVIDIA technologies as the base for its research infrastructure. In 2017, Meta built the first generation of this infrastructure for AI research with 22,000 NVIDIA V100 Tensor Core GPUs, handling 35,000 AI training jobs a day.

Meta’s early benchmarks showed RSC can train large NLP models 3x faster and run computer vision jobs 20x faster than the prior system.

In a second phase later this year, RSC will expand to 16,000 GPUs that Meta believes will deliver a whopping 5 exaflops of mixed precision AI performance. And Meta aims to expand RSC’s storage system to deliver up to an exabyte of data at 16 terabytes per second.
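The 5-exaflop projection likewise lines up with per-GPU peak rates, assuming the A100's 312-TFLOPS dense FP16 Tensor Core figure:

```python
phase2_gpus = 16_000
fp16_tflops = 312    # A100 dense FP16 Tensor Core peak (assumed figure)
exaflops = phase2_gpus * fp16_tflops / 1e6
print(f"~{exaflops:.1f} exaflops of mixed-precision AI performance")
```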

A Scalable Architecture

NVIDIA AI technologies are available to enterprises of any size.

NVIDIA DGX, which includes a full stack of NVIDIA AI software, scales easily from a single system to a DGX SuperPOD running on-premises or at a colocation provider. Customers can also rent DGX systems through NVIDIA DGX Foundry.

The post Meta Works with NVIDIA to Build Massive AI Research Supercomputer appeared first on The Official NVIDIA Blog.


How the Intelligent Supply Chain Broke and AI Is Fixing It

Let’s face it, the global supply chain may not be the most scintillating subject matter. Yet in homes and businesses around the world, it’s quickly become the topic du jour: empty shelves; record price increases; clogged ports and sick truckers leading to disruptions near and far.

The business of organizing resources to supply a product or service to its final user feels like it’s never been more challenged by so many variables. Shortages of raw materials, everything from resin and aluminum to paint and semiconductors, are nearing historic levels. Products that do get manufactured sit on cargo ships or in warehouses due to shortages of containers and workers and truck drivers that help deliver them to their final destinations. And consumer pocketbooks and paychecks are getting squeezed by rising prices.

The $9 trillion logistics industry is responding by investing in automation and using AI and big data to gain more insights throughout the supply chain. Big money is being poured into supply-chain technology startups, which raised $24.3 billion in venture funding in the first three quarters of 2021, 58 percent more than the full-year total for 2020, according to analytics firm PitchBook Data Inc.

Investing in AI

Behind these investments, businesses see technology and accelerated computing as key to finding firmer ground. At Manifest 2022, a logistics and supply chain conference taking place in Las Vegas, the industry is discussing how to refine supply chains and create cost efficiencies using AI and machine learning. Among their goals: address labor shortages, improve throughput in distribution centers, and route deliveries more efficiently.

Take a box of cereal. Getting it from the warehouse to a home has never been more expensive. Employee turnover rates of 30 percent to 46 percent in warehouses and distribution centers are just part of the problem.

To mitigate the challenge, Dematic, a global materials-handling company, is evaluating software from companies like Kinetic Vision, which has developed computer vision applications on the NVIDIA AI platform that add intelligence to automated warehouse systems.

Companies like Kinetic Vision and SF Technology use video data from cameras to optimize every step of the package lifecycle, accelerating throughput by up to 20 percent and reducing conveyor downtime, which can cost retailers $3,000 to $5,000 a minute.

Autonomous robot companies such as Gideon, 6 River Systems and Symbotic also use the NVIDIA AI platform to improve distribution center throughput with their autonomous guided vehicles that transport material efficiently within the warehouse or distribution centers.

And with NVIDIA Fleet Command, which securely deploys, manages and scales AI applications via the cloud across distributed edge infrastructure, these solutions can be remotely deployed and managed securely and at scale across hundreds of distribution centers.

Digital Twins and Simulation

Improving layouts of stores and distribution centers also has become key to achieving cost efficiencies. NVIDIA Omniverse, a virtual world simulation and 3D design collaboration platform, makes it possible to virtually design and simulate distribution centers at full fidelity. Users can improve workflows and throughput with photorealistic, physically accurate virtual environments.

Retailers could, for example, develop a solution on the Omniverse platform to design, test and simulate the flow of material and employee processes in digital twins of their distribution centers and then bring those optimizations into the real world.

Digital human simulations could test new workflows for employee ergonomics and productivity, and robots could be trained and operated with the NVIDIA Isaac robotics platform, helping create the most efficient layouts and workflows.

Kinetic Vision is using NVIDIA Omniverse to deliver digital twins technology and simulation to optimize factories and retail and consumer packaged goods distribution centers.

Leaning In

While manufacturers, supply chain operators and retailers each will have their own approaches to solving challenges, they’re leaning in on AI as a key differentiator.

Successfully implementing AI-enabled supply-chain management has enabled early adopters to improve logistics costs by 15 percent, inventory levels by 35 percent and service levels by 65 percent, compared with slower-moving competitors, according to McKinsey.

With some experts predicting the global supply chain won’t return to a new normal until at least 2023, companies are moving to take measures that matter most to the bottom line.

For more on how NVIDIA AI is powering the most innovative AI solutions for the supply chain and logistics industry, attend the following talks at Manifest:

  • A fireside chat, “Bringing Agility and Flexibility to Distribution Centers with AI,” on Wednesday, Jan. 26, at 2 p.m. Pacific, in Champagne 4 with Azita Martin, vice president and general manager of AI for retail at NVIDIA, and Michael Larsson, CEO of North America region at Dematic.
  • A presentation, “The Next Frontier in Warehouse Intelligence,” on the same date at 11:30 a.m. Pacific, in Champagne 4, with Azita Martin; Omer Rashid, vice president of Solutions Designs at DHL Supply Chain; and Renato Bottiglieri, chief logistics officer at Eggo Kitchen & House.

The post How the Intelligent Supply Chain Broke and AI Is Fixing It appeared first on The Official NVIDIA Blog.


NVIDIA GPUs Enable Simulation of a Living Cell

Every living cell contains its own bustling microcosm, with thousands of components responsible for energy production, protein building, gene transcription and more.

Scientists at the University of Illinois at Urbana-Champaign have built a 3D simulation that replicates these physical and chemical characteristics at a particle scale — creating a fully dynamic model that mimics the behavior of a living cell.

Published in the journal Cell, the project simulates a living minimal cell, which contains a pared-down set of genes essential for the cell’s survival, function and replication. The model uses NVIDIA GPUs to simulate 7,000 genetic information processes over a 20-minute span of the cell cycle – making it what the scientists believe is the longest, most complex cell simulation to date.

Minimal cells are simpler than naturally occurring ones, making them easier to recreate digitally.

“Even a minimal cell requires 2 billion atoms,” said Zaida Luthey-Schulten, chemistry professor and co-director of the university’s Center for the Physics of Living Cells. “You cannot do a 3D model like this in a realistic human time scale without GPUs.”

Once further tested and refined, whole-cell models can help scientists predict how changes to the conditions or genomes of real-world cells will affect their function. But even at this stage, minimal cell simulation can give scientists insight into the physical and chemical processes that form the foundation of living cells.

“What we found is that fundamental behaviors emerge from the simulated cell — not because we programmed them in, but because we had the kinetic parameters and lipid mechanisms correct in our model,” she said.

Lattice Microbes, the GPU-accelerated software co-developed by Luthey-Schulten and used to simulate the 3D minimal cell, is available on the NVIDIA NGC software hub.

Minimal Cell With Maximum Realism

To build the living cell model, the Illinois researchers simulated one of the simplest living cells, a parasitic bacterium called mycoplasma. They based the model on a trimmed-down version of a mycoplasma cell, synthesized by scientists at the J. Craig Venter Institute in La Jolla, Calif., that had just under 500 genes to keep it viable.

For comparison, a single E. coli cell has around 5,000 genes. A human cell has more than 20,000.

Luthey-Schulten’s team then used known properties of the mycoplasma’s inner workings, including amino acids, nucleotides, lipids and small-molecule metabolites, to build out the model with DNA, RNA, proteins and membranes.

“We had enough of the reactions that we could reproduce everything known,” she said.

Using Lattice Microbes software on NVIDIA Tensor Core GPUs, the researchers ran a 20-minute 3D simulation of the cell’s life cycle, before it starts to substantially expand or replicate its DNA. The model showed that the cell dedicated most of its energy to transporting molecules across the cell membrane, which fits its profile as a parasitic cell.

“If you did these calculations serially, or at an all-atom level, it’d take years,” said graduate student and paper lead author Zane Thornburg. “But because they’re all independent processes, we could bring parallelization into the code and make use of GPUs.”
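Thornburg's point about independent processes is the same one that underlies any embarrassingly parallel workload. A toy CPU-side sketch using Python's `multiprocessing` (the random-walk "kinetics" here is a made-up placeholder, not the paper's model):

```python
import random
from multiprocessing import Pool

def simulate_process(seed, steps=10_000):
    """Toy stand-in for one independent genetic-information process:
    a seeded random walk over a molecule count (placeholder kinetics,
    not the paper's model)."""
    rng = random.Random(seed)
    count = 100
    for _ in range(steps):
        count += 1 if rng.random() < 0.5 else -1
    return count

if __name__ == "__main__":
    # Each trajectory depends only on its own seed, so the work maps
    # cleanly onto parallel workers -- the same independence that lets
    # the real simulation fan out across thousands of GPU threads.
    with Pool() as pool:
        results = pool.map(simulate_process, range(32))
    print(len(results), "independent trajectories simulated")
```

On a GPU the same structure applies, with each independent process assigned to its own thread rather than an OS worker.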

Thornburg is working on another GPU-accelerated project to simulate growth and cell division in 3D. The team has recently adopted NVIDIA DGX systems and RTX A5000 GPUs to further accelerate its work, and found that using A5000 GPUs sped up the benchmark simulation time by 40 percent compared to a development workstation with a previous-generation NVIDIA GPU.

Learn more about researchers using NVIDIA GPUs to accelerate science breakthroughs by registering free for NVIDIA GTC, running online March 21-24.

Main image is a snapshot from the 20-minute 3D spatial simulation, showing yellow and purple ribosomes, red and blue degradasomes, and smaller spheres representing DNA polymers and proteins.

The post NVIDIA GPUs Enable Simulation of a Living Cell appeared first on The Official NVIDIA Blog.


GFN Thursday: ‘Tom Clancy’s Rainbow Six Extraction’ Charges Into GeForce NOW

Hello, Operator.

This GFN Thursday brings the launch of Tom Clancy’s Rainbow Six Extraction to GeForce NOW.

Plus, four new games are joining the GeForce NOW library to let you start your weekend off right.

Your New Mission, Should You Choose to Accept It

Grab your gadgets and get ready to game. Tom Clancy’s Rainbow Six Extraction releases today and is available to stream on GeForce NOW with DLSS for higher frame rates and beautiful, sharp images.

Join millions of players in the Rainbow Six universe. Charge in on your own or battle with buddies in a squad of up to three in thrilling co-op gameplay.

Select from 18 different Operators with specialized skills and progression paths that sync with your strategy to take on different challenges. Play riveting PvE in detailed containment zones, collect critical information and fight an ever-evolving, highly lethal alien threat known as the Archaeans that’s reshaping the battlefield.

Playing With the Power of GeForce RTX 3080

Members can stream Tom Clancy’s Rainbow Six Extraction and the 1,100+ games in the GeForce NOW library, including nearly 100 free-to-play titles, with all of the perks that come with the new GeForce NOW RTX 3080 membership.

Build your team, pick your strategy and complete challenging missions in Tom Clancy’s Rainbow Six Extraction.

This new tier of service allows members to play across their devices – including underpowered PCs, Macs, Chromebooks, SHIELD TVs, Android devices, iPhones or iPads – with the power of GeForce RTX 3080. That means benefits like ultra-low latency and eight-hour gaming sessions — the longest available — for a maximized experience on the cloud.

Plus, RTX 3080 members have the ability to fully control and customize in-game graphics settings, with RTX ON rendering environments in cinematic quality for supported games.

For more information, check out our membership FAQ.

New Games Dropping This Week

It’s fast. It’s furry. It’s Garfield Kart – Furious Racing.

The fun doesn’t stop. Members can look for the following titles joining the GFN Thursday library this week:

  • Tom Clancy’s Rainbow Six Extraction (New release on Ubisoft Connect, Jan. 20)
  • Blacksmith Legends (Steam)
  • Fly Corp (Steam)
  • Garfield Kart – Furious Racing (Steam)

We make every effort to launch games on GeForce NOW as close to their release as possible, but, in some instances, games may not be available immediately.

Finally, we’ve got a question for you and your gaming crew this week. Talk to us on Twitter or in the comments below.

The post GFN Thursday: ‘Tom Clancy’s Rainbow Six Extraction’ Charges Into GeForce NOW appeared first on The Official NVIDIA Blog.


Van, Go: Pony.ai Unveils Next-Gen Robotaxi Fleet Built on NVIDIA DRIVE Orin

Robotaxis are on their way to delivering safer transportation, driving across various landscapes and through starry nights.

This week, Silicon Valley-based self-driving startup Pony.ai announced its next-generation autonomous computing platform, built on NVIDIA DRIVE Orin for high-performance and scalable compute. The centralized system will serve as the brain for a robotaxi fleet of Toyota Sienna multipurpose vehicles (MPVs), marking a major leap forward for the nearly six-year-old company.

The AI compute platform enables multiple configurations for scalable autonomous driving development, all the way to level 4 self-driving vehicles.

“By leveraging the world-class NVIDIA DRIVE Orin SoC, we’re demonstrating our design and industrialization capabilities and ability to develop and deliver a powerful mass-production platform at an unprecedented scale,” said James Peng, co-founder and CEO of Pony.ai, which is developing autonomous systems for both robotaxis and trucks.

The transition to DRIVE Orin has significantly accelerated the company’s plans to deploy safer, more efficient robotaxis, with road testing set to begin this year in China and commercial rollout planned for 2023.

State-of-the-Art Intelligence

DRIVE Orin serves as the brain of autonomous fleets, enabling them to perceive their environment and continuously improve over time.

Born out of the data center, DRIVE Orin achieves 254 trillion operations per second (TOPS). It’s designed to handle the large number of applications and deep neural networks that run simultaneously in autonomous vehicles, while achieving systematic safety standards such as ISO 26262 ASIL-D.

Pony.ai’s DRIVE Orin-based autonomous computing unit features low latency, high performance and high reliability. It also incorporates a robust sensor solution that contains more than 23 sensors, including solid-state lidars, near-range lidars, radars and cameras.

The Pony.ai next-generation autonomous computing platform, built on NVIDIA DRIVE Orin.

This next-generation, automotive-grade system incorporates redundancy and diversity, maximizing safety while increasing performance and reducing weight and cost over previous iterations.

A Van for All Seasons

The Toyota Sienna MPV is a prime candidate for robotaxi services as it offers flexibility and ride comfort in a sleek package.

Toyota and Pony.ai began co-developing Sienna vehicles purpose-built for robotaxi services in 2019. The custom vehicles feature a dual-redundancy system and better control performance for level 4 autonomous driving capabilities.

The vehicles also debut new concept design cues, including rooftop signaling units that employ different colors and lighting configurations to communicate the robotaxi’s status and intentions.

This dedicated, future-forward design combined with the high-performance compute of NVIDIA DRIVE Orin lays a strong foundation for the coming generation of safer, more efficient robotaxi fleets.

The post Van, Go: Pony.ai Unveils Next-Gen Robotaxi Fleet Built on NVIDIA DRIVE Orin appeared first on The Official NVIDIA Blog.


New NVIDIA AI Enterprise Release Lights Up Data Centers

With a new year underway, NVIDIA is helping enterprises worldwide add modern workloads to their mainstream servers using the latest release of the NVIDIA AI Enterprise software suite.

NVIDIA AI Enterprise 1.1 is now generally available. Optimized, certified and supported by NVIDIA, the latest version of the software suite adds production support for containerized AI on VMware vSphere with Tanzu, previously available only on a trial basis. Enterprises can now run accelerated AI workloads on vSphere, in both Kubernetes containers and virtual machines, using NVIDIA AI Enterprise to support advanced AI development on mainstream IT infrastructure.

Enterprise AI Simplified with VMware vSphere with Tanzu, Coming Soon to NVIDIA LaunchPad

Among the top customer-requested features in NVIDIA AI Enterprise 1.1 is production support for running on VMware vSphere with Tanzu, which enables developers to run AI workloads on both containers and virtual machines within their vSphere environments. This new milestone in the AI-ready platform curated by NVIDIA and VMware provides an integrated, complete stack of containerized software and hardware optimized for AI, all fully managed by IT.
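In practice, running a containerized AI workload on a GPU-enabled Tanzu Kubernetes cluster follows the standard Kubernetes pattern of requesting GPU resources in the pod spec. The sketch below is illustrative only — the pod name, container image tag and entry point are hypothetical examples, not taken from NVIDIA AI Enterprise documentation:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: ai-training-example   # hypothetical pod name
spec:
  containers:
  - name: trainer
    image: nvcr.io/nvidia/pytorch:21.12-py3   # example NGC container image
    command: ["python", "train.py"]           # hypothetical training entry point
    resources:
      limits:
        nvidia.com/gpu: 1   # request one NVIDIA GPU via the device plugin
```

The `nvidia.com/gpu` resource limit is the conventional way the NVIDIA device plugin exposes GPUs to the Kubernetes scheduler; the same workload could instead run in a vSphere virtual machine under NVIDIA AI Enterprise.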

NVIDIA will soon add VMware vSphere with Tanzu support to the NVIDIA LaunchPad program for NVIDIA AI Enterprise, available at nine Equinix locations around the world. Qualified enterprises can test and prototype AI workloads at no charge through curated labs designed for the AI practitioner and IT admin. The labs showcase how to develop and manage common AI workloads like chatbots and recommendation systems, using NVIDIA AI Enterprise and VMware vSphere, and soon with Tanzu.

“Organizations are accelerating AI and ML development projects and VMware vSphere with Tanzu running NVIDIA AI Enterprise easily empowers AI development requirements with modern infrastructure services,” said Matt Morgan, vice president of Product Marketing, Cloud Infrastructure Business Group at VMware. “This announcement marks another key milestone for VMware and NVIDIA in our sustained efforts to help teams leverage AI across the enterprise.”

Growing Demand for Containerized AI Development

While enterprises are eager to use containerized development for AI, the complexity of these workloads requires orchestration across many layers of infrastructure. NVIDIA AI Enterprise 1.1 provides an ideal solution for these challenges as an AI-ready enterprise platform.

“AI is a very popular modern workload that is increasingly favoring deployment in containers. However, deploying AI capabilities at scale within the enterprise can be extremely complex, requiring enablement at multiple layers of the stack, from AI software frameworks, operating systems, containers, VMs, and down to the hardware,” said Gary Chen, research director, Software Defined Compute at IDC. “Turnkey, full-stack AI solutions can greatly simplify deployment and make AI more accessible within the enterprise.”

Domino Data Lab MLOps Validation Accelerates AI Research and Data Science Lifecycle

The 1.1 release of NVIDIA AI Enterprise also provides validation for the Domino Data Lab Enterprise MLOps Platform with VMware vSphere with Tanzu. This new integration enables more companies to cost-effectively scale data science by accelerating research, model development, and model deployment on mainstream accelerated servers.

“This new phase of our collaboration with NVIDIA further enables enterprises to solve the world’s most challenging problems by putting models at the heart of their businesses,” said Thomas Robinson, vice president of Strategic Partnerships at Domino Data Lab. “Together, we are providing every company the end-to-end platform to rapidly and cost-effectively deploy models enterprise-wide.”

NVIDIA AI Enterprise 1.1 features support for VMware vSphere with Tanzu and validation for the Domino Data Lab Enterprise MLOps Platform.

New OEMs and Integrators Offering NVIDIA-Certified Systems for NVIDIA AI Enterprise

Amidst the new release of NVIDIA AI Enterprise, the industry ecosystem is expanding with the first NVIDIA-Certified Systems from Cisco and Hitachi Vantara, as well as a growing roster of newly qualified system integrators offering solutions for the software suite.

The first Cisco system to be NVIDIA-Certified for NVIDIA AI Enterprise is the Cisco UCS C240 M6 rack server with NVIDIA A100 Tensor Core GPUs. The two-socket, 2RU form factor can power a wide range of storage and I/O-intensive applications, such as big data analytics, databases, collaboration, virtualization, consolidation and high-performance computing.

“At Cisco we are helping simplify customers’ hybrid cloud and cloud-native transformation. NVIDIA-Certified Cisco UCS servers, powered by Cisco Intersight, deliver the best-in-class AI workload experiences in the market,” said Siva Sivakumar, vice president of product management at Cisco. “The certification of the Cisco UCS C240 M6 rack server for NVIDIA AI Enterprise allows customers to add AI using the same infrastructure and management software deployed throughout their data center.”

The first NVIDIA-Certified System from Hitachi Vantara compatible with NVIDIA AI Enterprise is the Hitachi Advanced Server DS220 G2 with NVIDIA A100 Tensor Core GPUs. The general-purpose, dual-processor server is optimized for performance and capacity, and delivers a balance of compute and storage with the flexibility to power a wide range of solutions and applications.

“For many enterprises, cost is an important consideration when deploying new technologies like AI-powered quality control, recommender systems, chatbots and more,” said Dan McConnell, senior vice president, Product Management at Hitachi Vantara. “Accelerated with NVIDIA A100 GPUs and now certified for NVIDIA AI Enterprise, Hitachi Unified Compute Platform (UCP) solutions using the Hitachi Advanced Server DS220 G2 give customers an ideal path for affordably integrating powerful AI-ready infrastructure into their data centers.”

A broad range of additional server manufacturers offer NVIDIA-Certified Systems for NVIDIA AI Enterprise. These include Atos, Dell Technologies, GIGABYTE, H3C, Hewlett Packard Enterprise, Inspur, Lenovo and Supermicro, all of whose systems feature NVIDIA A100, NVIDIA A30 or other NVIDIA GPUs. Customers can also choose to deploy NVIDIA AI Enterprise on their own servers or on as-a-service bare metal infrastructure from Equinix Metal across nine regions globally.

AMAX, Colfax International, Exxact Corporation and Lambda are the newest system integrators qualified for NVIDIA AI Enterprise, joining a global ecosystem of channel partners that includes Axians, Carahsoft Technology Corp., Computacenter, Insight Enterprises, NTT, Presidio, Sirius, SoftServe, SVA System Vertrieb Alexander GmbH, TD SYNNEX, Trace3 and World Wide Technology.

Enterprises interested in experiencing development with NVIDIA AI Enterprise can apply for instant access to curated labs at no cost via the NVIDIA LaunchPad program, which also features labs using NVIDIA Fleet Command for edge AI, as well as NVIDIA Base Command for demanding AI development workloads.

The post New NVIDIA AI Enterprise Release Lights Up Data Centers appeared first on The Official NVIDIA Blog.


Fusing Art and Tech: MORF Gallery CEO Scott Birnbaum on Digital Paintings, NFTs and More

Browse through MORF Gallery — virtually or at an in-person exhibition — and you’ll find robots that paint, digital dreamscape experiences, and fine art brought to life by visual effects.

The gallery showcases cutting-edge, one-of-a-kind artwork from award-winning artists who fuse their creative skills with AI, machine learning, robotics and neuroscience.

Scott Birnbaum, CEO and co-founder of MORF Gallery, a Silicon Valley startup, spoke with NVIDIA AI Podcast host Noah Kravitz about digital art and non-fungible tokens, as well as ArtStick, a plug-in device that turns any TV into a premium digital art gallery.

Key Points From This Episode:

  • Artists featured by MORF Gallery create fine art using cutting-edge technology. For example, robots help with mundane tasks like painting backgrounds. Visual effects add movement to still paintings. And machine learning can help make NeoMasters — paintings based on original works that were once lost but resurrected or recreated with AI’s help.
  • The digital art space offers new and expanding opportunities for artists, technologists, collectors and investors. For one, non-fungible tokens, Birnbaum says, have been gaining lots of attention recently. He gives an overview of NFTs and how they authenticate original pieces of digital art.

Tweetables:

Paintbrushes, cameras, computers and AI are all technologies that “move the art world forward … as extensions of human creativity.” — Scott Birnbaum [8:27]

“Technology is enabling creative artists to really push the boundaries of what their imaginations can allow.” — Scott Birnbaum [13:33]

You Might Also Like:

Art(ificial) Intelligence: Pindar Van Arman Builds Robots That Paint

Pindar Van Arman, an American artist and roboticist, designs painting robots that explore the differences between human and computational creativity. Since his first system in 2005, he has built multiple artificially creative robots. The most famous, Cloud Painter, was awarded first place at Robotart 2018.

Real or Not Real? Attorney Steven Frank Uses Deep Learning to Authenticate Art

Steven Frank is a partner at the law firm Morgan Lewis, specializing in intellectual property and commercial technology law. He’s also half of the husband-and-wife team that used convolutional neural networks to authenticate artistic masterpieces, including da Vinci’s Salvator Mundi, with AI’s help.

Researchers Chris Downum and Leszek Pawlowicz Use Deep Learning to Accelerate Archaeology

Researchers in the Department of Anthropology at Northern Arizona University are using GPU-based deep learning algorithms to categorize sherds — tiny fragments of ancient pottery.

Subscribe to the AI Podcast

Get the AI Podcast through iTunes, Google Podcasts, Google Play, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn. If your favorite isn’t listed here, drop us a note.

Make the AI Podcast Better

Have a few minutes to spare? Fill out this listener survey. Your answers will help us make a better podcast.

The post Fusing Art and Tech: MORF Gallery CEO Scott Birnbaum on Digital Paintings, NFTs and More appeared first on The Official NVIDIA Blog.
