NVIDIA Inception Introduces New and Updated Benefits for Startup Members to Accelerate Computing

This week at GTC, we’re celebrating – celebrating the amazing and impactful work that developers and startups are doing around the world.

Nowhere is that more apparent than among the members of our global NVIDIA Inception program, designed to nurture cutting-edge startups that are revolutionizing industries. The program is free for startups of all sizes and stages of growth, offering go-to-market support, expertise and technology.

Inception members are doing amazing things on NVIDIA platforms across a multitude of areas, from digital twins and climate science, to healthcare and robotics. Now with over 10,000 members in 110 countries, Inception is a true reflection of the global startup ecosystem.

And we’re continuing momentum by offering new benefits to help startups accelerate even more.

Expanded Benefits

Inception members are now eligible for discounts across the NVIDIA Enterprise Software Suite, including NVIDIA AI Enterprise (NVAIE), Omniverse Enterprise and Riva Enterprise. NVAIE is a cloud-native software suite that is optimized, certified and supported by NVIDIA to streamline AI development and deployment. NVIDIA Omniverse Enterprise positions startups to build high-quality 3D tools or to simplify and accelerate complex 3D workflows. NVIDIA Riva Enterprise helps easily develop real-time applications like virtual assistants, transcription services and chatbots.

These discounts give Inception members greater access to NVIDIA software tools for building accelerated computing applications that align with their own solutions.

Another new benefit for Inception members is access to special leasing for NVIDIA DGX systems. Available now for members in the U.S., this offers an enhanced opportunity for startups to leverage DGX to deliver leading solutions for enterprise AI infrastructure at scale.

Inception members continue to receive credits and exclusive discounts for technical self-paced courses and instructor-led workshops through the NVIDIA Deep Learning Institute. Upcoming DLI workshops include “Building Conversational AI Applications” and “Applications of AI for Predictive Maintenance,” and courses include “Building Real-Time Video AI Applications” and “Deploying a Model for Inference at Production Scale.”

A Growing Ecosystem

NVIDIA Inception is home for startups to do all types of interesting work, and welcomes developers in every field, area and industry.

Within the program, healthcare is a leading field, with over 1,600 healthcare startups. This is followed closely by over 1,500 IT services startups, more than 825 media and entertainment (M&E) startups and upwards of 800 video analytics startups. More than 660 robotics startups are members of Inception, paving the next wave of AI, through digital and physical robots.

An indicator of Inception’s growing popularity is the increase in startups that are working in emerging areas, such as NVIDIA Omniverse, a development platform for 3D design collaboration and real-time, physically accurate simulation, as well as climate science and more. Several Inception startups are already developing on the Omniverse platform.

Inception member Charisma is leveraging Omniverse to build digital humans for virtual worlds, games and education. The company feeds interactive dialogue into the Omniverse Audio2Face app, tapping into NVIDIA V100 Tensor Core GPUs in the cloud.

Another Inception member, RIOS, helps enterprises automate factories, warehouses and supply chain operations by deploying AI-powered end-to-end robotic workcells. The company is harnessing Isaac Sim on Omniverse, which it also uses for customer deployments.

And RADiCAL is developing computer vision technology focused on detecting and reconstructing 3D human motion from 2D content. The startup is already developing on Omniverse to accelerate its work.

In the field of climate science, many Inception members are also doing revolutionary work to push the boundaries of what’s possible.

Inception member TrueOcean is running NVIDIA DGX A100 systems to develop AI algorithms for quantifying carbon dioxide capture within seagrass meadows, as well as for understanding subsea geology. Seagrass meadows can absorb and store carbon in the oxygen-depleted seabed, where it decomposes much slower than on land.

In alignment with NVIDIA’s own plans to build the world’s most powerful AI supercomputer for predicting climate change, Inception member Blackshark provides a semantic, photorealistic 3D digital twin of Earth as a plugin for Unreal Engine, relying on Omniverse as one of its platforms for building large virtual geographic environments.

If you’re a startup doing disruptive and exciting development, join NVIDIA Inception today.

Check out GTC sessions on Omniverse and climate change from NVIDIA Inception members. Registration is free. And watch NVIDIA founder and CEO Jensen Huang’s GTC keynote address, which features a new I AM AI video with Inception members HeartDub and PRENAV.

The post NVIDIA Inception Introduces New and Updated Benefits for Startup Members to Accelerate Computing appeared first on NVIDIA Blog.


NVIDIA Omniverse Upgrade Delivers Extraordinary Benefits to 3D Content Creators

At GTC, NVIDIA announced significant updates for millions of creators using the NVIDIA Omniverse real-time 3D design collaboration platform.

The announcements kicked off with updates to the Omniverse apps Create, Machinima and Showroom, with an imminent View release. Powered by GeForce RTX and NVIDIA RTX GPUs, they dramatically accelerate 3D creative workflows.

New Omniverse Connections are expanding the ecosystem and are now available in beta: Unreal Engine 5 Omniverse Connector and the Adobe Substance 3D Material Extension, with the Adobe Substance 3D Painter Omniverse Connector very close behind.

Maxon’s Cinema 4D now has Universal Scene Description (USD) support. Unlocking Cinema 4D workflows via OmniDrive brings deeper integration and flexibility to the Omniverse ecosystem.

Leveraging the Hydra render delegate feature, artists can now use Pixar HDStorm, Chaos V-Ray, Maxon Redshift and OTOY Octane Hydra render delegates within the viewport of all Omniverse apps, with Blender Cycles coming soon.

Whether refining 3D scenes or exporting final projects, artists can switch between the lightning-fast Omniverse RTX Renderer or their preferred renderer, giving them ultimate freedom to create however they like.

The Junk Shop by Alex Treviño. Original Concept by Anaïs Maamar. Note Hydra render delegates displayed in the renderer toggle menu.

These updates and more are available today in the Omniverse launcher, free to download, alongside the March NVIDIA Studio Driver release.

To celebrate the Machinima app update, we’re kicking off the #MadeInMachinima contest, in which artists can remix iconic characters from Squad, Mount & Blade II: Bannerlord and Mechwarrior 5 into a cinematic short in Omniverse Machinima to win NVIDIA Studio laptops. The submission window opens on March 29 and runs through June 27. Visit the contest landing page for details.

Can’t Wait to Create

Omniverse Create allows users to interactively assemble full-fidelity scenes by connecting to their favorite creative apps. Artists can add lighting, simulate physically accurate scenes and choose to render with Omniverse’s advanced RTX Renderer, or their favorite Hydra Render delegate.

Create version 2022.1 includes USD support for NURBS curves, a type of curve modeling useful for hair, particles and more. Scenes can now be rendered in passes with arbitrary output variables, or AOVs, delivering more control to artists during the compositing stage.
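Under the hood, spline curves like these are evaluated from a handful of control points. As a toy illustration only (this is not Omniverse or USD code, and the control points are invented), here is de Casteljau’s algorithm evaluating a cubic Bézier curve, the simplest relative of a NURBS curve:

```python
def lerp(a, b, t):
    """Linearly interpolate between 2D points a and b."""
    return (a[0] + (b[0] - a[0]) * t, a[1] + (b[1] - a[1]) * t)

def de_casteljau(points, t):
    """Evaluate a Bezier curve at parameter t by repeated interpolation."""
    pts = list(points)
    while len(pts) > 1:
        pts = [lerp(p, q, t) for p, q in zip(pts, pts[1:])]
    return pts[0]

# Four invented control points describing a single hair-like strand.
strand = [(0, 0), (0, 1), (1, 1), (1, 0)]
print(de_casteljau(strand, 0.5))  # (0.5, 0.75), the curve's midpoint
```

NURBS curves generalize this scheme with weights and a knot vector, which is what makes them flexible enough for hair and particle grooming.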

Animation curve editing is now possible with the addition of a graph editor. The feature helps animators feel comfortable working in creative apps such as Autodesk Maya and Blender, letting them iterate more simply, quickly and intuitively.

The new ActionGraph feature unlocks keyboard shortcuts and user-interface buttons to trigger complex events simultaneously.

Apply different colors and textures with ease in Omniverse Create.

NVIDIA PhysX 5.0 updates provide soft and deformable body support for objects such as fabric, jelly and balloons, adding further realism to scenes with no animation necessary.

VMaterials 2.0, a curated collection of MDL materials and lights, now has over 900 physical materials for artists to apply physically accurate, real-world materials to their scenes with just a double click, no shader writing necessary.

Several new Create features are also available in beta:

  • AnimGraph based on OmniGraph brings characters to life with a new graph editor for simple, no-code, realistic animations.
  • New animation retargeting allows artists to map animations from one character to another, automating complex tasks such as joint mapping, reference pose matching and previewing. When used with AnimGraph, artists can automate character rigging, saving countless hours of manual, tedious work.
  • Users can drag and drop assets they own, or click on others to purchase directly from the asset’s product page. Nearly 1 million assets from TurboSquid by Shutterstock, Sketchfab and Reallusion ActorCore are directly searchable in the Omniverse asset browser.

This otherworldly set of features is Create-ing infectious excitement for 3D workflows.

Machinima Magic

Omniverse Machinima 2022.1 beta provides tools for artists to remix, recreate and redefine animated video game storytelling through immersive visualization, collaborative design and photorealistic rendering.

The integration of NVIDIA Maxine’s body pose estimation feature gives users the ability to track and capture motion in real time using a single camera — without requiring a MoCap suit — with live conversion from a 2D camera capture to a 3D model.

Prerecorded videos can now be converted to animations with a new easy-to-use interface.

The retargeting feature applies these captured animations to custom-built skeletons, providing an easy way to animate a character with a webcam. No fancy, expensive device necessary, just a webcam.
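Conceptually, retargeting is a mapping from captured joints onto the joints of the target skeleton. A minimal sketch of that idea (hypothetical joint names and rotation values, not Machinima’s actual API):

```python
# Hypothetical mapping from captured-skeleton joint names to a custom rig.
JOINT_MAP = {
    "pelvis": "Hips",
    "spine_01": "Spine",
    "head": "Head",
    "hand_l": "LeftHand",
}

def retarget(frame):
    """Remap one frame of captured joint rotations onto the target rig,
    dropping any joints the target skeleton does not define."""
    return {JOINT_MAP[joint]: rotation
            for joint, rotation in frame.items() if joint in JOINT_MAP}

captured = {"pelvis": (0.0, 0.0, 0.0), "head": (10.0, 0.0, 5.0), "tail": (1.0, 0.0, 0.0)}
print(retarget(captured))  # {'Hips': (0.0, 0.0, 0.0), 'Head': (10.0, 0.0, 5.0)}
```

Real retargeting also compensates for differing bone lengths and rest poses, but the joint-name mapping is the core of the workflow.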

Sequencer functionality updates include a new user interface for easier navigation; new tools including splitting, looping, hold and scale; more drag-and-drop functionality to simplify pipelines; and a new audio graph display.

Stitching and building cinematics is now as intuitive as editing video projects.

Step Into the Showroom

Omniverse Showroom 2022.1 includes seven new scenes that invite the newest of users to get started and embrace the incredible possibilities and technology within the platform.

Artists can engage with tech demos showcasing PhysX rigid- and soft-body dynamics; Flow, with combustible fluid, smoke and fire; and Blast, featuring destruction and fractures.

Enjoy the View

Omniverse View 2022.1 will enable non-technical project reviewers to collaboratively and interactively review 3D design projects in stunning photorealism, with several astonishing new features.

Markup gives artists the ability to add 2D feedback based on their viewpoint, including shapes and scribbles, for 3D feedback in the cloud.

Turntable places an interactive scene on a virtual table that can be rotated to see how realistic lighting conditions affect the scene in real time, a boon for high-end movie production teams and architects.

Teleport and Waypoints allow artists to easily jump around their scenes and preset fully interactive views of Omniverse scenes for sharing.

Omniverse Ecosystem Expansion Continues

New beta Omniverse Connectors and extensions add variety and versatility to 3D creative workflows.

Now available, an Omniverse Connector for Unreal Engine 5 allows live-sync workflows.

The Adobe Substance 3D Material extension is now available, with a beta Substance 3D Painter Omniverse Connector coming soon, enabling artists to achieve more seamless, live-sync texture and material workflows.

Maxon’s Cinema 4D now supports USD and is compatible with OmniDrive, unlocking Omniverse workflows for visualization specialists.

Finally, a new CAD importer enables product designers to convert 26 popular CAD formats into Omniverse USD scenes.

More Machinima Magic — With Prizes

The #MadeInMachinima contest asks participants to build scenes and assets — composed of characters from Squad, Mount & Blade II: Bannerlord and Mechwarrior 5 — using Omniverse Machinima.

Rooster Teeth, the studio behind the legendary Halo machinima series Red vs. Blue, produced this magnificent cinematic short in Machinima. Take a look to see what’s possible.

Machinima expertise, while welcome, is not required; this contest is for creators of all levels. Three talented winners will get an NVIDIA Studio laptop, powerful and purpose-built with vivid color displays and blazing-fast memory and storage, to boost future Omniverse sessions.

Machinima will be prominently featured at the Game Developers Conference, where game artists, producers, developers and designers come together to exchange ideas, educate and inspire. At the show, we also launched Omniverse for Developers, providing a more collaborative environment for the creation of virtual worlds.

NVIDIA offers sessions at GDC to assist content creators featuring virtual worlds and AI, real-time ray tracing, and developer tools. Check out the complete list.

Launch or download Omniverse today.

The post NVIDIA Omniverse Upgrade Delivers Extraordinary Benefits to 3D Content Creators appeared first on NVIDIA Blog.


At GTC: NVIDIA RTX Professional Laptop GPUs Debut, New NVIDIA Studio Laptops, a Massive Omniverse Upgrade and NVIDIA Canvas Update

Digital artists and creative professionals have plenty to be excited about at NVIDIA GTC.

Impressive NVIDIA Studio laptop offerings from ASUS and MSI launch with upgraded RTX GPUs, providing more options for professional content creators to elevate and expand creative possibilities.

NVIDIA Omniverse gets a significant upgrade — including updates to the Omniverse Create, Machinima and Showroom apps, with an imminent View release. A new Unreal Engine Omniverse Connector beta is out now with our Adobe Substance 3D Painter Connector close behind.

Omniverse artists can now use Pixar HDStorm, Chaos V-Ray, Maxon Redshift and OTOY Octane Hydra render delegates within the viewport of all Omniverse apps, bringing more freedom and choice to 3D creative workflows, with Blender Cycles coming soon. Read our Omniverse blog for more details.

NVIDIA Canvas, the beta app sensation using advanced AI to quickly turn simple brushstrokes into realistic landscape images, has received a stylish update.

The March Studio Driver, available for download today, optimizes the latest creative app updates, featuring Blender Cycles 3.1, all with the stability and reliability NVIDIA Studio delivers.

To celebrate, NVIDIA is kicking off the #MadeInMachinima contest. Artists can remix iconic characters from Squad, Mount & Blade II: Bannerlord and Mechwarrior 5 into a cinematic short in Omniverse Machinima to win NVIDIA Studio laptops. The submission window opens on March 29 and runs through June 27. Visit the contest landing page for details.

New NVIDIA RTX Laptop GPUs Unlock Endless Creative Possibilities

Professionals on the go have powerful new laptop GPUs to choose from, with faster speeds and larger memory options: RTX A5500, RTX A4500, RTX A3000 12GB, RTX A2000 8GB and NVIDIA RTX A1000. These GPUs incorporate the latest RTX and Max-Q technology, are available in thin and light laptops, and deliver extraordinary performance.

New NVIDIA RTX laptop GPUs tackle creative workflows enabling creation from anywhere.

Our new flagship laptop GPU, the NVIDIA RTX A5500 with 16GB of memory, is capable of handling the most challenging 3D and video workloads, with up to double the rendering performance of the previous-generation RTX 5000.

The most complex, advanced, creative workflows have met their match.

NVIDIA Studio Laptop Drop

Three extraordinary Studio laptops are available for purchase today.

The ASUS ProArt Studiobook 16 is capable of incredible performance, and is configurable with a wide range of professional and consumer GPUs. It’s rich with creative features: certified color-accurate 16-inch 120 Hz 3.2K OLED wide-view 16:10 display, a three-button touchpad for 3D designers, ASUS dial for video editing and an enlarged touchpad for stylus support.

MSI’s Creator Z16P and Z17 sport an elegant and minimalist design, featuring up to an NVIDIA RTX 3080 Ti or RTX A5500 GPU, and boast a factory-calibrated True Pixel display with QHD+ resolution and 100 percent DCI-P3 color.

NVIDIA Studio laptops are tested and validated for maximum performance and reliability. They feature the latest NVIDIA technologies that deliver real-time ray tracing, AI-enhanced features and time-saving rendering capabilities. These laptops have access to the exclusive Studio suite of software — including best-in-class Studio Drivers, NVIDIA Omniverse, Canvas, Broadcast and more.

In the weeks ahead, ASUS and GIGABYTE will make it even easier for new laptop owners to enjoy one of the Studio benefits. Upgraded livestreams, voice chats and video calls — powered by AI — will be available immediately with the NVIDIA Broadcast app preinstalled in their ProArt and AERO product lines.

To Omniverse and Beyond

New Omniverse Connections are expanding the ecosystem and are now available in beta: Unreal Engine 5 Omniverse Connector and the Adobe Substance 3D Material Extension, with the Adobe Substance 3D Painter Omniverse Connector very close behind, allowing users to enjoy seamless, live-edit texture and material workflows.

Maxon’s Cinema 4D now supports USD and is compatible with OmniDrive, unlocking Omniverse workflows for visualization specialists.

Artists can now use Pixar HDStorm, Chaos V-Ray, Maxon Redshift and OTOY Octane renderers within the viewport of all Omniverse apps, with Blender Cycles coming soon. Whether refining 3D scenes or exporting final projects, artists can switch between the lightning-fast Omniverse RTX Renderer and their preferred renderer.

The Junk Shop by Alex Treviño. Original Concept by Anaïs Maamar. Note Hydra render delegates displayed in the renderer toggle menu.

CAD designers can now directly import 26 popular CAD formats into Omniverse USD scenes.

The integration of NVIDIA Maxine’s body pose estimation feature in the Omniverse Machinima app gives users the ability to track and capture motion in real time using a single camera — without requiring a MoCap suit — with live conversion from a 2D camera capture to a 3D model.

Read more about Omniverse for content creators here.

And if you haven’t downloaded Omniverse, now’s the time.

Your Canvas, Never Out of Style

Styles in Canvas — preset filters that modify the look and feel of the painting — can now be modified in up to 10 different variations.

More style variations enhance artist creativity while providing additional options within the theme of their selected style.

Check out style variations; and if you haven’t already, download Canvas, which is free for RTX owners.

3D Creative App Updates Backed by March NVIDIA Studio Driver

In addition to supporting the latest updates for NVIDIA Omniverse and NVIDIA Canvas, the March Studio Driver also supports a host of other recent creative app and renderer updates.

The highly anticipated Blender 3.1 update adds USD preview surface material export support, making it easier to move assets between USD-supported apps, including Omniverse.

Blender artists equipped with NVIDIA RTX GPUs maintain performance advantages over Macs. Midrange GeForce RTX 3060 Studio laptops deliver 3.5x faster rendering than the fastest M1 Max MacBooks per Blender’s benchmark testing.

Performance testing conducted by NVIDIA in March 2022 with Intel Core i9-12900HK, 32GB RAM and MacBook Pro 16 with M1 Max, 32GB RAM. NVIDIA Driver 511.97.

Luxion KeyShot 11 brings several updates: GPU-accelerated 3D paint features, physical simulation using NVIDIA PhysX, and NVIDIA OptiX shader enhancements, speeding up animation workflows by up to 3x.

GPU Audio Inc., with an eye on the future, taps into parallel processing power for audio solutions, introducing an NVIDIA GPU-based VST filter to remove extreme frequencies and improve sound quality — an audio production game changer.

Download the March Studio Driver today.

On-Demand Sessions for Creators

Join the first GTC breakout session dedicated to the creative community.

“NVIDIA Studio and Omniverse for the Next Era of Creativity” will include artists and directors from NVIDIA’s creative team. Network with fellow 3D artists and get Omniverse feature support to enhance 3D workflows. Join this free session on Wednesday, March 23, from 7-8 a.m. Pacific.

It’s just one of many Omniverse sessions available to watch live or on demand.

Themed GTC sessions and demos covering visual effects, virtual production and rendering, AI art galleries, and building and infrastructure design are also available to help realize your creative ambition.

Real or rendered?

Also this week, game artists, producers, developers and designers are coming together for the annual Game Developers Conference where NVIDIA launched Omniverse for Developers, providing a more collaborative environment for the creation of virtual worlds.

At GDC, NVIDIA sessions to assist content creators in the gaming industry will feature virtual worlds and AI, real-time ray tracing, and developer tools. Check out the complete list.

To boost your creativity throughout the year, follow NVIDIA Studio on Facebook, Twitter and Instagram. There you’ll find the latest information on creative app updates, new Studio apps, creator contests and more. Get updates directly to your inbox by subscribing to the Studio newsletter.

The post At GTC: NVIDIA RTX Professional Laptop GPUs Debut, New NVIDIA Studio Laptops, a Massive Omniverse Upgrade and NVIDIA Canvas Update appeared first on NVIDIA Blog.


Keynote Wrap Up: Turning Data Centers into ‘AI Factories,’ NVIDIA CEO Intros Hopper Architecture, H100 GPU, New Supercomputers, Software

Promising to transform trillion-dollar industries and address the “grand challenges” of our time, NVIDIA founder and CEO Jensen Huang Tuesday shared a vision of an era where intelligence is created on an industrial scale and woven into real and virtual worlds.

Kicking off NVIDIA’s GTC conference, Huang introduced new silicon — including the new Hopper GPU architecture and new H100 GPU, new AI and accelerated computing software and powerful new data-center-scale systems.

“Companies are processing, refining their data, making AI software, becoming intelligence manufacturers,” Huang said, speaking from a virtual environment in the NVIDIA Omniverse real-time 3D collaboration and simulation platform as he described how AI is “racing in every direction.”

And all of it will be brought together by Omniverse to speed collaboration between people and AIs, better model and understand the real world, and serve as a proving ground for new kinds of robots, “the next wave of AI.”

Huang shared his vision with a gathering that has become one of the world’s most important AI conferences, bringing together leading developers, scientists and researchers.

The conference features more than 1,600 speakers, including from companies such as American Express, DoorDash, LinkedIn, Pinterest, Salesforce, ServiceNow, Snap and Visa, as well as 200,000 registered attendees.

Huang’s presentation began with a spectacular flythrough of NVIDIA’s new campus, rendered in Omniverse, including buzzing labs working on advanced robotics projects.

He shared how the company’s work with the broader ecosystem is saving lives by advancing healthcare and drug discovery, and even helping save our planet.

“Scientists predict that a supercomputer a billion times larger than today’s is needed to effectively simulate regional climate change,” Huang said.

“NVIDIA is going to tackle this grand challenge with our Earth-2, the world’s first AI digital twin supercomputer, and invent new AI and computing technologies to give us a billion-X before it’s too late,” he said.

New Silicon — NVIDIA H100: A “New Engine of the World’s AI Infrastructure”

To power these ambitious efforts, Huang introduced the NVIDIA H100, built on the Hopper architecture, as the “new engine of the world’s AI infrastructures.”

AI applications like speech, conversation, customer service and recommenders are driving fundamental changes in data center design, he said.

“AI data centers process mountains of continuous data to train and refine AI models,” Huang said. “Raw data comes in, is refined, and intelligence goes out — companies are manufacturing intelligence and operating giant AI factories.”

The factory operation is 24/7 and intense, Huang said, and minor improvements in quality drive a significant increase in customer engagement and company profits.

H100 will help these factories move faster. The “massive” 80 billion transistor chip uses TSMC’s 4N process.

“Hopper H100 is the biggest generational leap ever — 9x at-scale training performance over A100 and 30x large-language-model inference throughput,” Huang said.

 

Hopper is packed with technical breakthroughs, including a new Transformer Engine to speed up these networks 6x without losing accuracy.

“Transformer model training can be reduced from weeks to days,” Huang said.

H100 is in production, with availability starting in Q3, Huang announced.

Huang also announced the Grace CPU Superchip, NVIDIA’s first discrete data center CPU for high-performance computing.

It comprises two CPU chips connected over a 900 gigabytes per second NVLink chip-to-chip interconnect to make a 144-core CPU with 1 terabyte per second of memory bandwidth, Huang explained.
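Those headline figures can be sanity-checked with quick arithmetic; the numbers below come from the announcement, while the derivation is just a back-of-envelope check:

```python
# Sanity-check the published Grace CPU Superchip figures.
total_cores = 144
dies = 2
cores_per_die = total_cores // dies
print(cores_per_die)  # 72 cores per Grace die

nvlink_c2c_gb_s = 900   # chip-to-chip NVLink bandwidth, GB/s
memory_bw_gb_s = 1000   # aggregate memory bandwidth, GB/s (1 TB/s)
# The die-to-die link runs at 90% of the superchip's total memory bandwidth,
# so crossing dies is far less of a bottleneck than on a slower interconnect.
print(nvlink_c2c_gb_s / memory_bw_gb_s)  # 0.9
```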

“Grace is the ideal CPU for the world’s AI infrastructures,” Huang said.

Huang also announced new Hopper GPU-based AI supercomputers — DGX H100, H100 DGX POD and DGX SuperPOD.

To connect it all, NVIDIA’s new NVLink high-speed interconnect technology will be coming to all future NVIDIA chips — CPUs, GPUs, DPUs and SoCs, Huang said.

He also announced NVIDIA will make NVLink available to customers and partners to build companion chips.

“NVLink opens a new world of opportunities for customers to build semi-custom chips and systems that leverage NVIDIA’s platforms and ecosystems,” Huang said.

New Software — AI Has “Fundamentally Changed” Software

Thanks to the speedups unleashed by accelerated computing, the progress of AI is “stunning,” Huang declared.

“AI has fundamentally changed what software can make and how you make software,” Huang said.

Transformers, Huang explained, have opened up self-supervised learning and removed the need for human-labeled data. As a result, Transformers are being unleashed in a growing array of fields.

“Transformers made self-supervised learning possible, and AI jumped to warp speed,” Huang said.
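The core idea of self-supervision is that training labels come from the raw data itself, with no human annotation. A deliberately tiny illustration of that idea (a frequency-based next-word predictor, nothing like a real Transformer):

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate".split()

# Self-supervised "training": each word's label is simply the word that
# follows it in the raw text -- no human labeling involved.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Predict the most frequent continuation seen in the corpus."""
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" (seen twice, vs. "mat" once)
```

Transformers apply the same trick at vastly larger scale, predicting masked or next tokens across billions of documents.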

Google BERT for language understanding, NVIDIA MegaMolBART for drug discovery, and DeepMind AlphaFold2 are all breakthroughs traced to Transformers, Huang said.

Huang walked through new deep learning models for natural language understanding, physics, creative design, character animation and even — with NVCell — chip layout.

“AI is racing in every direction — new architectures, new learning strategies, larger and more robust models, new science, new applications, new industries — all at the same time,” Huang said.

NVIDIA is “all hands on deck” to speed new breakthroughs in AI and speed the adoption of AI and machine learning to every industry, Huang said.

The NVIDIA AI platform is getting major updates, Huang said, including Triton Inference Server, the NeMo Megatron 0.9 framework for training large language models, and the Maxine framework for audio and video quality enhancement.

The platform includes NVIDIA AI Enterprise 2.0, an end-to-end, cloud-native suite of AI and data analytics tools and frameworks, optimized and certified by NVIDIA and now supported across every major data center and cloud platform.

“We updated 60 SDKs at this GTC,” Huang said. “For our 3 million developers, scientists and AI researchers, and tens of thousands of startups and enterprises, the same NVIDIA systems you run just got faster.”

NVIDIA AI software and accelerated computing SDKs are now relied on by some of the world’s largest companies.

“NVIDIA SDKs serve healthcare, energy, transportation, retail, finance, media and entertainment — a combined $100 trillion of industries,” Huang said.

‘The Next Evolution’: Omniverse for Virtual Worlds

Half a century ago, the Apollo 13 lunar mission ran into trouble. To save the crew, Huang said, NASA engineers created a model of the crew capsule back on Earth to “work the problem.”

“Extended to vast scales, a digital twin is a virtual world that’s connected to the physical world,” Huang said. “And in the context of the internet, it is the next evolution.”

NVIDIA Omniverse software for building digital twins, and new data-center-scale NVIDIA OVX systems, will be integral for “action-oriented AI.”

“Omniverse is central to our robotics platforms,” Huang said, announcing new releases and updates for Omniverse. “And like NASA and Amazon, we and our customers in robotics and industrial automation realize the importance of digital twins and Omniverse.”

OVX will run Omniverse digital twins for large-scale simulations with multiple autonomous systems operating in the same space-time, Huang explained.

The backbone of OVX is its networking fabric, Huang said, announcing the NVIDIA Spectrum-4 high-performance data networking infrastructure platform.

The world’s first 400Gbps end-to-end networking platform, NVIDIA Spectrum-4 consists of the Spectrum-4 switch family, NVIDIA ConnectX-7 SmartNIC, NVIDIA BlueField-3 DPU and NVIDIA DOCA data center infrastructure software.

And to make Omniverse accessible to even more users, Huang announced Omniverse Cloud. Now, with just a few clicks, collaborators can connect through Omniverse on the cloud.

Huang showed how this works with a demo of four designers, one an AI, collaborating to build a virtual world.

He also showed how Amazon uses Omniverse Enterprise “to design and optimize their incredible fulfillment center operations.”

“Modern fulfillment centers are evolving into technical marvels — facilities operated by humans and robots working together,” Huang said.

The ‘Next Wave of AI’: Robots and Autonomous Vehicles

New silicon, new software and new simulation capabilities will unleash “the next wave of AI,” Huang said, robots able to “devise, plan and act.”

NVIDIA Avatar, DRIVE, Metropolis, Isaac and Holoscan are robotics platforms built end to end and full stack around “four pillars”: ground-truth data generation, AI model training, the robotics stack and Omniverse digital twins, Huang explained.

The NVIDIA DRIVE autonomous vehicle system is essentially an “AI chauffeur,” Huang said.

And Hyperion 8 — NVIDIA’s hardware architecture for self-driving cars on which NVIDIA DRIVE is built — can achieve full self-driving with a 360-degree camera, radar, lidar and ultrasonic sensor suite.

Hyperion 8 will ship in Mercedes-Benz cars starting in 2024, followed by Jaguar Land Rover in 2025, Huang said.

Huang announced that NVIDIA Orin, a centralized AV and AI computer that acts as the engine of new-generation EVs, robotaxis, shuttles and trucks, started shipping this month.

And Huang announced Hyperion 9, which will ship starting in 2026 and features the upcoming DRIVE Atlan SoC, doubling the performance of the current DRIVE Orin-based architecture.

BYD, the second-largest EV maker globally, will adopt the DRIVE Orin computer for cars starting production in the first half of 2023, Huang announced.

And Lucid Motors revealed that its DreamDrive Pro advanced driver-assistance system is built on NVIDIA DRIVE.

Overall, NVIDIA’s automotive pipeline has grown to more than $11 billion over the next six years.

Clara Holoscan puts much of the real-time computing muscle used in DRIVE to work supporting medical instruments and real-time sensors, such as RF ultrasound, 4K surgical video, high-throughput cameras and lasers.

Huang showed a video of Holoscan accelerating images from a light-sheet microscope — which creates a “movie” of cells moving and dividing.

It typically takes an entire day to process the 3TB of data these instruments produce in an hour.

At the Advanced Bioimaging Center at UC Berkeley, however, researchers using Holoscan can process this data in real time, enabling them to auto-focus the microscope while experiments are running.
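For scale, a quick back-of-the-envelope calculation (ours, not from NVIDIA) shows what keeping up with that data stream implies:

```python
# Rough throughput math for the light-sheet microscope example above:
# the instrument produces 3 TB of data per hour, and the batch pipeline
# needs a full day to process each hour's worth of data.

TB = 1e12  # bytes, decimal terabyte

data_per_hour = 3 * TB  # bytes produced per hour
seconds_per_hour = 3600

# Sustained rate required to process the stream in real time
realtime_rate_gbps = data_per_hour / seconds_per_hour / 1e9  # GB/s

# Batch processing spends 24 hours per hour of data, so real-time
# operation requires at least a 24x speedup.
speedup_needed = 24

print(f"Required sustained rate: {realtime_rate_gbps:.2f} GB/s")
print(f"Speedup over batch:      {speedup_needed}x")
```

In other words, real-time operation here means sustaining roughly 0.83 GB/s of processing for the duration of an experiment.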

Holoscan development platforms are available to early-access customers today, will become generally available in May and will reach medical-grade readiness in the first quarter of 2023.

NVIDIA is also working with thousands of customers and developers who are building robots for manufacturing, retail, healthcare, agriculture, construction, airports and entire cities, Huang said.

NVIDIA’s robotics platforms consist of Metropolis and Isaac — Metropolis is a stationary robot tracking moving things, while Isaac is a platform for things that move, Huang explained.

To help robots navigate indoor spaces, like factories and warehouses, NVIDIA announced Isaac Nova Orin, a state-of-the-art compute and sensor reference platform built on Jetson AGX Orin to accelerate autonomous mobile robot development and deployment.

In a video, Huang showed how PepsiCo uses Metropolis and an Omniverse digital twin together.

Four Layers, Five Dynamics

Huang ended by tying all the technologies, product announcements and demos back into a vision of how NVIDIA will drive forward the next generation of computing.

NVIDIA announced new products across its four-layer stack: hardware; system software and libraries; software platforms, including NVIDIA HPC, NVIDIA AI and NVIDIA Omniverse; and AI and robotics application frameworks, Huang explained.

Huang also ticked through the five dynamics shaping the industry: million-X computing speedups, transformers turbocharging AI, data centers becoming AI factories, exponentially growing demand for robotics systems, and digital twins for the next era of AI.

“Accelerating across the full stack and at data center scale, we will strive for yet another million-X in the next decade,” Huang said, concluding his talk. “I can’t wait to see what the next million-X brings.”

Noting that Omniverse generated “every rendering and simulation you saw today,” Huang then introduced a stunning video put together by NVIDIA’s creative team featuring viewers “on one more trip into Omniverse” for a surprising musical jazz number set in the heart of NVIDIA’s campus featuring a cameo from Huang’s digital counterpart, Toy Jensen.

The post Keynote Wrap Up: Turning Data Centers into ‘AI Factories,’ NVIDIA CEO Intros Hopper Architecture, H100 GPU, New Supercomputers, Software appeared first on NVIDIA Blog.


Unlimited Data, Unlimited Possibilities: UF Health and NVIDIA Build World’s Largest Clinical Language Generator

The University of Florida’s academic health center, UF Health, has teamed up with NVIDIA to develop a neural network that generates synthetic clinical data — a powerful resource that researchers can use to train other AI models in healthcare.

Trained on a decade of data representing more than 2 million patients, SynGatorTron is a language model that can create synthetic patient profiles that mimic the health records it’s learned from. The 5-billion-parameter model is the largest language generator in healthcare.

“Synthetic data isn’t actually linked to a real human being, but it has similar characteristics to real patients,” said Dr. Duane Mitchell, an assistant vice president for research and director of the UF Clinical and Translational Science Institute. “SynGatorTron can, for example, create health records of digital diabetes patients that have features just like a real population.”

Using this synthetic data, researchers can create tools, models and tasks without risks or privacy concerns. These can then be used on real data to ask clinical questions, look for associations and even explore patient outcomes.

Working with synthetic data also makes it easier for different research institutions to collaborate and share models. And since the amount of data that can be synthesized is virtually limitless, researchers can use SynGatorTron-generated data to augment small datasets of rare disease patients or minority populations to reduce model bias.
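To make the augmentation idea concrete, here is a minimal, purely illustrative sketch: the record fields, cohort size and `synthesize` helper are hypothetical stand-ins, not the SynGatorTron interface.

```python
import random

# Illustrative augmentation workflow: pad a small real cohort with
# synthetic records so a downstream model trains on a better-balanced
# dataset. The fields and the `synthesize` helper are made up for this
# sketch; a real pipeline would sample from a trained language model.

random.seed(0)

real_rare_cohort = [
    {"diagnosis": "rare_disease_x", "age": 41, "a1c": 6.1},
    {"diagnosis": "rare_disease_x", "age": 57, "a1c": 7.4},
]

def synthesize(template: dict) -> dict:
    """Make a plausible synthetic record by perturbing a real one."""
    return {
        "diagnosis": template["diagnosis"],
        "age": max(0, template["age"] + random.randint(-5, 5)),
        "a1c": round(template["a1c"] + random.uniform(-0.5, 0.5), 1),
        "synthetic": True,  # never linked to a real human being
    }

TARGET_COHORT_SIZE = 10
augmented = list(real_rare_cohort)
while len(augmented) < TARGET_COHORT_SIZE:
    augmented.append(synthesize(random.choice(real_rare_cohort)))

print(len(augmented))  # 10 records: 2 real + 8 synthetic
```

The point is only the shape of the pipeline: a small real cohort goes in, and a larger, better-balanced training set comes out, with every synthetic record flagged as such.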

SynGatorTron was developed using the open-source NVIDIA Megatron-LM and NeMo frameworks. It’s based on UF Health’s GatorTron model, announced last year at NVIDIA GTC. The models were trained on HiPerGator-AI, the university’s in-house NVIDIA DGX SuperPOD system, which ranks among the world’s top 30 supercomputers.

GatorTron-S, a BERT-style transformer model trained on synthetic data generated by SynGatorTron, will be available for developers next month on the NGC software hub. 

SynGatorTron Opens Gate to Robust Training Data

To a doctor, an AI-generated doctor’s note can appear impractical at first glance — it doesn’t represent a real patient and won’t read as logical to an expert eye. So a clinician can’t make a direct analysis or diagnosis from it. But to an untrained AI, real and synthetic clinical data are both highly valuable.

“SynGatorTron’s generative capability is a great enabler of natural language processing for medicine,” said Dr. Mona Flores, global head of medical AI at NVIDIA. “Synthesizing different types of clinical records will democratize the ability to create all sorts of applications dependent on such data by addressing data sparsity and privacy.”

Once it’s available, research institutions outside UF Health could fine-tune the pretrained SynGatorTron model with their own localized data and apply it to their AI projects. For example, if a given condition or a patient population is underrepresented in a health system’s clinical data, SynGatorTron can be prompted to generate additional data with characteristics of that disease or population.

These AI-generated records could then be used to supplement and balance out real healthcare datasets used to train other neural networks, so that they better represent the population.

Since synthetic training datasets mimic real medical notes without being associated with specific patients, they can also be more readily shared across research institutions without raising privacy concerns.

“When you have the ability to mimic population characteristics without being tethered to real patients, it opens the imagination to see if we can generate realistic datasets that allow us to answer questions we couldn’t otherwise, due to constraints on access to data or limited information on patients of interest,” Mitchell said.

One potential application is in clinical trials, which often divide patients into treatment and control groups to measure the effectiveness of a new medication. An application derived from SynGatorTron-generated data could parse through real records and create a digital twin of patient records. These records could then be used as the control group in a clinical trial, instead of having a control group derived by giving real patients a placebo treatment.

Researchers developing a deep learning model to study a rare disease, or the effects of a treatment on a specific population, could also use SynGatorTron for data augmentation, generating more training data to supplement the limited amount of real medical records available.

Healthcare at GTC 

Register free for GTC, running online March 21-24, to discover the latest in AI and healthcare. Hear from SynGatorTron collaborators in the session “A Next-Generation Clinical Language Model,” taking place March 23 at 7 a.m. Pacific.

Watch the replay of NVIDIA founder and CEO Jensen Huang’s keynote address below:

The post Unlimited Data, Unlimited Possibilities: UF Health and NVIDIA Build World’s Largest Clinical Language Generator appeared first on NVIDIA Blog.


New NVIDIA RTX GPUs Tackle Demanding Professional Workflows and Hybrid Work, Enabling Creation From Anywhere

Remote work and hybrid workplaces are the new normal for professionals in many industries. Teams spread throughout the world are expected to create and collaborate while maintaining top productivity and performance.

Businesses use the NVIDIA RTX platform to enable their workers to keep up with the most demanding workloads, from anywhere. And today, NVIDIA is expanding its RTX offerings with seven new NVIDIA Ampere architecture GPUs for laptops and desktops.

The new NVIDIA RTX A500, RTX A1000, RTX A2000 8GB, RTX A3000 12GB, RTX A4500 and RTX A5500​ laptop GPUs expand access to AI and ray-tracing technology, delivering breakthrough performance no matter where you work. The laptops include the latest RTX and Max-Q technology, giving professionals the ability to take their workflows to the next level.

The new NVIDIA RTX A5500 desktop GPU combines the latest-generation RT Cores, Tensor Cores and CUDA cores with 24GB of memory for incredible rendering, AI, graphics and compute performance. Its ray-traced rendering is 2x faster than the previous generation and its motion blur rendering performance is up to 9x faster.

With the new NVIDIA RTX GPUs, artists can create photorealistic immersive digital experiences, scientists can make the latest groundbreaking discoveries, and engineers can develop innovative technology to advance us into the future.

Customer Adoption

Among the first to use the NVIDIA RTX A5500 is M4 Engineering, a leading aerospace engineering firm focused on conceptual aircraft design, analysis and development.

“The multi-app product development workflows we use at M4 are well-served by the NVIDIA RTX A5500 and its 24GB of memory,” said Brian Rotty, senior engineer at M4 Engineering. “My team can handle larger CAD and CAE datasets than before and, critically, we can interact with and iterate these larger datasets simultaneously by making use of the extra GPU memory headroom and compute capabilities of this new card.”

“The performance we get with the NVIDIA RTX A5500 is unprecedented,” said Hiram Rodriguez, design technology specialist at AS+GG Architecture. “Previously, our point cloud processing took too long and created a bottleneck for designing and analyzing current site conditions. Using the NVIDIA RTX A5500, in less than 20 minutes we can integrate a fully processed, geolocated, classified point cloud into our 3D models.”

“The new NVIDIA RTX A5500 gives us the ability to load highly detailed environments, while maintaining high frame rates and smooth camera motion to explore and develop the scenes,” said Yiotis Katsambas, executive director of Technology at Sony Pictures Animation. “By combining the A5500 with NVIDIA Omniverse Enterprise and our own FlixiVerse software, our artists and directors can immerse themselves in our virtual worlds and collaborate in real-time.”

Next-Generation RTX Technology

The new NVIDIA RTX GPUs feature the latest generation of NVIDIA RTX technology to accelerate graphics, AI and ray-tracing performance.

These GPUs are part of the comprehensive NVIDIA RTX platform, which includes NVIDIA GPU-accelerated software development kits, toolkits, frameworks and enterprise management tools.

These GPUs also take advantage of the accelerations of the NVIDIA Studio platform, including optimizations to leverage RTX hardware in 75 of the top creative apps, exclusive tools like NVIDIA Broadcast and Canvas, and the advanced real-time 3D design and collaboration platform, NVIDIA Omniverse.

NVIDIA RTX GPUs are certified by independent software vendors for more than 50 professional applications. Certification provides users with a reliable, dependable graphics and computing experience, backed by testing and development.

NVIDIA RTX laptop GPUs include: 

  • The latest NVIDIA RTX technology: 2nd gen RT Cores, 3rd gen Tensor Cores and NVIDIA Ampere architecture streaming multiprocessor, which provide up to 2x the throughput of the previous-generation architecture to tackle demanding rendering, ray tracing and AI workflows from anywhere.
  • NVIDIA Max-Q technology: AI-based system optimizations make thin and light laptops perform quieter, faster and more efficiently with Dynamic Boost, CPU Optimizer, Rapid Core Scaling, WhisperMode, Battery Boost, Resizable BAR and NVIDIA DLSS technology.
  • Up to 16GB of GPU memory: For the largest models, scenes and assemblies. The RTX A2000 8GB, RTX A3000 12GB and RTX A4500 now feature 2x the memory of their previous-generation counterparts for working with larger models, datasets and multi-app workflows.
  • Rich suite of NVIDIA software technology: Users can tap into unique benefits ranging from tetherless VR to collaborative 3D design with a wide variety of software tools, including NVIDIA CloudXR, NVIDIA Omniverse, NVIDIA Canvas, NVIDIA Broadcast, NVIDIA NGC, NVIDIA RTX Experience and more.

The NVIDIA RTX A5500 features the latest technologies in the NVIDIA Ampere architecture:

  • Second-generation RT Cores: Up to 2x the throughput of the first generation with the ability to run concurrent ray tracing, shading and denoising tasks.
  • Third-generation Tensor Cores: Up to 12x the training throughput over the previous generation, with support for new TF32 and Bfloat16 data formats.
  • CUDA Cores: Up to 3x the single-precision floating point throughput over the previous generation.
  • Up to 48GB of GPU memory: The RTX A5500 features 24GB of GDDR6 memory with ECC (error correction code). The RTX A5500 is expandable up to 48GB of memory using NVIDIA NVLink to connect two GPUs.
  • Virtualization: The RTX A5500 supports NVIDIA RTX Virtual Workstation (vWS) software for multiple high-performance virtual workstation instances that enable remote users to share resources to drive high-end design, AI and compute workloads.
  • PCIe Gen 4: Doubles the bandwidth of the previous generation and speeds up data transfers for data-intensive tasks such as AI, data science and creating 3D models.

Availability 

The new NVIDIA RTX A5500 desktop GPU is available today from channel partners and will be available through global system builders starting in Q2.

The new NVIDIA RTX laptop GPUs will be available in mobile workstations from global OEM partners including Acer, ASUS, BOXX Technologies, Dell Technologies, HP, Lenovo, and MSI starting this spring. Contact these partners for specific system configuration details and availability.

To learn more about NVIDIA RTX, watch the GTC 2022 keynote from Jensen Huang. Register for GTC 2022 for free to attend sessions with NVIDIA and industry leaders.

The post New NVIDIA RTX GPUs Tackle Demanding Professional Workflows and Hybrid Work, Enabling Creation From Anywhere appeared first on NVIDIA Blog.


First Wave of Startups Harnesses UK’s Most Powerful Supercomputer to Power Digital Biology Breakthroughs

Four NVIDIA Inception members have been selected as the first cohort of startups to access Cambridge-1, the U.K.’s most powerful supercomputer.

The system will help British companies Alchemab Therapeutics, InstaDeep, Peptone and Relation Therapeutics enable breakthroughs in digital biology.

Officially launched in July, Cambridge-1 — an NVIDIA DGX SuperPOD cluster powered by NVIDIA DGX A100 systems, BlueField-2 DPUs and NVIDIA InfiniBand networking — brings together NVIDIA’s decades-long work in accelerated computing, AI and life sciences. Located between London and Cambridge, it ranks among the world’s top 50 fastest computers and is powered by 100 percent renewable energy.

The supercomputer’s five founding partners have already been using it to advance healthcare, using AI to research brain diseases like dementia, design new drugs and more.

Now, the four startups are preparing to use Cambridge-1 to accelerate drug discovery, genome sequencing and disease research.

Each is a member of NVIDIA Inception, a free program that nurtures startups revolutionizing industries with cutting-edge technology. Inception gives members a custom set of ongoing benefits, such as NVIDIA Deep Learning Institute credits, marketing support and technology assistance from experts.

Alchemab Therapeutics: Discovering Antibody Therapeutics

Alchemab Therapeutics is identifying novel drug targets and therapeutics, and building patient stratification tools, with an initial focus on neurodegenerative conditions and cancer.

The company’s antibody drug discovery engine is being built on “nature’s most effective search engine: adaptive immunity,” according to Jake Galson, head of technology at Alchemab. This is the type of immunity developed when a person’s immune system responds to a pathogenic protein, such as those produced by cancers, or a foreign microorganism, like after an infection.

Alchemab’s platform sequences B-cells, which produce antibodies that fight disease, and computationally analyzes antibody responses among individuals who are susceptible to, yet resilient against, certain diseases.

“Approximately 10 trillion human antibody variants are possible, and having access to Cambridge-1 gives us a unique opportunity to learn meaningful representations from such an enormous body of data,” Galson said. “This will increase our understanding of antibody structure and function, and ultimately contribute to the discovery and development of novel therapeutics.”

Attend Alchemab’s session on deciphering the language of antibodies at GTC, a global AI conference running through March 24.

InstaDeep: Creating Decision-Making Systems for Biology

InstaDeep, an Elite member of the NVIDIA Partner Network, delivers AI-powered decision-making systems for the development of next-generation vaccines and therapeutics.

The company is looking to train a large AI language model using genomics data — and share the model with healthcare researchers to use for protein design and molecular dynamics simulations.

“There are over 12 billion nucleotide sequences from 450,000 species that are publicly available,” said Karim Beguir, co-founder and CEO of InstaDeep. “Researchers and life science professionals could benefit tremendously from a large-scale model providing data-driven insights from genome sequencing.”

Access to Cambridge-1, Beguir said, will enable InstaDeep to significantly scale the startup’s “compute capabilities and ambitions, and tackle exciting challenges in the development of novel treatments for patients.”

Learn more about how AI language models are applied in biology at InstaDeep’s GTC session on revolutionizing protein research with high performance computing.

Peptone: Providing Insight About Disordered Proteins

Peptone, a startup that received early approval to access Cambridge-1 last fall, is developing a physics engine called Oppenheimer, which will help deliver precise structural insights about intrinsically disordered proteins (IDPs), or proteins that lack a fixed 3D structure.

Diseases that stem from IDPs are usually difficult to treat, but Cambridge-1 will give Peptone the power to potentially “transform a typically undruggable IDP into a plausible drugging target,” said Kamil Tamiola, founder and CEO of Peptone.

“The supercomputer will enable us to perform high-throughput inference on millions of proteins in parallel and in a matter of hours,” Tamiola said. “Oppenheimer integrates advanced atomistic biophysical experiments with a next-generation supercomputing stack built on NVIDIA DGX A100 systems.”

Ultimately, the company will use the calculations to develop a proprietary and first-in-class line of drugs targeting selected IDPs.

Relation Therapeutics: Mapping the Causes of Disease

Another startup, Relation Therapeutics, combines single-cell profiling, human genetics, functional genomics and machine learning to better understand human biology.

RelationTx uses graph-based recommender system technologies to reveal causal relationships in diseases. RelationTx’s platform can identify the areas of biology to focus on for drug discovery and accelerate research efforts for diseases that have not yet been widely studied.

The company aims to transform how drug discovery and development is conducted, leading to new treatments for disease, according to Lindsay Edwards, chief technology officer at RelationTx.

“Ultimately, our mission is to get new medicines to patients who need them, faster and more efficiently than the current paradigm,” Edwards said. “Access to Cambridge-1 opens up areas of biology that were almost impossible to understand before, such as how genetic variation affects gene expression in inaccessible complex tissues and organ systems.”

Learn More About AI in Healthcare

Groundbreaking work in digital biology is to come from these startups — and Cambridge-1’s founding companies are already harnessing its power.

Using the supercomputer, AstraZeneca and NVIDIA developed the latest iteration of MegaMolBART, a natural language processing model that reads the text format of chemical compounds and uses AI to generate new molecules. The transformer chemistry model is capable of training chemical language models with over 1 billion parameters using the NVIDIA NeMo Megatron framework.

Learn more about AI-based innovation at GTC, where Kimberly Powell, vice president of healthcare at NVIDIA, will discuss how researchers, developers and medical device makers use the NVIDIA Clara platform to create breakthroughs in healthcare and drug discovery.

Watch NVIDIA founder and CEO Jensen Huang’s GTC keynote address.

Subscribe to NVIDIA healthcare news.

The post First Wave of Startups Harnesses UK’s Most Powerful Supercomputer to Power Digital Biology Breakthroughs appeared first on NVIDIA Blog.


NVIDIA Omniverse Ecosystem Expands 10x, Amid New Features and Services for Developers, Enterprises and Creators

When it comes to creating and connecting virtual worlds, over 150,000 individuals have downloaded NVIDIA Omniverse to make huge leaps in transforming 3D design workflows and achieve new heights of real-time, physically accurate simulations.

At GTC, NVIDIA today announced new releases and updates for Omniverse — including the latest Omniverse Connectors and libraries — expanding the platform’s ecosystem by 10x, and making Omniverse even more accessible to creators, developers, designers, engineers and researchers worldwide.

NVIDIA Omniverse Enterprise is helping leading companies enhance their pipelines and creative workflows. New Omniverse Enterprise customers include Amazon, DB Netze, DNEG, Kroger, Lowe’s and PepsiCo, which are all using the platform to build physically accurate digital twins or develop realistic immersive experiences for customers.

Enhancing Content Creation With New Connections and Libraries

The Omniverse ecosystem is expanding beyond design and content creation. In one year, Omniverse connections, ways to connect or integrate with the Omniverse platform, have grown 10x — with 82 connections through the extended Omniverse ecosystem.

  • New Third-Party Connections: Adobe Substance 3D Material Extension and Painter Connector, Epic Games Unreal Engine Connector and Maxon Cinema 4D will enable live-sync workflows between third-party apps and Omniverse.
  • New CAD Importers: These convert 26 common CAD formats to Universal Scene Description (USD) to better enable manufacturing and product design workflows within Omniverse.
  • New Asset Library Integrations: TurboSquid by Shutterstock, Sketchfab and Reallusion ActorCore assets are now directly available within Omniverse Apps asset browsers so users can simply search, drag and drop from close to 1 million Omniverse-ready 3D assets. New Omniverse-ready 3D assets, materials, textures, avatars and animations are also now available from A23D.
  • New Hydra Render Delegate Support: Users can integrate and toggle between their favorite Hydra delegate-supported renderers and the Omniverse RTX Renderer directly within Omniverse Apps. Now available in beta for Chaos V-Ray, Maxon Redshift and OTOY Octane, with Blender Cycles and Autodesk Arnold support coming soon.
Chaos V-Ray Hydra Render Delegate in NVIDIA Omniverse.

There are also new connections to industrial automation and digital twin software developers. Bentley Systems, the infrastructure engineering software company, announced the availability of LumenRT for NVIDIA Omniverse, powered by Bentley iTwin. It brings engineering-grade, industrial-scale, real-time, physically accurate visualization to nearly 39,000 Bentley Systems customers worldwide. Ipolog, a developer of factory, logistics and planning software, released three new connections to the platform. This, coupled with the growing Isaac Sim robotics ecosystem, allows customers such as BMW Group to better develop holistic digital twins.

LumenRT for NVIDIA Omniverse powered by Bentley iTwin.

Omniverse Enterprise Features and Availability Broadens

New updates are coming soon to Omniverse Enterprise, including the latest releases of Omniverse Kit 103, Omniverse Create and View 2022.1, Omniverse Farm, and DeepSearch.

Omniverse Enterprise on NVIDIA LaunchPad is now available across nine global regions. NVIDIA LaunchPad gives design practitioners and project reviewers instant, free turnkey access to hands-on Omniverse Enterprise labs, helping them make quicker, more confident software and infrastructure decisions.

Customers Drive Innovations With Omniverse Enterprise

Amazon has over 200 robotics facilities that handle millions of packages each day. It’s a complex operation that requires over half a million mobile drive robots to support warehouse logistics. Using Omniverse Enterprise and Isaac Sim, Amazon Robotics is building AI-enabled digital twins of its warehouses to better optimize warehouse design and flow, and train more intelligent robotic solutions.

PepsiCo is looking at using Omniverse Enterprise and Metropolis-powered digital twins to improve the efficiency and environmental sustainability of its supply chain of over 600 distribution centers in 200 regional markets.

“NVIDIA Omniverse will help us better streamline supply chain operations and reduce energy usage and waste, while advancing our mission toward sustainability,” said Qi Wang, vice president of Research and Development at PepsiCo. “When we’re looking at new products and processes, we will use digital twins to simulate and test models and environments in real time before applying changes to the physical distribution centers.”

Lowe’s Innovation Labs is exploring how Omniverse can help unlock the next generation of its stores. It is using the platform to push the boundaries of what’s possible in digital twins, simulation and advanced tools that remove friction for customers and associates.

Kroger plans to use Omniverse to optimize store efficiency and processes with digital twin store simulation.

Raising the Bar on Industrial Digital Twins

At GTC, NVIDIA announced NVIDIA OVX, a computing system architecture designed to power large-scale digital twins. NVIDIA OVX is built to operate complex simulations that will run within Omniverse, enabling designers, engineers and planners to create physically accurate digital twins and massive, true-to-reality simulation environments.

Latest Omniverse Technologies and Features

Major new releases and capabilities announced for Omniverse include:

  • New Developer Tools: Omniverse Code, an app that serves as an integrated development environment for developers and power users to easily build their own Omniverse extensions, apps or microservices.
  • DeepSearch: a new AI-based search service that lets users quickly search through massive, untagged 3D asset libraries using natural language or images. DeepSearch is available for Omniverse Enterprise customers in early access.
  • Omniverse Replicator: a framework for generating physically accurate 3D synthetic data to accelerate training and accuracy of perception networks — now available within Omniverse Code so developers can build their own domain-specific synthetic data engines.
  • OmniGraph, ActionGraph and AnimGraph: major new releases controlling behavior and animation.
  • Omniverse Avatar: a platform that uses AI and simulation technology to enable developers to build custom, intelligent, realistic avatars.
  • Omniverse XR app: a VR-optimized configuration of Omniverse View that enables users to experience their full-fidelity 3D scenes with full RTX ray tracing, at 1:1 scale, coming soon.
  • New versions of Omniverse Kit, Create, View and Machinima.

Read about the full release of new platform features.

To learn more about NVIDIA Omniverse, watch the GTC 2022 keynote from Jensen Huang. Register for GTC 2022 for free to attend sessions with NVIDIA and industry leaders.

The post NVIDIA Omniverse Ecosystem Expands 10x, Amid New Features and Services for Developers, Enterprises and Creators appeared first on NVIDIA Blog.


NVIDIA Unveils Isaac Nova Orin to Accelerate Development of Autonomous Mobile Robots

Next time socks, cereal or sandpaper shows up at your doorstep within hours of ordering, consider the behind-the-scenes logistics acrobatics that help get them there so fast.

Order fulfillment is a massive industry of moving parts. Warehouses, heavily supported by autonomous mobile robots (AMRs), can span 1 million square feet, expanding and reconfiguring to meet demand. For hospitals, retailers, airports, manufacturers and others, these facilities are an obstacle course of workers and bottlenecks.

To accelerate development of these AMRs, we’ve introduced Isaac Nova Orin, a state-of-the-art compute and sensor reference platform. It’s built on the powerful new NVIDIA Jetson AGX Orin edge AI system, available today. The platform includes the latest sensor technologies and high-performance AI compute capability.

New Isaac Software Arrives for AMR Ecosystem

In addition to Nova Orin, which will be available later this year, we’re delivering new software and simulation capabilities to accelerate AMR deployments — including hardware-accelerated modules, or Isaac ROS GEMs, that are essential for enabling robots to visually navigate. That’s key for mobile robots to better perceive their environment to safely avoid obstacles and efficiently plan paths.

New simulation capabilities, available in the NVIDIA Isaac Sim April release, will help save time when building virtual environments to test and train AMRs. Using 3D building blocks, developers can rapidly create realistic complex warehouse scenes and configurations to validate the robot’s performance on a breadth of logistics tasks.

Isaac Nova Orin Key Features 

Nova Orin comes with all of the compute and sensor hardware needed to design, build and test autonomy in AMRs.

Its two Jetson AGX Orin units provide up to 550 TOPS of AI compute for perception, navigation and human-machine interaction. These modules process data in real time from the AMR’s central nervous system: a sensor suite comprising up to six cameras, three lidars and eight ultrasonic sensors.

Nova Orin includes tools necessary to simulate the robot in Isaac Sim on Omniverse, as well as support for numerous ROS software modules designed to accelerate perception and navigation tasks. Tools are also provided for accurately mapping the robot’s environment using NVIDIA DeepMap.

The entire platform is calibrated and tested to work out of the box and give developers valuable time to innovate on new features and capabilities.

3D sensor field of Nova Orin

Enabling the Future

Much is at stake in intralogistics for AMRs, a market expected to top $46 billion by 2030, up from under $8 billion in 2021, according to estimates from ABI Research.
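Those figures work out to roughly a 21 percent compound annual growth rate; a quick check (our arithmetic, based on the ABI Research numbers cited above):

```python
# Implied compound annual growth rate (CAGR) for the AMR intralogistics
# market: about $8 billion in 2021 growing to about $46 billion in 2030.
start_value = 8.0    # billions of dollars, 2021
end_value = 46.0     # billions of dollars, 2030
years = 2030 - 2021  # 9-year span

cagr = (end_value / start_value) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # roughly 21% per year
```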

The old method of designing the AMR compute and sensor stack from the ground up is too costly in time and effort. Tapping into an existing platform allows manufacturers to focus on building the right software stack for the right robot application.

Improving productivity for factories and warehouses will depend on AMRs working safely and efficiently side by side at scale. High levels of autonomy, enabled by 3D perception from Nova Orin, will help drive that revolution.

As AMRs evolve, the need for secure deployment and management of the critical AI software on board is paramount. Over-the-air software management support is already preintegrated in Nova Orin.

Learn more about Nova Orin and the complete Isaac for AMR platform.


The post NVIDIA Unveils Isaac Nova Orin to Accelerate Development of Autonomous Mobile Robots appeared first on NVIDIA Blog.


Driving on Air: Lucid Group Builds Intelligent EVs on NVIDIA DRIVE

Lucid Group may be a newcomer to the electric vehicle market, but its entrance has been grand.

The electric automaker announced at GTC that its current and future fleets are built on NVIDIA DRIVE Hyperion for programmable, intelligent capabilities. By developing on the scalable, software-defined platform, Lucid ensures its vehicles are always at the cutting edge, receiving continuous improvements over the air.

The EV maker launched its first vehicle, the Lucid Air, late last year to widespread acclaim. The luxury sedan won MotorTrend’s 2022 Car of the Year, with industry-leading battery range and fast charging.

And Lucid isn’t stopping there — the automaker recently announced Project Gravity, a long-range electric SUV slated for launch in 2024.

A defining feature of the Lucid Air is the DreamDrive Pro advanced driver assistance system — standard on Dream Edition and Grand Touring trims, optional on other models — which leverages the high-performance compute of NVIDIA DRIVE to provide a seamless automated driving experience.

Future-Ready Intelligence

DreamDrive Pro is designed to continuously improve via over-the-air software updates, with the scalable and high-performance AI compute of NVIDIA DRIVE at the center of the system.

It uses a rich suite of 14 cameras, one lidar, five radars and 12 ultrasonic sensors for robust automated driving and intelligent cockpit features.

In addition to a diversity of sensors, Lucid’s dual-rail power system and proprietary Ethernet Ring offer a high degree of redundancy for key systems, such as braking and steering.

“The seamless integration of the software-defined NVIDIA DRIVE platform provides a powerful basis for Lucid to further enhance what DreamDrive can do in the future — all of which can be delivered to vehicles over the air,” said Mike Bell, Senior Vice President of Digital, Lucid Group.

Together, Lucid and NVIDIA will support these intelligent vehicles, enhancing the customer experience with new functions throughout the life of the car.

A DreamDrive Come True

Lucid plans to build on its success in deploying industry-leading electric vehicles, continuing to develop on NVIDIA DRIVE for future generations.

By starting with a programmable, high-performance compute architecture in the Lucid Air, the automaker can take advantage of the scalability of NVIDIA DRIVE and always incorporate the latest AI technology as it expands with more models.

The ability to continuously deliver innovative and exciting features to its vehicles will have Lucid customers driving on air.

The post Driving on Air: Lucid Group Builds Intelligent EVs on NVIDIA DRIVE appeared first on NVIDIA Blog.
