NVIDIA Omniverse Accelerates Game Content Creation With Generative AI Services and Game Engine Connectors

Powerful AI technologies are making a massive impact in 3D content creation and game development. Whether creating realistic characters that show emotion or turning simple text into imagery, AI tools are becoming fundamental to developer workflows — and this is just the start.

At NVIDIA GTC and the Game Developers Conference (GDC), learn how the NVIDIA Omniverse platform for creating and operating metaverse applications is expanding with new Connectors and generative AI services for game developers.

Part of the excitement around generative AI is because of its ability to capture the creator’s intent. The technology learns the underlying patterns and structures of data, and uses that to generate new content, such as images, audio, code, text, 3D models and more.

Announced today, the NVIDIA AI Foundations cloud services enable users to build, refine and operate custom large language models (LLMs) and generative AI trained with their proprietary data for their domain-specific tasks.

And through NVIDIA Omniverse, developers can get their first taste of using generative AI technology to enhance game creation and accelerate development pipelines with the Omniverse Audio2Face app.

Accelerating 3D Content With Generative AI

Specialized generative AI tools can boost creator productivity, even for users who don’t have extensive technical skills. Anyone can use generative AI to bring their creative ideas to life, producing high-quality, highly iterative experiences — all in a fraction of the time and cost of traditional game development.

For example, NVIDIA Omniverse Avatar Cloud Engine (ACE) offers the fastest, most versatile solution for bringing interactive avatars to life at scale. Game developers could leverage ACE to seamlessly integrate NVIDIA AI into their applications, including NVIDIA Riva for creating expressive character voices using speech and translation AI, or Omniverse Audio2Face and Live Portrait for AI-powered 2D and 3D character animation.

Today, game developers are already taking advantage of Audio2Face, where artists are more efficiently animating secondary characters without a tedious manual process. The app’s latest release brings major quality, usability and performance updates, including headless mode and a REST API — enabling developers to run the app and process numerous audio files from multiple users in the data center.
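
To make the new headless workflow concrete, here is a minimal sketch of a client driving a local Audio2Face instance over its REST interface with Python’s requests library. The port and endpoint paths are illustrative assumptions, not the documented API surface; check the Audio2Face release notes for the actual routes your installation exposes.

```python
# Illustrative sketch only: the port and routes below are assumptions,
# not the documented Audio2Face REST API.
import requests

A2F_URL = "http://localhost:8011"  # assumed address of a local headless instance

def animate_clip(audio_path: str, player: str) -> None:
    # Point an Audio2Face player at an audio file, then play it so the
    # network generates facial animation for the clip.
    for route, payload in [
        ("/A2F/Player/SetTrack", {"a2f_player": player, "file_name": audio_path}),
        ("/A2F/Player/Play", {"a2f_player": player}),
    ]:
        requests.post(A2F_URL + route, json=payload, timeout=30).raise_for_status()

# Queue several clips, as a data center deployment serving many users might.
for clip in ["guard_01.wav", "guard_02.wav"]:
    animate_clip(clip, "/World/audio2face/Player")
```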

Mandarin Chinese language support can now be previewed in Audio2Face, along with improved lip-sync quality, more robust multi-language support and a new pretrained female model. The world’s first fully real-time, ray-traced subsurface scattering shader is also showcased in a demo featuring Diana, a new digital human model.

GSC Game World, one of Europe’s leading game developers, is adopting Omniverse Audio2Face in its upcoming game, S.T.A.L.K.E.R. 2: Heart of Chornobyl. Join the NVIDIA and GSC session at GDC to learn how developers are implementing generative AI technology in Omniverse.

A scene from “S.T.A.L.K.E.R. 2: Heart of Chornobyl.”

Fallen Leaf, an indie game developer, is also using Omniverse Audio2Face for character facial animation in Fort Solis, a third-person sci-fi thriller game that takes place on Mars.

New generative AI services such as NVIDIA Picasso, announced at GTC, preview the future of building and deploying assets for game production pipelines. Omniverse is opening portals to enrich workflows with generative AI tools powered by NVIDIA and its partners, and the momentum around unifying the game asset pipeline is growing.

Unifying Game Asset Pipelines With Universal Scene Description

Based on the Universal Scene Description (USD) framework, NVIDIA Omniverse is the connecting fabric that helps creators and developers build interoperability between their favorite tools — like Autodesk Maya, Autodesk 3ds Max and Adobe Substance 3D Painter — or make their own custom applications.

And with USD — an open, extensible framework and ecosystem for composing, simulating and collaborating within 3D worlds — developers can achieve non-destructive, collaborative workflows when creating scenes, as well as simplify asset aggregation so content creation teams can iterate faster.
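
To make the layering idea concrete, here is a minimal sketch using USD’s open-source Python API: one artist’s edits live in a separate layer that overrides a shared base scene without ever modifying it on disk. File names and prim paths are illustrative.

```python
# Non-destructive layering with USD's pxr Python bindings.
from pxr import Usd, UsdGeom, Gf

# A shared base scene, as an environment team might publish it.
base = Usd.Stage.CreateNew("base_scene.usda")
UsdGeom.Xform.Define(base, "/World/Tree")
base.GetRootLayer().Save()

# Another artist sublayers the base scene and repositions the tree in a
# personal layer; base_scene.usda itself is never touched.
shot = Usd.Stage.CreateNew("shot_overrides.usda")
shot.GetRootLayer().subLayerPaths.append("base_scene.usda")
tree_override = UsdGeom.Xform(shot.OverridePrim("/World/Tree"))
tree_override.AddTranslateOp().Set(Gf.Vec3d(10, 0, 5))
shot.GetRootLayer().Save()
```

Because the override is just another opinion in the composed stage, removing the personal layer instantly restores the published scene — which is what makes the workflow non-destructive.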

Image courtesy of Tencent Games.

Tencent Games is adopting USD workflows based on Omniverse to streamline its content creation pipelines. To create vast worlds for every level of a game, the artists at Tencent use design tools such as Autodesk Maya, SideFX Houdini and Unreal Engine to produce millions of trees, buildings and other assets that enrich their scenes. The technical artists, looking to speed up this process, developed a proprietary Unreal Engine workflow powered by OmniObjects.

With USD, Tencent Games’ teams saw the opportunity to streamline and seamlessly connect their workflows. Building on Omniverse as the platform for developing USD workflows, the artists at Tencent no longer need to install a plug-in for each piece of software they use; a single USD plug-in enables interoperability across all their favorite tools. Learn more about Tencent Games by joining this session at GDC.

New and updated Omniverse Connectors for game engines are also now available.

The open-beta Omniverse Connector for Unity helps users of Omniverse and Unity collaborate on projects. Developed by NVIDIA, the Connector delivers USD support alongside Unity workflows, enabling Unity users to take advantage of interoperable pipelines. It offers Omniverse Nucleus connection and browsing; export of USD geometry, lights and cameras; Material Definition Language support; and previews for USD materials. Early features also include physics export, USD import and unidirectional live sync.

And with the Unreal Engine Connector’s latest release, Omniverse users can now use Unreal Engine’s USD import utilities to add skeletal mesh blend shape importing, and Python USD bindings to access stages on Omniverse Nucleus. The latest release also delivers improvements in import, export and live workflows, as well as updated software development kits.

Learn more about these latest technologies by joining NVIDIA at GDC.

And catch up on all the groundbreaking announcements in generative AI and the metaverse by watching the NVIDIA GTC keynote.

Follow NVIDIA Omniverse on Instagram, Medium, Twitter and YouTube for additional resources and inspiration. Check out the Omniverse forums, and join our Discord server and Twitch channel to chat with the community.

BMW Group Starts Global Rollout of NVIDIA Omniverse

BMW Group is at the forefront of a key new manufacturing trend — going digital-first by using the virtual world to optimize layouts, robotics and logistics systems years before production starts.

The automaker announced today with NVIDIA at GTC that it’s expanding its use of the NVIDIA Omniverse platform for building and operating industrial metaverse applications across its production network around the world, including the planned electric vehicle plant in Debrecen, Hungary, which is slated to begin operations in 2025.

In his GTC keynote, NVIDIA founder and CEO Jensen Huang shared a demo in which he was joined by BMW Group’s Milan Nedeljković, member of the board of management, to officially open the automaker’s first entirely virtual factory, powered by NVIDIA Omniverse.

“We are excited and incredibly proud of the progress BMW has made with Omniverse. The partnership will continue to push the frontiers of virtual integration and virtual tooling for the next generation of smart-connected factories around the world,” Huang said during the GTC keynote.

Omniverse — the culmination of over 25 years of NVIDIA graphics, accelerated computing, simulation and AI technologies — enables manufacturing companies to plan and optimize multibillion-dollar factory projects entirely virtually. This means they can get to production faster and operate more efficiently, improving time to market, digitalization and sustainability.

The keynote demo highlights a virtual planning session for BMW’s Debrecen EV plant. With Omniverse, the BMW team can aggregate data into massive, high-performance models, connect their domain-specific software tools and enable multi-user live collaboration across locations. All of this is possible from any location, on any device.

Starting to work in the virtual factory two years before it opens enables the BMW Group to ensure smooth operation and optimal efficiency.

Virtual Integration for Real-World Efficiencies  

BMW Group’s virtual Debrecen plant illustrates the power and agility of planning AI-driven industrial manufacturing plants with the Omniverse platform.

In the EV factory demo, Nedeljković invites Huang into a live planning session in which the BMW team needs to fit a robot into a constrained floor space. The team solves the problem on the fly, with logistics and production planners able to visualize options and decide on the ideal placement.

“This is transformative — we can design, build and test completely in a virtual world,” said Nedeljković.

It’s a glimpse into the future of BMW Group’s digital transformation journey. It’s also a blueprint for reducing risk and ensuring success before committing to massive construction projects and capital expenditures.

This kind of digital transformation pays off. Change orders and flow reoptimizations in existing facilities are extremely costly and cause production downtime, so the ability to optimize in the virtual world first largely eliminates such costs.

BMW Group Transforming Production Worldwide

BMW Group’s production network is poised to benefit from the digital transformation opportunities brought by Omniverse.

With factories and factory planners all over the world, BMW has a complex planning process. The automaker uses many software tools and processes to connect people across geographies and time zones, which comes with limitations.

With Omniverse, a development platform based on Universal Scene Description (USD), a 3D language that creates interoperability between software suites, BMW is able to bridge existing software and data repositories from leading industrial computer-aided design and engineering tools such as Siemens Process Simulate, Autodesk Revit, and Bentley Systems MicroStation.

With this unified view, BMW is powering its internal teams and external partners to collaborate and share knowledge and data from existing factories to help in the planning of new ones.

Additionally, the BMW team is developing a suite of custom applications with Omniverse, including a new application called Factory Explorer, based on Omniverse USD Composer, a customizable foundation application of the Omniverse platform. BMW used core components of USD Composer and added custom-built extensions tailored to its factory-planning teams’ needs, including finding, constructing, navigating, and analyzing factory data.
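
For a sense of what that customization looks like, below is a minimal sketch of an Omniverse Kit extension skeleton of the kind teams layer onto USD Composer. The window and search field are illustrative placeholders, not BMW’s actual Factory Explorer code.

```python
# Skeleton of an Omniverse Kit extension that adds a small tool window.
import omni.ext
import omni.ui as ui

class FactoryToolsExtension(omni.ext.IExt):
    # Kit calls on_startup when the extension is enabled in Composer.
    def on_startup(self, ext_id):
        self._window = ui.Window("Factory Tools", width=300, height=120)
        with self._window.frame:
            with ui.VStack():
                ui.Label("Search factory data")
                self._query = ui.StringField()
                ui.Button("Find", clicked_fn=self._on_find)

    def _on_find(self):
        # Placeholder: a real tool would query the open USD stage here.
        print("Searching for:", self._query.model.get_value_as_string())

    def on_shutdown(self):
        self._window = None
```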

Omniverse Platform Accelerates Digital Twin Collaboration

The Omniverse platform enables BMW teams to collaborate across virtual factories from everywhere. A unified approach to data, allowing global changes in real time, lets BMW share updates across its teams.

With these new capabilities, BMW can now validate and test entirely in a virtual world, accelerating its time to production and improving efficiency across all of its plants.

To learn more about the latest in digitalization, watch NVIDIA founder and CEO Jensen Huang’s GTC keynote and the GTC sessions featuring speakers from BMW.

Learn more about NVIDIA Omniverse

NVIDIA and Partners Ecosystem Release New Omniverse Connections, Expanding Foundation for Artists and Developers to Advance 3D Workflows

Developers and creators can better realize the massive potential of generative AI, simulation and the industrial metaverse with new Omniverse Connectors and other updates to NVIDIA Omniverse, a platform for creating and operating metaverse applications.

Omniverse Cloud, a platform-as-a-service unveiled today at NVIDIA GTC, equips users with a range of simulation and generative AI capabilities to easily build and deploy industrial metaverse applications.

New Omniverse Connectors and applications developed by third parties enable enterprises across the globe to push the limits of industrial digitalization.

Omniverse Ecosystem Expansion

Omniverse enhances how developers and professionals create, design and deploy massive virtual worlds, AI-powered digital humans and 3D assets.

Its newest additions include:

  • New Omniverse Connectors: Elevating connected workflows, new Omniverse Connectors are now available for the Siemens Xcelerator portfolio (including Siemens Teamcenter, Siemens NX and Siemens Process Simulate), as well as Blender, Cesium, Emulate3D by Rockwell Automation, Unity and Vectorworks, linking more of the world’s most advanced applications through the Universal Scene Description (USD) framework. Azure Digital Twin, Blackshark.ai, FlexSim and NavVis Omniverse Connectors are coming soon.
  • SimReady 3D assets: Over 1,000 new SimReady assets enable easier AI and industrial 3D workflows. KUKA, a leading supplier of intelligent automation solutions, is working with NVIDIA to evaluate adoption of the new SimReady specification to make customer simulation easier than ever.
  • Synthetic data generation: Lexset and Siemens SynthAI are both using the Omniverse Replicator software development kit to enable computer-vision-aided industrial inspection. Datagen and Synthesis AI are using the SDK to create synthetic digital humans for AI training. And Deloitte is providing synthetic data generation services using Omniverse Replicator for customers across domains ranging from manufacturing to telecom.

Available now are LumenRT for NVIDIA Omniverse, developed by Bentley Systems, which automatically synchronizes changes to visualization workflows for infrastructure digital twins, as well as applications developed by SyncTwin.

Also available now is Aireal’s OmniStream, a web-embeddable and cloud-based extended reality digital twin platform that allows builders to give photorealistic 3D virtual tours to their buyers. Aireal’s Spaces, a visualization tool that enables automatic generation of home interior design, is coming soon.

And the disguise platform now integrates with NVIDIA Omniverse, connecting the virtual production pipeline to allow for easier, quicker changes, enhanced content creation and improved media and entertainment workflows.

Run Omniverse Everywhere

NVIDIA also introduced systems and services making Omniverse more powerful and easier to access.

Next-generation NVIDIA RTX workstations are powered by NVIDIA Ada Lovelace GPUs, NVIDIA ConnectX-6 Dx SmartNICs and Intel Xeon processors.

The newly announced RTX 5000 Ada generation laptop GPU enables professionals to access Omniverse and industrial metaverse workloads in the office, at home or on the go.

Plus, NVIDIA introduced the third generation of OVX, a computing system for large-scale digital twins running within NVIDIA Omniverse Enterprise, powered by NVIDIA L40 GPUs and BlueField-3 DPUs.

Omniverse Cloud will be available to global automotive companies, enabling them to realize digitalization across their industrial lifecycles from start to finish. Microsoft Azure is the first global cloud service provider to deploy the platform-as-a-service.

Learn more about Omniverse Cloud in the demo and our press release.

Customers Driving Innovation in Omniverse

Hundreds of enterprises are using Omniverse to transform their industrial lifecycles through digitalization, improving how their teams design, develop and deploy operations.

In his GTC keynote, NVIDIA founder and CEO Jensen Huang showcased how Lucid Motors is tapping Omniverse and USD workflows to enable automotive digitalization projects.

He also highlighted BMW Group’s use of Omniverse to build and deploy its upcoming electric vehicle factory in Debrecen, Hungary.

Core Updates Coming to Omniverse

Huang also gave a preview of the next Omniverse release coming this spring, which includes:

Updates to Omniverse apps that enable developers and enterprise customers to build on foundation applications to suit their specific workflows:

  • NVIDIA USD Composer (formerly Omniverse Create) — a customizable foundation application for designers and creators to assemble large-scale, USD-based datasets and compose industrial virtual worlds.
  • NVIDIA USD Presenter (formerly Omniverse View) — a customizable foundation application for showcasing and reviewing USD projects interactively and collaboratively.
  • NVIDIA USD-GDN Publisher — a suite of cloud services that enables developers and service providers to easily build, publish and stream advanced, interactive, USD-based 3D experiences to nearly any device in any location.

Improved developer experience — The new public extension registry enables users to receive automated updates to extensions. New configurator templates and workflows, as well as an NVIDIA Warp kernel node for OmniGraph, will enable zero-friction developer workflows for GPU-based coding.

Next-level rendering and materials — Omniverse is offering for the first time a real-time, ray-traced subsurface-scattering shader, enabling unprecedented realism in skin for digital humans. The latest update to Universal Material Mapper lets users seamlessly bring in material libraries from third-party applications, preserving material structure and full editing capability.

Groundbreaking performance — In a major development for massive-scene performance, USD runtime data-transfer technology provides an efficient method to store and move runtime data between modules. The scene optimizer lets users run optimizations at the USD level, converting large scenes into more lightweight representations for improved interactivity.

AI training capabilities — Automatic domain randomization and population-based training make complex robotic training significantly easier for autonomous robotics development.

Generative AI — A new text-to-materials extension allows users to automatically generate high-quality materials solely from a text prompt, and further updates add text-to-code generation tools. Additionally, updates to the Audio2Face app include headless mode, a REST application programming interface, improved lip-sync quality and more robust multi-language support, including for Mandarin.

Developers can also use AI-generated inputs from technology such as ChatGPT to provide data to Omniverse extensions like Camera Studio, which generates and customizes cameras in Omniverse using data created in ChatGPT.

Register free for GTC, running through Thursday, March 23, to attend the GTC keynote and Omniverse sessions.

Get started with NVIDIA Omniverse by downloading the standard license free, or learn how Omniverse Enterprise can connect your team. Stay up-to-date on the platform by subscribing to the newsletter, and following NVIDIA Omniverse on Instagram, Medium, and Twitter. For resources, check out our forums, Discord server, Twitch and YouTube channels.

NVIDIA Expands Isaac Software Access and Jetson Platform Availability, Accelerating Robotics From Cloud to Edge

NVIDIA announced today at GTC that Omniverse Cloud will be hosted on Microsoft Azure, increasing access to Isaac Sim, the company’s platform for developing and managing AI-based robots.

The company also said that a full lineup of Jetson Orin modules is now available, offering a performance leap for edge AI and robotics applications.

“The world’s largest industries make physical things, but they want to build them digitally,” said NVIDIA founder and CEO Jensen Huang during the GTC keynote. “Omniverse is a platform for industrial digitalization that bridges digital and physical.”

Isaac Sim on Omniverse Enterprise for Virtual Simulations

Building robots in the real world requires creating datasets from scratch, which is time consuming and expensive and slows deployments.

That’s why developers are turning to synthetic data generation (SDG), pretrained AI models, transfer learning and robotics simulation to drive down costs and accelerate deployment timelines.

The Omniverse Cloud platform-as-a-service, which runs on NVIDIA OVX servers, puts advanced capabilities into the hands of Azure developers everywhere. It enables enterprises to scale robotics simulation workloads, such as SDG, and provides continuous integration and continuous delivery, letting DevOps teams work on code changes in a shared repository while working with Isaac Sim.

Isaac Sim is a robotics simulation application and SDG tool that drives photorealistic, physically accurate virtual environments. Isaac Sim, powered by the NVIDIA Omniverse platform, enables global teams to remotely collaborate to build, train, simulate, validate and deploy robots.

Making Isaac Sim accessible in the cloud allows teams to work together more effectively with access to the latest robotics tools and software development kits. Omniverse Cloud on Azure gives enterprises another cloud option, alongside the existing methods of running Isaac Sim in self-managed containers, on virtual workstations or through fully managed services such as AWS RoboMaker.

And with access to Omniverse Replicator, an SDG engine in Isaac Sim, engineers can build production-quality synthetic datasets to train robust deep learning perception models.
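
As a rough sketch of what Replicator scripting looks like, the snippet below randomizes an object’s pose each frame and writes labeled images for perception training. It follows the omni.replicator.core Python API at the time of writing; the asset path is a placeholder.

```python
# Synthetic data generation sketch for Isaac Sim's script editor.
import omni.replicator.core as rep

camera = rep.create.camera(position=(0, 0, 5))
render_product = rep.create.render_product(camera, (1024, 1024))
crate = rep.create.from_usd("/path/to/crate.usd")  # placeholder asset

# Randomize the crate's position and yaw on every captured frame.
with rep.trigger.on_frame(num_frames=100):
    with crate:
        rep.modify.pose(
            position=rep.distribution.uniform((-2, -2, 0), (2, 2, 0)),
            rotation=rep.distribution.uniform((0, 0, 0), (0, 0, 360)),
        )

# Write RGB frames plus tight 2D bounding-box labels to disk.
writer = rep.WriterRegistry.get("BasicWriter")
writer.initialize(output_dir="_out_sdg", rgb=True, bounding_box_2d_tight=True)
writer.attach([render_product])
```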

Amazon uses Omniverse to automate, optimize and plan its autonomous warehouses with digital twin simulations before deployment into the real world. With Isaac Sim, Amazon Robotics is also improving the capabilities of Proteus, its latest autonomous mobile robot (AMR). This helps the online retail giant fulfill thousands of orders in a cost- and time-efficient manner.

Working with automation company idealworks, BMW Group uses Isaac Sim in Omniverse to generate synthetic data and run scenarios for testing and training AMRs and factory robots.

NVIDIA is developing across the AI tools spectrum — from simulation in the cloud with Isaac Sim to inference at the edge with the Jetson platform — accelerating robotics adoption across industries.

Jetson Orin for Efficient, High-Performance Edge AI and Robotics 

NVIDIA Jetson Orin-based modules are now available in production to support a complete range of edge AI and robotics applications. The lineup spans from the Jetson Orin Nano, which provides up to 40 trillion operations per second (TOPS) of AI performance in the smallest Jetson module, to the Jetson AGX Orin, which delivers 275 TOPS for advanced autonomous machines.

The new Jetson Orin Nano Developer Kit delivers 80x the performance when compared with the previous-generation Jetson Nano, enabling developers to run advanced transformer and robotics models. And with 50x the performance per watt, developers getting started with the Jetson Orin Nano modules can build and deploy power-efficient, entry-level AI-powered robots, smart drones, intelligent vision systems and more.

Application-specific frameworks like NVIDIA Isaac ROS and DeepStream, which run on the Jetson platform, are closely integrated with cloud-based frameworks like Isaac Sim on Omniverse and NVIDIA Metropolis. And using the latest NVIDIA TAO Toolkit for fine-tuning pretrained AI models from the NVIDIA NGC catalog reduces time to deployment for developers.

More than 1 million developers and over 6,000 customers have chosen the NVIDIA Jetson platform, including Amazon Web Services, Canon, Cisco, Hyundai Robotics, JD.com, John Deere, Komatsu, Medtronic, Meituan, Microsoft Azure, Teradyne and TK Elevator.

Companies adopting the new Orin-based modules include Hyundai Doosan Infracore, Robotis, Seyeon Tech, Skydio, Trimble, Verdant and Zipline.

More than 70 Jetson ecosystem partners are offering Orin-based solutions, with a wide range of support from hardware, AI software and application design services to sensors, connectivity and developer tools.

The full lineup of Jetson Orin-based production modules is now available. The Jetson Orin Nano Developer Kit will start shipping in April.

Learn more about NVIDIA Isaac Sim, Jetson Orin, Omniverse Enterprise and Metropolis.

AI Speeds Insurance Claims Estimates for Better Policyholder Experiences

CCC Intelligent Solutions (CCC) has become the first company in the auto insurance industry to deliver an AI-powered repair estimating solution, called CCC Estimate – STP, short for straight-through processing.

The Chicago-based auto-claims technology powerhouse uses AI, insurer-driven rules and CCC’s vast ecosystem to deliver repair estimates in seconds, instead of days. It’s a technological feat considering there are thousands of vehicle makes and models on the road, and countless repair permutations.

The company’s commitment to AI spans many years, with its first AI solutions hitting the market more than five years ago. Today, it’s working to bring AI and intelligent experiences to key facets of claims and mobility for its 30,000 customers, who process more than 16 million claims annually using CCC solutions.

“Our data scientists play a crucial role in creating new solutions, and the ability to build models, experiment and easily integrate the model into our AI workflows is key,” said Reza Rooholamini, chief scientific officer at CCC.

CCC has four decades of expertise in automotive claims and collects millions of unstructured and structured automotive-claim data points every year. The combination of industry experience and raw data, however, is just the starting point for CCC’s efforts. The company runs a 100% cloud production environment, providing customers with a flexible platform for continuous innovation.

As a market leader, CCC regularly reports on AI adoption among its customers to track progress. According to its 2023 AI Adoption report, more than 14 million unique claims were processed using CCC’s computer vision AI through 2022, and the company saw a 60% year-over-year increase in the application of advanced AI for claims processing.

And AI isn’t just being used to process more claims; it’s also informing more decisions across the entire claims management experience. In fact, the number of claims processed with four or more of CCC’s AI applications has more than doubled year over year.

CCC has built an end-to-end hybrid-cloud AI development and training pipeline to support its continuous innovation. This infrastructure uses over 150 NVIDIA A100 Tensor Core GPUs, including NVIDIA DGX systems on premises and additional resources within NVIDIA DGX Cloud.

The CCC development teams are using DGX Cloud to supplement on-prem capacity, support supercomputing demand spikes and accelerate AI model development overall.

“The AI pipeline we’ve built enables us to unleash all kinds of innovations,” said Neda Hantehzadeh, director of data science at CCC.

With 25-30% of its data scientists’ and engineering teams’ time dedicated to experimentation, coupled with massive datasets that grow each day, CCC needed a more scalable, hybrid multi-cloud training environment.

Using its AI pipeline, CCC launched CCC Estimate – STP, which can deliver a detailed line-level estimate of the collision repair cost based on insurer rules in seconds using AI and just a few pictures of vehicle damage taken from a smartphone. Traditional methods can take several days.

This saves time for adjusters, freeing them up for more complex work. This digitalized estimation process helps elevate the customer experience as well as lower processing costs and is currently being used by leading insurance companies across the U.S.

But the results are broader. Using the NVIDIA Base Command Platform, integrated with its development pipeline for training-job orchestration and data management, the CCC team has realized improved productivity. Data scientists can run experiments 2x faster, which means more learning, more innovation and faster solution development.

“We run some experiments on premises on NVIDIA DGX systems, but we may have spikes where we want to add, for example, 10 million more data points and do another run,” Hantehzadeh said. “If we need additional capacity, we can switch to DGX Cloud. Base Command Platform makes this process seamless.”

CCC plans to continue taking its investment to the leading edge of AI development, injecting AI and STP into different channels and products across the property and casualty insurance economy.

Learn more about NVIDIA DGX Cloud and NVIDIA Base Command Platform.

Moving Pictures: NVIDIA, Getty Images Collaborate on Generative AI

As a sports commentator for a professional lacrosse team, Grant Farhall knows the value of having the right teammates.

As the chief product officer for Getty Images, a global visual-content creator and marketplace, he believes the collaboration between his company and NVIDIA is an excellent pairing for taking generative AI to the next level.

The companies aim to develop two generative AI models using NVIDIA Picasso, part of the new NVIDIA AI Foundations cloud services. Users could employ the models to create a custom image or video in seconds, simply by typing in a concept.

“With our high quality and often unique imagery and videos, this collaboration will give our customers the ability to create a greater variety of visuals than ever before, helping creatives and non-creatives alike fuel visual storytelling,” Farhall said.

Getty Images is a unique partner, not only for its stunning images and video, but also for its rich metadata with appropriate rights. Its creative teams and researchers bring a wealth of expertise that can deliver impactful outputs.

For artists, generative AI adds a new tool that expands their canvas. For content creators, it’s an opportunity to create a custom visual tailored to a brand or business they’re building.

“More often than not, it’s a visual that cuts through the noise of a busy world to capture your attention, and being able to stand out from the crowd is crucial for businesses of all shapes and sizes,” Farhall said.

Building Responsible AI

But, as in lacrosse, you need to play by the rules.

The models will be trained on Getty Images’ fully licensed content, and revenue generated from the models will provide royalties to content creators.

“Both companies want to develop these tools in a responsible way that returns benefits to creators and doesn’t pass risks on to customers, and this collaboration is testament to the fact that’s possible,” he said.

A Time-Tested Relationship

It’s not the first inning for this collaboration.

“We’ve been fostering and growing a relationship for some time — NVIDIA brings the tech expertise and talent, and we bring the high quality and unique content and marketplace,” said Farhall.

The technology, values and connections are catalysts for experiences that wow creators and users. It’s a feeling Farhall shares, sitting in front of his mic on a Saturday night.

“There’s an adrenaline rush when the live action of a game becomes your singular focus and you’re just in the moment,” he said.

And by training a custom model with NVIDIA Picasso, Getty Images and NVIDIA aim to help storytellers everywhere create more moments that perfectly capture their audiences’ attention.

To learn more about what NVIDIA is doing in generative AI and beyond, watch company founder and CEO Jensen Huang’s GTC keynote.

Image at top courtesy Roberto Moiola/Sysaworld/Getty Images.

Mind the Gap: Large Language Models Get Smarter With Enterprise Data

Large language models available today are incredibly knowledgeable, but act like time capsules — the information they capture is limited to the data available when they were first trained. If trained a year ago, for example, an LLM powering an enterprise’s AI chatbot won’t know about the latest products and services at the business.

With the NVIDIA NeMo service, part of the newly announced NVIDIA AI Foundations family of cloud services, enterprises can close the gap by augmenting their LLMs with proprietary data, enabling them to frequently update a model’s knowledge base without having to further train it — or start from scratch.

This new functionality in the NeMo service enables large language models to retrieve accurate information from proprietary data sources and generate conversational, human-like answers to user queries. With this capability, enterprises can use NeMo to customize large language models with regularly updated, domain-specific knowledge for their applications.

This can help enterprises keep up with a constantly changing landscape across inventory, services and more, unlocking capabilities such as highly accurate AI chatbots, enterprise search engines and market intelligence tools.

NeMo includes the ability to cite sources for the language model’s responses, increasing user trust in the output. Developers using NeMo can also set up guardrails to define the AI’s area of expertise, providing better control over the generated responses.

Quantiphi — an AI-first digital engineering solutions and platforms company and one of NVIDIA’s service delivery partners — is working with NeMo to build a modular generative AI solution called baioniq that will help enterprises build customized LLMs to boost worker productivity. Its developer teams are creating tools that let users search up-to-date information across unstructured text, images and tables in seconds.

Bringing Dark Data Into the Light

Analysts estimate that around two-thirds of enterprise data is untapped. This so-called dark data is unused partly because it’s difficult to glean meaningful insights from vast troves of information. Now, with NeMo, businesses can retrieve insights from this data using natural language queries.

NeMo can help enterprises build models that can learn from and react to an evolving knowledge base — independent of the dataset that the model was originally trained on. Rather than needing to retrain an LLM to account for new information, NeMo can tap enterprise data sources for up-to-date details. Additional information can be added to expand the model’s knowledge base without modifying its core capabilities of language processing and text generation.
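
NVIDIA hasn’t detailed the NeMo service’s programming interface here, but the underlying retrieval pattern is easy to illustrate. In the toy sketch below (with a deliberately crude bag-of-words embedding standing in for a trained encoder), the most relevant enterprise document is looked up at query time and prepended to the prompt, so new knowledge reaches the model without any retraining.

```python
# Toy retrieval-augmented generation: illustrative only, not the NeMo API.
import numpy as np

documents = [
    "The roof rack accessory ships in April 2023 and costs $349.",
    "Support hours are 8 a.m. to 6 p.m. Pacific, Monday through Friday.",
    "Returns are accepted within 30 days with proof of purchase.",
]

vocab = sorted({w for d in documents for w in d.lower().split()})

def embed(text):
    # Bag-of-words vector; a production system would use a trained encoder.
    words = text.lower().split()
    return np.array([words.count(w) for w in vocab], dtype=float)

doc_vectors = np.stack([embed(d) for d in documents])

def retrieve(query, k=1):
    # Rank documents by cosine similarity to the query.
    q = embed(query)
    scores = doc_vectors @ q / (
        np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q) + 1e-9
    )
    return [documents[i] for i in np.argsort(scores)[::-1][:k]]

query = "When does the roof rack ship?"
context = "\n".join(retrieve(query))
# In production this prompt goes to the customized LLM, which can also
# cite `context` as the source of its answer.
print(f"Answer using only this context:\n{context}\n\nQuestion: {query}")
```

Updating the knowledge base is then just a matter of adding documents to the index; the model’s weights never change.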

Enterprises can also use NeMo to build guardrails so that generative AI applications don’t provide opinions on topics outside their defined area of expertise.

Enabling a New Wave of Generative AI Applications for Enterprises

By customizing an LLM with business data, enterprises can make their AI applications agile and responsive to new developments. 

  • Chatbots: Many enterprises already use AI chatbots to power basic customer interactions on their websites. With NeMo, companies could build virtual subject-matter experts specific to their domains.
  • Customer service: Companies could update NeMo models with details about their latest products, helping live service representatives more easily answer customer questions with precise, up-to-date information.
  • Enterprise search: Businesses have a wealth of knowledge across the organization, including technical documentation, company policies and IT support articles. Employees could query a NeMo-powered internal search engine to retrieve information faster and more easily.
  • Market intelligence: The financial industry collects insights about global markets, public companies and economic trends. By connecting an LLM to a regularly updated database, investors and other experts could quickly identify useful details from a large set of information, such as regulatory documents, recordings of earnings calls or financial statements.

Enterprises interested in adding generative AI capabilities to their applications can apply for early access to the NeMo service.

Watch NVIDIA founder and CEO Jensen Huang discuss NVIDIA AI Foundations in the keynote address at NVIDIA GTC, running online through Thursday, March 23.

Green Light: NVIDIA Grace CPU Paves Fast Lane to Energy-Efficient Computing for Every Data Center

The results are in, and they point to a new era in energy-efficient computing.

In tests of real workloads, the NVIDIA Grace CPU Superchip scored 2x performance gains over x86 processors at the same power envelope across major data center CPU applications. That opens up a whole new set of opportunities.

It means data centers can handle twice as much peak traffic. They can slash their power bills by as much as half. They can pack more punch into the confined spaces at the edge of their networks — or any combination of the above.

Energy Efficiency, a Data Center Priority

Data center managers need these options to thrive in today’s energy-efficient era.

Moore’s law is effectively dead. Physics no longer lets engineers pack more transistors in the same space at the same power.

That’s why new x86 CPUs typically offer gains over prior generations of less than 30%. It’s also why a growing number of data centers are power capped.

With the added threat of global warming, data centers don’t have the luxury of expanding their power, but they still need to respond to the growing demands for computing.

Wanted: Same Power, More Performance

Compute demand is growing 10% a year in the U.S., and will double in the eight years from 2022-2030, according to a McKinsey study.

“Pressure to make data centers sustainable is therefore high, and some regulators and governments are imposing sustainability standards on newly built data centers,” it said.

With the end of Moore’s law, the data center’s progress in computing efficiency has stalled, according to a survey that McKinsey cited (see chart below).

Power efficiency gains have stalled in data centers, McKinsey said.

In today’s environment, the 2x gains NVIDIA Grace offers are the eye-popping equivalent of a multi-generational leap. It meets the requirements of today’s data center executives.

Zac Smith — the head of edge infrastructure at Equinix, a global service provider that manages more than 240 data centers — articulated these needs in an article about energy-efficient computing.

“The performance you get for the carbon impact you have is what we need to drive toward,” he said.

“We have 10,000 customers counting on us for help with this journey. They demand more data and more intelligence, often with AI, and they want it in a sustainable way,” he added.

A Trio of CPU Innovations

The Grace CPU delivers that efficient performance thanks to three innovations.

It uses an ultra-fast fabric to connect 72 Arm Neoverse V2 cores in a single die that sports 3.2 terabytes per second of fabric bisection bandwidth, a standard measure of throughput. Then it connects two of those dies in a superchip package with the NVIDIA NVLink-C2C interconnect, delivering 900 GB/s of bandwidth.

Finally, it’s the first data center CPU to use server-class LPDDR5X memory. That provides up to 50% more memory bandwidth at similar cost but one-eighth the power of typical server memory. And its compact size enables 2x the density of typical card-based memory designs.

Compared to current x86 CPUs, NVIDIA Grace is a simpler design that offers more bandwidth and uses less power.

The First Results Are In

NVIDIA engineers are running real data center workloads on Grace today.

They found that compared to the leading x86 CPUs in data centers using the same power footprint, Grace is:

  • 2.3x faster for microservices
  • 2x faster for memory-intensive data processing
  • 1.9x faster for computational fluid dynamics, used in many technical computing apps

Data centers usually have to wait two or more CPU generations to get these benefits, summarized in the chart below.

Net gains (in light green) are the product of server-to-server advances (in dark green) and the additional Grace servers that fit in the same x86 power envelope (middle bar), thanks to the energy efficiency of Grace.
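
To make that arithmetic concrete, here is a tiny worked example with placeholder numbers (the 1.4x and 1.6x figures are assumptions for illustration, not NVIDIA’s measurements):

```python
# Net data center gain = per-server speedup x extra servers at equal power.
per_server_speedup = 1.4   # assumed server-to-server advance
server_count_ratio = 1.6   # assumed extra Grace servers in the same power envelope
net_gain = per_server_speedup * server_count_ratio
print(f"Net gain at equal power: {net_gain:.1f}x")  # -> 2.2x
```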

Even before these results on working CPUs, users responded to the innovations in Grace.

The Los Alamos National Laboratory announced in May that it will use Grace in Venado, a 10-exaflop AI supercomputer that will advance the lab’s work in areas such as materials science and renewable energy. Meanwhile, data centers in Europe and Asia are evaluating Grace for their workloads.

NVIDIA Grace is sampling now, with production in the second half of the year. ASUS, Atos, GIGABYTE, Hewlett Packard Enterprise, QCT, Supermicro, Wistron and ZT Systems are building servers that use it.

Go Deep on Sustainable Computing

To dive into the details, read this whitepaper on the Grace architecture.

Learn more about sustainable computing from this session at NVIDIA GTC (March 20-23, free with registration): Three Strategies to Maximize Your Organization’s Sustainability and Success in an End-to-End AI World.

Read a whitepaper about the NVIDIA BlueField DPU to find out how to build energy-efficient networks.

And watch NVIDIA founder and CEO Jensen Huang’s GTC keynote to get the big picture.

NVIDIA Announces Microsoft, Tencent, Baidu Adopting CV-CUDA for Computer Vision AI

Microsoft, Tencent and Baidu are adopting NVIDIA CV-CUDA for computer vision AI.

NVIDIA CEO Jensen Huang highlighted work in content understanding, visual search and deep learning Tuesday as he announced the beta release for NVIDIA’s CV-CUDA — an open-source, GPU-accelerated library for computer vision at cloud scale.

“Eighty percent of internet traffic is video. User-generated video content is driving significant growth and consuming massive amounts of power,” said Huang in his keynote at NVIDIA’s GTC technology conference. “We should accelerate all video processing and reclaim the power.”

CV-CUDA promises to help companies across the world build and scale end-to-end, AI-based computer vision and image processing pipelines on GPUs.

Optimizing Internet-Scale Visual Computing With AI

The majority of internet traffic is video and image data, driving incredible scale in applications such as content creation, visual search and recommendation, and mapping.

These applications use a specialized, recurring set of computer vision and image-processing algorithms to process image and video data before and after they’re processed by neural networks.

Microsoft Bing’s visual search engine uses AI computer vision to search for images (dog food, for example) within images on the internet.

While neural networks are normally GPU accelerated, the computer vision and image processing algorithms that support them are often CPU bottlenecks in today’s AI applications.

CV-CUDA helps process 4x as many streams on a single GPU by transitioning the pre- and post-processing steps from CPU to GPU. In effect, it processes the same workloads at a quarter of the cloud-computing cost.

The CV-CUDA library provides developers more than 30 high-performance computer vision algorithms with native Python APIs and zero-copy integration with the PyTorch, TensorFlow2, ONNX and TensorRT machine learning frameworks.
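
As a sketch of what this looks like in practice, the snippet below wraps a GPU-resident batch of frames with CV-CUDA, then resizes and color-converts it before handing it back to PyTorch, all without a CPU round trip. Operator names follow the beta Python API and may change in later releases.

```python
# GPU-side preprocessing with CV-CUDA's beta Python API (names may change).
import torch
import cvcuda

# A batch of 8 HWC uint8 frames already on the GPU (e.g. from a decoder).
frames = torch.randint(0, 255, (8, 720, 1280, 3), dtype=torch.uint8, device="cuda")

cv_batch = cvcuda.as_tensor(frames, "NHWC")            # zero-copy wrap
resized = cvcuda.resize(cv_batch, (8, 224, 224, 3),    # model input size
                        cvcuda.Interp.LINEAR)
rgb = cvcuda.cvtcolor(resized, cvcuda.ColorConversion.BGR2RGB)

# Zero-copy back to PyTorch for the GPU-accelerated inference stage.
batch = torch.as_tensor(rgb.cuda(), device="cuda").permute(0, 3, 1, 2).float() / 255.0
```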

The result is higher throughput, reduced computing cost and a smaller carbon footprint for cloud AI businesses.

Global Adoption for Computer Vision AI

Adoption by industry leaders around the globe highlights the benefits and versatility of CV-CUDA for a growing number of large-scale visual applications. Companies with massive image processing workloads can save tens to hundreds of millions of dollars.

Microsoft is working to integrate CV-CUDA into Bing Visual Search, which lets users search the web using an image instead of text to find similar images, products and web pages.

In 2019, Microsoft shared at GTC how it uses NVIDIA technologies to bring speech recognition, intelligent answers, text-to-speech technology and object detection together seamlessly and in real time.

Tencent has deployed CV-CUDA to accelerate its ad creation and content understanding pipelines, which process more than 300,000 videos per day.

The Shenzhen-based multimedia conglomerate has achieved a 20% reduction in energy use and cost for image processing over its previous GPU-optimized pipelines.

And Beijing-based search giant Baidu is integrating CV-CUDA into FastDeploy, one of the open-source deployment toolkits of the PaddlePaddle deep learning framework, bringing seamless computer vision acceleration to developers in the open-source community.

From Content Creation to Automotive Use Cases

Applications for CV-CUDA are growing. More than 500 companies have reached out with over 100 use cases in just the first few months of the alpha release.

In content creation and e-commerce, image pipelines rely on pre- and post-processing operators to help recommender engines recognize, locate and curate content.

In mapping, video ingested from mapping survey vehicles requires pre- and post-processing operators to train neural networks in the cloud to identify infrastructure and road features.

In infrastructure applications for self-driving simulation and validation software, CV-CUDA enables GPU acceleration for algorithms that are already occurring in the vehicle, such as color conversion, distortion correction, convolution and bilateral filtering.

Looking to the future, generative AI is transforming the world of video content creation and curation, allowing creators to reach a global audience.

New York-based startup Runway has integrated CV-CUDA, alleviating a critical bottleneck in preprocessing high-resolution videos in their video object segmentation model.

Implementing CV-CUDA led to a 3.6x speedup, enabling Runway to optimize real-time, click-to-content responses across its suite of creation tools.

“For creators, every second it takes to bring an idea to life counts,” said Cristóbal Valenzuela, co-founder and CEO of Runway. “The difference CV-CUDA makes is incredibly meaningful for the millions of creators using our tools.”

To access CV-CUDA, visit the CV-CUDA GitHub.

Or learn more by checking out the GTC sessions featuring CV-CUDA. Registration is free.

NVIDIA CEO to Reveal What’s Next for AI at GTC

The secret’s out. Thanks to ChatGPT, everyone knows about the power of modern AI.

To find out what’s coming next, tune in to NVIDIA founder and CEO Jensen Huang’s keynote address at NVIDIA GTC on Tuesday, March 21, at 8 a.m. Pacific.

Huang will share his vision for the future of AI and how NVIDIA is accelerating it with breakthrough technologies and solutions. There couldn’t be a better time to get ready for what’s to come.

NVIDIA is a pioneer and leader in AI thanks to its powerful graphics processing units that have enabled new computing models like accelerated computing.

NVIDIA GPUs sparked the modern AI revolution by making deep neural networks faster and more efficient.

Today, NVIDIA GPUs power AI applications in every industry, from computer vision to natural language processing, from robotics to healthcare, and from gaming to chatbots.

GTC, which runs online March 20-23, is the conference for AI and the metaverse. It features more than 650 sessions on deep learning, computer vision, natural language processing, robotics, healthcare, gaming and more.

Speakers from Adobe, Amazon, Autodesk, Deloitte, Ford Motor, Google, IBM, Jaguar Land Rover, Lenovo, Meta, Netflix, Nike, OpenAI, Pfizer, Pixar, Subaru and more will all discuss their latest work.

Don’t miss out on talks from leaders such as Demis Hassabis of DeepMind, Valerie Taylor of Argonne National Laboratory, Scott Belsky of Adobe, Paul Debevec of Netflix and Thomas Schulthess of ETH Zurich, as well as a special fireside chat between Huang and Ilya Sutskever, co-founder of OpenAI, the creator of ChatGPT.

You can watch the keynote live or on demand. Register for free at https://www.nvidia.com/en-us/gtc/.

You can also join the conversation on social media using #GTC23.
