The generative AI revolution — with deep roots in visual computing — is amplifying human creativity even as accelerated computing promises significant gains in energy efficiency, NVIDIA founder and CEO Jensen Huang said Monday.
That makes this week’s SIGGRAPH professional graphics conference, in Denver, the logical venue to discuss what’s next.
“Everybody will have an AI assistant,” Huang said. “Every single company, every single job within the company, will have AI assistance.”
But even as generative AI amplifies human productivity, Huang said, the accelerated computing technology that underpins it is making computing more energy efficient.
“Accelerated computing helps you save so much energy, 20 times, 50 times, while doing the same processing,” Huang said. “The first thing we have to do, as a society, is accelerate every application we can: this reduces the amount of energy being used all over the world.”
The conversation followed a spate of announcements NVIDIA made earlier in the day.
NVIDIA introduced a new suite of NIM microservices tailored for diverse workflows, including OpenUSD, 3D modeling, physics, materials, robotics, industrial digital twins and physical AI.
These advancements aim to enhance developer capabilities, particularly with the integration of Hugging Face Inference-as-a-Service on DGX Cloud.
In addition, Shutterstock has launched a Generative 3D Service, while Getty Images has upgraded its offerings using NVIDIA Edify technology.
In the realm of AI and graphics, NVIDIA has revealed new OpenUSD NIM microservices and reference workflows designed for generative physical AI applications.
This includes a program for accelerating humanoid robotics development through new NIM microservices for robotics simulation and more.
Finally, WPP, the world’s largest advertising agency, is using Omniverse-driven generative AI to help The Coca-Cola Company drive brand authenticity, showcasing how NVIDIA’s AI advancements apply across industries.
Huang and WIRED senior writer Lauren Goode started their conversation by exploring how visual computing gave rise to everything from computer games to digital animation to GPU-accelerated computing and, most recently, generative AI powered by industrial-scale AI factories.
All these advancements build on one another. Robotics, for example, requires advanced AI and photorealistic virtual worlds where AI can be trained before being deployed into next-generation humanoid robots.
Huang explained that robotics requires three computers: one to train the AI, one to test the AI in a physically accurate simulation, and one within the robot itself.
“Just about every industry is going to be affected by this, whether it’s scientific computing trying to do a better job predicting the weather with a lot less energy, to augmenting and collaborating with creators to generate images, or generating virtual scenes for industrial visualization,” Huang said. “Robotics and self-driving cars are all going to be transformed by generative AI.”
Likewise, NVIDIA Omniverse systems — built around the OpenUSD standard — will also be key to harnessing generative AI to create assets that the world’s largest brands can use.
By pulling from brand assets that live in Omniverse, these systems can capture and replicate each brand’s carefully curated magic.
Finally, all these systems — visual computing, simulation and large language models — will come together to create digital humans who can help people interact with digital systems of all kinds.
“One of the things that we’re announcing here this week is the concept of digital agents, digital AIs that will augment every single job in the company,” Huang said.
“One of the most important use cases that people are discovering is customer service,” he added. “In the future, my guess is that it’s going to be human still, but AI in the loop.”
All of this, like any new tool, promises to amplify human productivity and creativity. “Imagine the stories that you’re going to be able to tell with these tools,” Huang said.