Generating content and code. Creating images and videos. Testing algorithms with synthetic data.
Generative AI is a force multiplier enabling leaps in productivity and creativity for nearly every industry, particularly transportation, where it’s streamlining workflows and driving new business.
Across the entire auto industry, companies are exploring generative AI to improve vehicle design, engineering, and manufacturing, as well as marketing and sales.
Beyond the automotive product lifecycle, generative AI is also enabling new breakthroughs in autonomous vehicle (AV) development. Such research areas include the use of neural radiance field (NeRF) technology to turn recorded sensor data into fully interactive 3D simulations. These digital twin environments, as well as synthetic data generation, can be used to develop, test and validate AVs at incredible scale.
Merge Ahead: Transformative Use Cases
“Generative AI, large language models and recommender systems are the digital engines of the modern economy,” NVIDIA founder and CEO Jensen Huang said.
Foundation models — like ChatGPT for text generation and Stable Diffusion for image generation — can support AI systems capable of multiple tasks. This unlocks many possibilities.
Much like when early iPhone app developers began using GPS, accelerometers and other sensors to create mobile applications, AI developers now can tap foundation models to build new experiences and capabilities.
Generative AI can help tie different data streams together, not just text to text or text to image, but also inputs and outputs like video or 3D. Using this powerful new computing model, a text prompt could return a physically accurate layout of an assembly plant.
Toyota, one of the world’s largest automakers, has developed a generative AI technique to ensure that early design sketches incorporate engineering parameters.
Meanwhile, Mercedes-Benz has demonstrated a ChatGPT-enabled voice assistant.
Other automotive industry players are also looking to generative AI to help accelerate design iterations and provide better results.
Designer and Artist Workflows Poised to Benefit
Currently, it takes designers and artists months of preparation and design reviews to progress from early concept ideation and sketching through to the development of full-scale models. The process is often hampered by incompatible tools, siloed data and serial workflows.
Artists often begin the design process by looking for “scrap,” or visual references, based on trends in automotive styling. They seek inspiration for design cues, pulling from image libraries based on keywords.
The process involves looking at vehicles across the industry, whether existing or historic. Then, with a great deal of human curation, a blend of popular designs and fresh inspiration based on a company’s styling emerges. That forms the basis for artists’ 2D hand-drawn sketches, which are then recreated as 3D models and clay prototypes.
This linear, time-consuming concept process is used for exterior parts like grilles, hoods and wheels, as well as interior aspects such as dashboards, seats, ergonomics and user interfaces.
To develop these 3D models, automotive styling teams work with engineers in tools like Autodesk Alias or Maya to create NURBS models, short for non-uniform rational B-splines. The resulting mathematical representations of 3D geometry capture the shapes from 2D drafts. The end deliverable is a 3D representation, the product of bespoke styling, design and engineering work, that can be used in computer-aided design applications to define surfaces.
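For readers unfamiliar with the math, the sketch below shows the core idea behind a NURBS curve: control points, per-point weights and a knot vector producing smooth, editable geometry. It’s a minimal illustration using NumPy and SciPy with made-up control points, not a stand-in for the rich surface modeling done in tools like Alias or Maya.

```python
# Minimal sketch of a NURBS (rational B-spline) curve, assuming NumPy and SciPy.
# Control points, weights and knots below are hypothetical values, e.g. a rough
# fender cross-section.
import numpy as np
from scipy.interpolate import BSpline

degree = 3
ctrl_pts = np.array([[0.0, 0.0], [1.0, 2.0], [3.0, 2.5], [5.0, 1.0], [6.0, 0.0]])
weights = np.array([1.0, 0.8, 1.5, 0.8, 1.0])   # weights pull the curve toward or away from points
knots = np.array([0, 0, 0, 0, 0.5, 1, 1, 1, 1], dtype=float)  # clamped knot vector

# A NURBS curve is a B-spline evaluated in homogeneous coordinates:
# lift each control point to (w*x, w*y, w), evaluate, then divide by w.
homogeneous = np.column_stack([ctrl_pts * weights[:, None], weights])
spline = BSpline(knots, homogeneous, degree)

u = np.linspace(0, 1, 50)
pts = spline(u)
curve_xy = pts[:, :2] / pts[:, 2:3]   # the perspective divide makes the curve "rational"
print(curve_xy[:3])
```

The divide at the end is what makes the curve “rational”: adjusting a weight pulls the curve toward or away from its control point without moving the point itself, which is why NURBS can represent the precise freeform surfaces CAD tools require.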
The automotive industry now has an opportunity to use generative AI to instantly transform 2D sketches into NURBS models for leaps in productivity. These tools will not replace designers, but enable them to explore a wide range of options faster.
Generative AI Riding Shotgun on Concept and Styling
Design-oriented enterprises can use visual datasets and generative AI to assist their work across many fronts. Coding tools such as GitHub Copilot, trained on billions of lines of code, have already shown what this approach can do, and generative AI similarly promises to help compress lengthy design timelines.
In particular, when looking for “scrap” design elements, generative AI models can be trained on an automaker’s portfolio as well as vehicles industrywide, assisting this workflow. This can start with transfer learning, fine-tuning a model on a small dataset of images using the NVIDIA TAO Toolkit. Or it might require a more robust dataset of some 100 million images, depending on the requirements of the generative AI model.
In this bring-your-own-model setup, design teams and developers could harness NVIDIA Picasso — a cloud-based foundry for building generative AI models for visual design — with Stable Diffusion.
In this case, designers and artists prompt the generative AI for design elements, such as “rugged,” “sophisticated” or “sleek.” It then generates examples from the external world of automakers as well as from a company’s internal catalogs of images, vastly accelerating this initial phase.
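As a rough sketch of what that prompt-driven workflow could look like, the snippet below uses the open-source Hugging Face Diffusers library with a public Stable Diffusion checkpoint. The model ID, prompt and output paths are illustrative; a production setup would swap in a model fine-tuned on the automaker’s own design library, for example one built with NVIDIA Picasso or the TAO Toolkit.

```python
# Hypothetical sketch of prompt-driven concept ideation with an off-the-shelf
# Stable Diffusion checkpoint via Hugging Face Diffusers.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # public checkpoint; a studio would use a fine-tuned model
    torch_dtype=torch.float16,
).to("cuda")

# Designers prompt with styling adjectives, much as they would search for "scrap".
prompt = "sleek, rugged electric SUV concept, front three-quarter view, studio lighting"
images = pipe(prompt, num_images_per_prompt=4, guidance_scale=7.5).images

for i, img in enumerate(images):
    img.save(f"concept_{i}.png")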
For vehicle interiors, text-to-image generative AI can enable designers to type in a description of a texture, like a floral pattern, and the model will apply it to the surface of a seat, door panel or dashboard. If a designer wants to use a particular image to generate an interior design texture, generative AI can handle image-to-image texture creation.
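A comparable image-to-image step is sketched below, again with the open-source Diffusers library; the reference photo, prompt and strength value are placeholders rather than a recipe from any automaker’s pipeline.

```python
# Hypothetical image-to-image texture generation: start from a reference photo
# (e.g. a floral fabric swatch) and steer it with a text prompt.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

reference = Image.open("floral_swatch.jpg").convert("RGB").resize((512, 512))
texture = pipe(
    prompt="seamless floral upholstery texture for a car seat, soft fabric weave",
    image=reference,
    strength=0.6,        # lower values keep the output closer to the reference image
    guidance_scale=7.5,
).images[0]
texture.save("seat_texture.png")
```

The strength parameter is the key design lever here: it controls how far the generated texture may drift from the supplied reference.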
Smart Factories Getting Generative AI Edge
Manufacturers developing smart factories are adopting Omniverse and generative AI application programming interfaces to connect design and engineering tools to build digital twins of their facilities. BMW Group is starting the global rollout of NVIDIA Omniverse to support its vision for a factory of the future.
When building manufacturing facilities, planning in simulation before launching into production helps to reduce costly change orders that can shut down factory lines.
Generative AI Benefits Marketing and Retail Sales
Generative AI is also making inroads in marketing and retail sales departments across many industries worldwide. These teams are expected to see a productivity lift from generative AI of more than $950 billion, according to a McKinsey report.
For instance, many are adopting ChatGPT to investigate, brainstorm and get feedback on writing topics to get a jump on marketing copy and advertising campaigns. Text-to-image generative AI is helping to support visual efforts in marketing and sales.
NVIDIA NeMo is a framework to build, customize and deploy generative AI models. Optimized for inference in language and image applications, it’s used in automated speech recognition and helps improve customer support with large language models. Automakers can use it to develop next-generation customer service chatbots.
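As a small, hedged example of the speech-recognition piece, the snippet below loads a pretrained English model from the NeMo ASR collection and transcribes a recorded call. The model name and audio file are illustrative, and a production chatbot would pair this transcription step with a large language model to generate the response.

```python
# Minimal sketch: transcribe a customer-support call with a pretrained NeMo ASR
# model (model name and file path are illustrative, not a prescribed setup).
import nemo.collections.asr as nemo_asr

asr_model = nemo_asr.models.ASRModel.from_pretrained("stt_en_conformer_ctc_small")
transcripts = asr_model.transcribe(["customer_call.wav"])
print(transcripts[0])
```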
London advertising giant WPP and NVIDIA are working on a groundbreaking generative AI-enabled content engine to assist the $700 billion digital advertising industry.
“Today ads are retrieved, but in the future when you engage information, much of it will be generated — the computing model has changed,” said Huang.
This innovative system is built with NVIDIA AI and Omniverse Cloud — a software platform for developing unified 3D workflows and OpenUSD applications — and offers automotive OEMs capabilities to help create highly personalized visual content faster and more efficiently.
In Omniverse, creative teams take advantage of OpenUSD to unify their complex 3D pipelines, seamlessly connecting design tools such as Adobe Substance 3D, Alias and VRED to develop digital twins of client products. Generative AI tools built with NVIDIA Picasso will enable content creation from trained datasets, producing virtual sets. This will give WPP clients complete scenes for generating a variety of ads, videos and 3D experiences.
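At the pipeline level, OpenUSD is what lets exports from those different tools contribute to one shared scene. The sketch below uses the open-source USD Python API (pxr) to compose a hypothetical ad set; the prim names and file paths are made up, and real Omniverse pipelines involve far more structure.

```python
# Minimal OpenUSD sketch: compose assets exported from different design tools
# (file names are hypothetical) into a single shared stage.
from pxr import Usd, UsdGeom

stage = Usd.Stage.CreateNew("ad_set.usda")
UsdGeom.Xform.Define(stage, "/Set")

# Each department's export becomes a reference in the shared scene.
car = stage.DefinePrim("/Set/HeroCar", "Xform")
car.GetReferences().AddReference("./exports/hero_car_alias.usd")

studio = stage.DefinePrim("/Set/Studio", "Xform")
studio.GetReferences().AddReference("./exports/studio_substance.usd")

stage.GetRootLayer().Save()
```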
DENZA, BYD’s joint venture with Mercedes-Benz, is relying on WPP to build and deploy first-of-its-kind car configurators with Omniverse Cloud.
Running on Generative AI: Faster, Better, Cheaper Everywhere
Generative AI’s contextual understanding, creative output and adaptive learning capacities mark a new era.
What began with the invention of the transformer model has since unleashed incredible results, supported by massive models whose training has been made possible by leaps in performance from NVIDIA accelerated computing.
While it’s still early days, and therefore hard to quantify the full implications of this shift, automakers are embracing industry-specific “copilots” for design, engineering, manufacturing, marketing and sales to achieve better, more efficient and less expensive operations.
And they’re just getting started.
See how NVIDIA AI and Omniverse are revolutionizing the automotive industry from end to end.