Into the Omniverse: Reallusion Elevates Character Animation Workflows With Two-Way Live Sync and OpenUSD Support

Editor’s note: This post is part of Into the Omniverse, a series focused on how artists, developers and enterprises can transform their workflows using the latest advances in OpenUSD and NVIDIA Omniverse.

Whether animating a single 3D character or generating a group of them for industrial digitalization, creators and developers who use the popular Reallusion software can boost their workflows with the latest update to the iClone Omniverse Connector released this month.

The upgrade enables seamless collaboration and expands creative possibilities for creators using NVIDIA Omniverse, a development platform for connecting and building OpenUSD-based tools and applications.

New features include real-time synchronization of projects, as well as enhanced import functionality for the Universal Scene Description framework, known as OpenUSD, which makes work between iClone and Omniverse quicker, smoother and more efficient. The update also comes with bug fixes and improvements.

Animating 3D Characters Better, Together

Creators across the world are using Reallusion iClone, a real-time 3D animation software, to bring their characters to life.

Solomon Jagwe — a 3D artist, animator and award-winning film director — uses Reallusion iClone and Omniverse for his work, which often focuses on environmental themes.

Jagwe, who grew up in East Africa, recalls fond childhood memories drawing the creatures he’d see when he ventured into the countryside with his brother. Even now, much of his 3D work begins with a simple sketch using pen and paper.

The artist said he always strives to create art that makes a difference.

For example, Jagwe created Adventures of Nkoza and Nankya, a video series for educating people of all ages on Ugandan culture. He modeled the sets for the series in Autodesk 3ds Max and Autodesk Maya, animated in Reallusion iClone and composed in Omniverse.

“With the iClone Connector for Omniverse, I can easily render my iClone animations in Omniverse and take advantage of the iClone animation tools in combination with the Omniverse Audio2Face generative AI capabilities,” Jagwe said.

Jagwe’s entire creative pipeline is accelerated by USD, which acts as a common language between 3D applications and enables sharing full scenes across content-creation tools.

“OpenUSD makes it so much easier to transport all the textures and characters together in one place for animation,” Jagwe said. The artist added that he hopes his work inspires other indie filmmakers to bring their story ideas to life using iClone and Omniverse.

A scene from Jagwe’s educational series, “Adventures of Nkoza and Nankya.”

Another indie filmmaker, Benjamin Sokomba Dazhi, aka Benny Dee, has also mastered the art of animation. He’s landed roles as head animator for the film The Legend of Oronpoto as well as creator and director of the Cartoon Network Africa Dance Challenge.

Dazhi uses Omniverse with Reallusion’s iClone and Character Creator to supercharge his artistic workflow.

“The main challenges I faced when trying to meet deadlines were long render times and difficulties with software compatibility, but using an Omniverse Connector for Reallusion’s iClone app has been game-changing for my workflow,” he said.

A scene from one of Dazhi’s animated music videos.

Several other Omniverse community members recently joined a livestream to share their workflows using Reallusion and Omniverse. Watch the stream on demand:

Sync in Style With New Connector Updates

The updated Reallusion iClone Connector for Omniverse offers powerful integrations between the two platforms.

Users can now seamlessly synchronize their projects in real time thanks to new bidirectional live-sync capabilities. This means changes made in either iClone or Omniverse can be automatically reflected back to the other. Such bidirectional synchronization can be applied to animation-related changes for characters, such as skeletal and morph animation.

The iClone Connector also enables enhanced USD import capabilities. Users can now import static meshes, cameras and lights from Omniverse directly into iClone. This improved functionality includes a filter that allows importing assets with optimal efficiency based on their types.

See how designers can now preview Omniverse renders in real time while animating in iClone, as they enjoy seamless two-way USD data transfer:

Get Plugged Into the Omniverse 

Don’t miss our community livestream next week with Reallusion VP John Martin to learn all about the ways the iClone Omniverse Connector can advance your 3D character animation pipeline.

Watch NVIDIA founder and CEO Jensen Huang’s keynote address at SIGGRAPH on demand to learn about the latest breakthroughs in graphics, research, OpenUSD and AI.

Like Reallusion, learn how anyone can build their own Omniverse extension or Connector to enhance their 3D workflows and tools.

Share your Reallusion and Omniverse work as part of the latest community challenge, #StartToFinish. Use the hashtag to submit a screenshot of a project featuring both its beginning and ending stages for a chance to be featured on the @NVIDIAStudio and @NVIDIAOmniverse social channels.

Get started with NVIDIA Omniverse by downloading the standard license free, or learn how Omniverse Enterprise can connect your team.
Developers can check out these Omniverse resources to begin building on the platform.


Stay up to date on the platform by subscribing to the newsletter and following NVIDIA Omniverse on Instagram, LinkedIn, Medium, Threads and Twitter.

For more, check out our forums, Discord server, Twitch and YouTube channels.

Featured image courtesy of Reallusion.

Read More

FineRecon: Depth-aware Feed-forward Network for Detailed 3D Reconstruction

Recent works on 3D reconstruction from posed images have demonstrated that direct inference of scene-level 3D geometry without iterative optimization is feasible using a deep neural network, showing remarkable promise and high efficiency. However, the reconstructed geometries, typically represented as a 3D truncated signed distance function (TSDF), are often coarse without fine geometric details. To address this problem, we propose three effective solutions for improving the fidelity of inference-based 3D reconstructions. We first present a resolution-agnostic TSDF supervision strategy to…Apple Machine Learning Research

Consistent Collaborative Filtering via Tensor Decomposition

Collaborative filtering is the de facto standard for analyzing users’ activities and building recommendation systems for items. In this work we develop Sliced Anti-symmetric Decomposition (SAD), a new model for collaborative filtering based on implicit feedback. In contrast to traditional techniques where latent representations of users (user vectors) and items (item vectors) are estimated, SAD introduces one additional latent vector to each item, using a novel three-way tensor view of user-item interactions. This new vector extends user-item preferences calculated by standard dot products…Apple Machine Learning Research

Dataset and Network Introspection ToolKit (DNIKit)

We introduce the Data and Network Introspection toolkit DNIKit, an open source Python framework for analyzing machine learning models and datasets. DNIKit contains a collection of algorithms that all operate on intermediate network responses, providing a unique understanding of how the network perceives data throughout the different stages of computation.
With DNIKit, you can:

  • create a comprehensive dataset analysis report
  • find dataset samples that are near duplicates of each other
  • discover rare data samples, annotation errors, or model biases
  • compress networks by removing highly correlated…Apple Machine Learning Research

STUDY: Socially aware temporally causal decoder recommender systems

Reading has many benefits for young students, such as better linguistic and life skills, and reading for pleasure has been shown to correlate with academic success. Furthermore, students have reported improved emotional wellbeing from reading, as well as better general knowledge and a better understanding of other cultures. With the vast amount of reading material available both online and offline, finding age-appropriate, relevant and engaging content can be a challenging task, but helping students do so is a necessary step to engage them in reading. Effective recommendations that present students with relevant reading material help keep students reading, and this is where machine learning (ML) can help.

ML has been widely used in building recommender systems for various types of digital content, ranging from videos to books to e-commerce items. Recommender systems are used across a range of digital platforms to help surface relevant and engaging content to users. In these systems, ML models are trained to suggest items to each user individually based on user preferences, user engagement, and the items under recommendation. These data provide a strong learning signal for models to be able to recommend items that are likely to be of interest, thereby improving user experience.

In “STUDY: Socially Aware Temporally Causal Decoder Recommender Systems”, we present a content recommender system for audiobooks in an educational setting that takes into account the social nature of reading. We developed the STUDY algorithm in partnership with Learning Ally, an educational nonprofit aimed at promoting reading in dyslexic students that provides audiobooks through a school-wide subscription program. Leveraging the wide range of audiobooks in the Learning Ally library, our goal is to help students find the right content to boost their reading experience and engagement. Motivated by the fact that what a person’s peers are currently reading has significant effects on what they would find interesting to read, we jointly process the reading engagement history of students who are in the same classroom. This allows our model to benefit from live information about what is currently trending within the student’s localized social group, in this case, their classroom.

Data

Learning Ally has a large digital library of curated audiobooks targeted at students, making it well-suited for building a social recommendation model to help improve student learning outcomes. We received two years of anonymized audiobook consumption data. All students, schools and groupings in the data were anonymized, identified only by a randomly generated ID not traceable back to real entities by Google. Furthermore, all potentially identifiable metadata was shared only in an aggregated form, to protect students and institutions from being re-identified. The data consisted of time-stamped records of students’ interactions with audiobooks. For each interaction we have an anonymized student ID (which includes the student’s grade level and anonymized school ID), an audiobook identifier and a date. While many schools distribute students in a single grade across several classrooms, we leverage this metadata to make the simplifying assumption that all students in the same school and in the same grade level are in the same classroom. While this provides the foundation needed to build a better social recommender model, it’s important to note that this does not enable us to re-identify individuals, class groups or schools.

The STUDY algorithm

We framed the recommendation problem as a click-through rate prediction problem, where we model the conditional probability of a user interacting with each specific item conditioned on both 1) user and item characteristics and 2) the item interaction history sequence for the user at hand. Previous work suggests Transformer-based models, a widely used model class developed by Google Research, are well suited for modeling this problem. When each user is processed individually this becomes an autoregressive sequence modeling problem. We use this conceptual framework to model our data and then extend this framework to create the STUDY approach.
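To make the conditional-probability framing concrete, here is a deliberately tiny stand-in for the transformer: a count-based next-item model estimated from interaction sequences. The function and field names are illustrative only and are not part of the STUDY implementation.

```python
from collections import Counter, defaultdict

def fit_next_item_counts(sequences):
    """Toy autoregressive next-item model: estimate P(next item | current item)
    from observed interaction sequences. A count-based stand-in used only to
    illustrate the framing; the actual model is a transformer."""
    counts = defaultdict(Counter)
    for seq in sequences:
        for cur, nxt in zip(seq, seq[1:]):
            counts[cur][nxt] += 1
    return counts

def top_n(counts, current_item, n=5):
    """Rank candidate next items by estimated likelihood."""
    return [item for item, _ in counts[current_item].most_common(n)]
```

A real sequence model conditions on the full interaction history rather than just the most recent item, but the interface, history in, ranked candidates out, is the same.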

While this approach for click-through rate prediction can model dependencies between past and future item preferences for an individual user and can learn patterns of similarity across users at train time, it cannot model dependencies across different users at inference time. To recognize the social nature of reading and remediate this shortcoming, we developed the STUDY model, which concatenates multiple sequences of books read by each student into a single sequence that collects data from multiple students in a single classroom.
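The grouping-and-concatenation step can be sketched roughly as follows, assuming interactions arrive as simple records with school, grade, student and timestamp fields (all names here are illustrative):

```python
from collections import defaultdict

def build_classroom_sequences(interactions):
    """Group per-student interaction records into one sequence per classroom.

    Each record is a dict with (illustrative) keys: school, grade, student,
    book, timestamp. Students sharing a school and grade level are treated
    as one classroom, mirroring the simplifying assumption described above.
    """
    per_student = defaultdict(list)
    for rec in interactions:
        per_student[(rec["school"], rec["grade"], rec["student"])].append(rec)

    classrooms = defaultdict(list)
    for (school, grade, _student), recs in per_student.items():
        # Each student's subsequence stays temporally ordered, but the
        # concatenated classroom sequence as a whole is not.
        recs.sort(key=lambda r: r["timestamp"])
        classrooms[(school, grade)].extend(recs)
    return dict(classrooms)
```

Note that the concatenated sequence is ordered within each student but not across students, which is exactly why the standard causal attention mask no longer applies, as the next section explains.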

However, this data representation requires careful diligence if it is to be modeled by transformers. In transformers, the attention mask is the matrix that controls which inputs can be used to inform the predictions of which outputs. The pattern of using all prior tokens in a sequence to inform the prediction of an output leads to the upper triangular attention matrix traditionally found in causal decoders. However, since the sequence fed into the STUDY model is not temporally ordered, even though each of its constituent subsequences is, a standard causal decoder is no longer a good fit for this sequence. When trying to predict each token, the model is not allowed to attend to every token that precedes it in the sequence; some of these tokens might have timestamps that are later and contain information that would not be available at deployment time.

In this figure we show the attention mask typically used in causal decoders. Each row represents an input and each column represents an output. A value of 1 (shown as blue) for a matrix entry at a particular position denotes that the model can observe the input of that row when predicting the output of the corresponding column, whereas a value of 0 (shown as white) denotes the opposite.
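The triangular mask described in the caption can be constructed in a couple of lines, for example with NumPy:

```python
import numpy as np

# Causal-decoder attention mask for a sequence of length n.
# Entry (i, j) == 1 means the model may observe input i when predicting
# output j; keeping only i <= j gives the upper-triangular pattern.
n = 5
causal_mask = np.triu(np.ones((n, n), dtype=int))
```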

The STUDY model builds on causal transformers by replacing the triangular matrix attention mask with a flexible attention mask with values based on timestamps to allow attention across different subsequences. Compared to a regular transformer, which would not allow attention across different subsequences and would have a triangular matrix mask within sequence, STUDY maintains a causal triangular attention matrix within a sequence and has flexible values across sequences with values that depend on timestamps. Hence, predictions at any output point in the sequence are informed by all input points that occurred in the past relative to the current time point, regardless of whether they appear before or after the current input in the sequence. This causal constraint is important because if it is not enforced at train time, the model could potentially learn to make predictions using information from the future, which would not be available for a real world deployment.
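A minimal sketch of such a timestamp-aware mask, assuming each position carries a timestamp and a user ID. This is a simplified reconstruction of the idea, not the paper’s exact implementation, and details such as tie-breaking between equal timestamps may differ.

```python
import numpy as np

def study_attention_mask(timestamps, user_ids):
    """Sketch of a STUDY-style mask: output j may attend to input i when
    i carries a strictly earlier timestamp (cross-user flow), or when i and
    j belong to the same user's subsequence and i precedes j (the usual
    causal rule)."""
    t = np.asarray(timestamps)
    u = np.asarray(user_ids)
    idx = np.arange(len(t))
    # Within a user's subsequence: standard triangular (causal) attention.
    same_user_causal = (u[:, None] == u[None, :]) & (idx[:, None] <= idx[None, :])
    # Across users: attend only to interactions with earlier timestamps.
    earlier_time = t[:, None] < t[None, :]
    return (same_user_causal | earlier_time).astype(int)
```

The strict inequality on timestamps is what enforces the causal constraint across users: no prediction can condition on an interaction that would still be in the future at deployment time.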

In (a) we show a sequential autoregressive transformer with causal attention that processes each user individually; in (b) we show an equivalent joint forward pass that results in the same computation as (a); and finally, in (c) we show that by introducing new nonzero values (shown in purple) to the attention mask we allow information to flow across users. We do this by allowing a prediction to condition on all interactions with an earlier timestamp, irrespective of whether the interaction came from the same user or not.


Experiments

We used the Learning Ally dataset to train the STUDY model along with multiple baselines for comparison. We implemented an autoregressive click-through rate transformer decoder, which we refer to as “Individual”, a k-nearest neighbor baseline (KNN), and a comparable social baseline, social attention memory network (SAMN). We used the data from the first school year for training and we used the data from the second school year for validation and testing.

We evaluated these models by measuring the percentage of the time the next item the user actually interacted with was in the model’s top n recommendations, i.e., hits@n, for different values of n. In addition to evaluating the models on the entire test set we also report the models’ scores on two subsets of the test set that are more challenging than the whole data set. We observed that students will typically interact with an audiobook over multiple sessions, so simply recommending the last book read by the user would be a strong trivial recommendation. Hence, the first test subset, which we refer to as “non-continuation”, is where we only look at each model’s performance on recommendations when the students interact with books that are different from the previous interaction. We also observe that students revisit books they have read in the past, so strong performance on the test set can be achieved by restricting the recommendations made for each student to only the books they have read in the past. Although there might be value in recommending old favorites to students, much value from recommender systems comes from surfacing content that is new and unknown to the user. To measure this we evaluate the models on the subset of the test set where the students interact with a title for the first time. We name this evaluation subset “novel”.
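The hits@n metric described above can be computed in a few lines of Python (a straightforward sketch, not the authors’ evaluation code):

```python
def hits_at_n(ranked_recommendations, actual_next_items, n=5):
    """Fraction of test events where the item the user actually interacted
    with next appears in the model's top-n ranked list."""
    hits = sum(
        actual in recs[:n]
        for recs, actual in zip(ranked_recommendations, actual_next_items)
    )
    return hits / len(actual_next_items)
```

The "non-continuation" and "novel" variants simply apply the same computation to filtered subsets of the test events.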

We find that STUDY outperforms all other tested models across almost every single slice we evaluated against.

In this figure we compare the performance of four models: STUDY, Individual, KNN and SAMN. We measure the performance with hits@5, i.e., how likely the model is to suggest the next title the user read within its top 5 recommendations. We evaluate the models on the entire test set (all) as well as the novel and non-continuation splits. We see STUDY consistently outperforms the other three models across all splits.

Importance of appropriate grouping

At the heart of the STUDY algorithm is organizing users into groups and doing joint inference over multiple users who are in the same group in a single forward pass of the model. We conducted an ablation study where we looked at the importance of the actual groupings used on the performance of the model. In our presented model we group together all students who are in the same grade level and school. We then experiment with groups defined by all students in the same grade level and district and also place all students in a single group with a random subset used for each forward pass. We also compare these models against the Individual model for reference.

We found that using groups that were more localized was more effective, with the school and grade level grouping outperforming the district and grade level grouping. This supports the hypothesis that the STUDY model is successful because of the social nature of activities such as reading — people’s reading choices are likely to correlate with the reading choices of those around them. Both of these models outperformed the other two models (single group and Individual) where grade level is not used to group students. This suggests that data from users with similar reading levels and interests is beneficial for performance.

Future work

This work is limited to modeling recommendations for user populations where the social connections are assumed to be homogenous. In the future it would be beneficial to model a user population where relationships are not homogeneous, i.e., where categorically different types of relationships exist or where the relative strength or influence of different relationships is known.

Acknowledgements

This work involved collaborative efforts from a multidisciplinary team of researchers, software engineers and educational subject matter experts. We thank our co-authors: Diana Mincu, Lauren Harrell, and Katherine Heller from Google. We also thank our colleagues at Learning Ally, Jeff Ho, Akshat Shah, Erin Walker, and Tyler Bastian, and our collaborators at Google, Marc Repnyek, Aki Estrella, Fernando Diaz, Scott Sanner, Emily Salkey and Lev Proleev.

Read More

How Amazon Shopping uses Amazon Rekognition Content Moderation to review harmful images in product reviews

Customers are increasingly turning to product reviews to make informed decisions in their shopping journey, whether they’re purchasing everyday items like a kitchen towel or making major purchases like buying a car. These reviews have transformed into an essential source of information, enabling shoppers to access the opinions and experiences of other customers. As a result, product reviews have become a crucial aspect of any store, offering valuable feedback and insights to help inform purchase decisions.

Amazon has one of the largest stores with hundreds of millions of items available. In 2022, 125 million customers contributed nearly 1.5 billion reviews and ratings to Amazon stores, making online reviews at Amazon a solid source of feedback for customers. At the scale of product reviews submitted every month, it is essential to verify that these reviews align with Amazon Community Guidelines regarding acceptable language, words, videos, and images. This practice is in place to guarantee customers receive accurate information regarding the product, and to prevent reviews from including inappropriate language, offensive imagery, or any type of hate speech directed towards individuals or communities. By enforcing these guidelines, Amazon can maintain a safe and inclusive environment for all customers.

Content moderation automation allows Amazon to scale the process while keeping high accuracy. It’s a complex problem space with unique challenges, requiring different techniques for text, images, and videos. Images are a relevant component of product reviews, often providing a more immediate impact on customers than text. With Amazon Rekognition Content Moderation, Amazon is able to automatically detect harmful images in product reviews with higher accuracy, reducing reliance on human reviewers to moderate such content. Rekognition Content Moderation has helped improve the well-being of human moderators and achieve significant cost savings.

Amazon Shopping with Rekognition

Moderation with self-hosted ML models

The Amazon Shopping team designed and implemented a moderation system that uses machine learning (ML) in conjunction with human-in-the-loop (HITL) review to ensure product reviews are about the customer experience with the product and don’t contain inappropriate or harmful content as per the community guidelines. The image moderation subsystem, as illustrated in the following diagram, utilized multiple self-hosted and self-trained computer vision models to detect images that violate Amazon guidelines. The decision handler determines the moderation action and provides reasons for its decision based on the ML models’ output, thereby deciding whether the image required a further review by a human moderator or could be automatically approved or rejected.

Overall architecture

With these self-hosted ML models, the team started by automating decisions on 40% of the images received as part of the reviews and continuously worked on improving the solution through the years while facing several challenges:

  • Ongoing efforts to improve automation rate – The team desired to improve the accuracy of ML algorithms, aiming to increase the automation rate. This requires continuous investments in data labeling, data science, and MLOps for model training and deployment.
  • System complexity – The architecture complexity requires investments in MLOps to ensure the ML inference process scales efficiently to meet the growing content submission traffic.

Replace self-hosted ML models with the Rekognition Content Moderation API

Amazon Rekognition is a managed artificial intelligence (AI) service that offers pre-trained models through an API interface for image and video moderation. It has been widely adopted by industries such as ecommerce, social media, gaming, online dating apps, and others to moderate user-generated content (UGC). This includes a range of content types, such as product reviews, user profiles, and social media post moderation.

Rekognition Content Moderation automates and streamlines image and video moderation workflows without requiring ML experience. Amazon Rekognition customers can process millions of images and videos, efficiently detecting inappropriate or unwanted content, with fully managed APIs and customizable moderation rules to keep users safe and the business compliant.

The team successfully migrated a subset of self-managed ML models in the image moderation system for nudity and not safe for work (NSFW) content detection to the Amazon Rekognition Detect Moderation API, taking advantage of the highly accurate and comprehensive pre-trained moderation models. With the high accuracy of Amazon Rekognition, the team has been able to automate more decisions, save costs, and simplify their system architecture.
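A minimal sketch of what such a call might look like with the boto3 DetectModerationLabels API, with a simple decision handler layered on top. The thresholds and three-way routing here are illustrative assumptions, not Amazon’s production values.

```python
def moderation_decision(labels, reject_at=90.0, review_at=50.0):
    """Map the 'ModerationLabels' list returned by Amazon Rekognition to an
    action. The thresholds and routing are illustrative only."""
    top = max((label["Confidence"] for label in labels), default=0.0)
    if top >= reject_at:
        return "reject"
    if top >= review_at:
        return "human_review"
    return "approve"

def moderate_image(image_bytes):
    """Call the DetectModerationLabels API and route the result (sketch)."""
    import boto3  # imported here so the pure logic above runs without AWS
    client = boto3.client("rekognition")
    response = client.detect_moderation_labels(
        Image={"Bytes": image_bytes}, MinConfidence=50
    )
    return moderation_decision(response["ModerationLabels"])
```

Keeping the decision logic separate from the API call mirrors the decision-handler design described earlier: the handler can route borderline images to human review while confidently approving or rejecting the rest.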

Deployment diagram

Improved accuracy and expanded moderation categories

The implementation of the Amazon Rekognition image moderation API has resulted in higher accuracy for detecting inappropriate content. This means approximately 1 million additional images per year will be automatically moderated without the need for any human review.

Operational excellence

The Amazon Shopping team was able to simplify the system architecture, reducing the operational effort required to manage and maintain the system. This approach has saved them months of DevOps effort per year, which means they can now allocate their time to developing innovative features instead of spending it on operational tasks.

Cost reduction

The high accuracy from Rekognition Content Moderation has enabled the team to send fewer images for human review, including potentially inappropriate content. This has reduced the cost associated with human moderation and allowed moderators to focus their efforts on more high-value business tasks. Combined with the DevOps efficiency gains, the Amazon Shopping team achieved significant cost savings.

Conclusion

Migrating from self-hosted ML models to the Amazon Rekognition Moderation API for product review moderation can provide many benefits for businesses, including significant cost savings. By automating the moderation process, online stores can quickly and accurately moderate large volumes of product reviews, improving the customer experience by ensuring that inappropriate or spam content is quickly removed. Additionally, by using a managed service like the Amazon Rekognition Moderation API, companies can reduce the time and resources needed to develop and maintain their own models, which can be especially useful for businesses with limited technical resources. The API’s flexibility also allows online stores to customize their moderation rules and thresholds to fit their specific needs.

Learn more about content moderation on AWS and our content moderation ML use cases. Take the first step towards streamlining your content moderation operations with AWS.


About the Authors

Shipra Kanoria is a Principal Product Manager at AWS. She is passionate about helping customers solve their most complex problems with the power of machine learning and artificial intelligence. Before joining AWS, Shipra spent over 4 years at Amazon Alexa, where she launched many productivity-related features on the Alexa voice assistant.

Luca Agostino Rubino is a Principal Software Engineer in the Amazon Shopping team. He works on Community features like Customer Reviews and Q&As, focusing through the years on Content Moderation and on scaling and automation of Machine Learning solutions.

Lana Zhang is a Senior Solutions Architect at AWS WWSO AI Services team, specializing in AI and ML for Content Moderation, Computer Vision, Natural Language Processing and Generative AI. With her expertise, she is dedicated to promoting AWS AI/ML solutions and assisting customers in transforming their business solutions across diverse industries, including social media, gaming, e-commerce, media, advertising & marketing.

Read More

Quality Control Patrol: Startup Builds Models for Detecting Vehicle Failure Patterns

When it comes to preserving profit margins, data scientists for vehicle and parts manufacturers are sitting in the driver’s seat.

Viaduct, which develops models for time-series inference, is helping enterprises harvest failure insights from the data captured on today’s connected cars. It does so by tapping into sensor data and making correlations.

The four-year-old startup, based in Menlo Park, Calif., offers a platform to detect anomalous patterns, track issues, and deploy failure predictions. This enables automakers and parts suppliers to get in front of problems with real-time data to reduce warranty claims, recalls and defects, said David Hallac, the founder and CEO of Viaduct.

“Viaduct has deployed on more than 2 million vehicles, helped avoid 500,000 hours of downtime and saved hundreds of millions of dollars in warranty costs across the industry,” he said.

The company relies on NVIDIA A100 Tensor Core GPUs and the NVIDIA Time Series Prediction Platform (TSPP) framework for training, tuning and deploying time-series models, which are used to forecast data.

Viaduct has deployed with more than five major manufacturers of passenger cars and commercial trucks, according to the company.

“Customers see it as a huge savings — the things that we are affecting are big in terms of profitability,” said Hallac. “It’s downtime impact, it’s warranty impact and it’s product development inefficiency.”

Viaduct is a member of NVIDIA Inception, a program that provides companies with technology support and AI platform guidance.

How It Started: Research Hits the Road

Hallac’s path to Viaduct began at Stanford University. While he was a Ph.D. student there, Volkswagen came to his lab with sensor data collected from more than 60 drivers over the course of several months, along with a research grant to explore its uses.

The question the researchers delved into was how to understand the patterns and trends in the sizable body of vehicle data collected over months.

The Stanford researchers, in coordination with the Volkswagen Electronics Research Laboratory, released a paper on the work, which highlighted Drive2Vec, a deep learning method for embedding sensor data.

“We developed a bunch of algorithms focused on structural inference from high-dimensional time-series data. We were discovering useful insights, and we were able to help companies train and deploy predictive algorithms at scale,” he said.

Developing a Knowledge Graph for Insights With up to 10x Faster Inference

Viaduct handles time-series analytics with its TSI engine, which aggregates manufacturing, telematics and service data. Its model was trained with A100 GPUs tapping into NVIDIA TSPP.

“We describe it as a knowledge graph — we’re building this knowledge graph of all the different sensors and signals and how they correlate with each other,” Hallac said.

Several key features are generated using the Drive2Vec autoencoder for embedding sensor data. Correlations are learned via a Markov random field inference process, and the time series predictions tap into the NVIDIA TSPP framework.

NVIDIA GPUs on this platform enable Viaduct to achieve as much as a 30x improvement in inference performance compared with CPU systems running logistic regression and gradient boosting algorithms, Hallac said.

Protecting Profits With Proactive AI

One vehicle maker using Viaduct’s platform was able to handle some of its issues proactively, fix them and then identify which vehicles were at risk of those issues, asking only those owners to bring their vehicles in for service. This not only affects warranty claims but also the service desks, which get more visibility into the types of vehicle repairs coming in.

Also, as vehicle and parts manufacturers are partnered on warranties, the results matter for both.

Viaduct reduced warranty costs for one customer by more than $50 million on five issues, according to the startup.

“Everyone wants the information, everyone feels the pain and everyone benefits when the system is optimized,” Hallac said of the potential for cost-savings.

Maintaining Vehicle Review Ratings

Viaduct began working with a major automaker last year to help with quality-control issues. The partnership aimed to improve the automaker’s time-to-identify and time-to-fix for post-production quality issues.

The automaker’s JD Power IQS (Initial Quality Study) score had been falling while its warranty costs were climbing, and the company sought to reverse the situation. So, the automaker began using Viaduct’s platform and its TSI engine.

In A/B testing Viaduct’s platform against traditional reactive approaches to quality control, the automaker was able to identify issues on average 53 days earlier during the first year of a vehicle launch. The results saved “tens of millions” in warranty costs and the vehicle’s JD Power quality and reliability score increased “multiple points” compared with the previous model year, according to Hallac.

And Viaduct is getting customer traction that reflects the value of its AI to businesses, he said.

Learn more about NVIDIA A100 and NVIDIA TSPP.

