Ready to Roll: Nuro to License Its Autonomous Driving System

Ready to Roll: Nuro to License Its Autonomous Driving System

To accelerate autonomous vehicle development and deployment timelines, Nuro announced today it will license its Nuro Driver autonomous driving system directly to automakers and mobility providers.

The Nuro Driver is built on NVIDIA’s end-to-end safety architecture, which includes NVIDIA GPUs for AI training in the cloud and an automotive-grade NVIDIA DRIVE Thor computer running the NVIDIA DriveOS operating system inside the vehicle.

The Nuro Driver has demonstrated its reliability and safety in real-world conditions with more than 1 million autonomous miles completed across its fleet of R&D vehicles and zero at-fault incidents.

“It’s not a question of if, but when L4 autonomy will become widespread,” said Jiajun Zhu, cofounder and CEO at Nuro. “We believe Nuro is positioned to be a major contributor to this autonomous future where people and goods mobility are free-flowing, representing a significant increase in the quality of life for everyone.”

The licensing of the Nuro Driver marks a significant step forward in bringing level 4 vehicles to market, accelerating the adoption of autonomous technology across the transportation industry.

An End-to-End Approach With NVIDIA DRIVE

Nuro announced at GTC in March that the Nuro Driver, which enables level 4 autonomous driving for multiple vehicle types, is being built on NVIDIA DRIVE Thor, running on the NVIDIA DriveOS operating system for safe, AI-defined autonomous vehicles.

DRIVE Thor integrates the NVIDIA Blackwell architecture, designed for transformer, large language models and generative AI workloads. Nuro also uses NVIDIA GPUs for AI training.

“Built with NVIDIA’s end-to-end safety AV architecture, the Nuro Driver can integrate sensor processing and other safety-critical capabilities, along with AI-driven autonomy, into a single, centralized computing system,” said Rishi Dhall, vice president of automotive at NVIDIA. “This enables the reliability and performance needed for safe deployment of autonomous vehicles at scale.”

The next-gen Nuro Driver will include safety features such as microphones for siren detection and systems for removing dirt from sensors as well as redundancy in safety-critical systems.

Advantages of Licensing

Nuro’s licensing model will offer automotive manufacturers and mobility companies access to a commercially independent, road-proven platform that can accelerate their autonomous vehicle development and deployment timelines.

With a focus on advancing autonomy, Nuro is poised to help shape the future of transportation by driving industry-wide adoption and commercialization of autonomous technology across a broad range of vehicles and mobility applications.

Test Area Expansion

Nuro this summer received approval from the California Department of Motor Vehicles to test its driverless vehicles based on the Nuro Driver in four San Francisco Bay Area cities: Los Altos, Menlo Park, Mountain View and Palo Alto.

The DMV permit allows Nuro vehicles to travel at any time of the day, as well as in light rain and light to moderate fog conditions.

Nuro is also conducting commercial testing and delivery services in Houston.

Read More

NVIDIA and Oracle to Accelerate AI and Data Processing for Enterprises

NVIDIA and Oracle to Accelerate AI and Data Processing for Enterprises

Enterprises are looking for increasingly powerful compute to support their AI workloads and accelerate data processing. The efficiency gained can translate to better returns for their investments in AI training and fine-tuning, and improved user experiences for AI inference.

At the Oracle CloudWorld conference today, Oracle Cloud Infrastructure (OCI) announced the first zettascale OCI Supercluster, accelerated by the NVIDIA Blackwell platform, to help enterprises train and deploy next-generation AI models using more than 100,000 of NVIDIA’s latest-generation GPUs.

OCI Superclusters allow customers to choose from a wide range of NVIDIA GPUs and deploy them anywhere: on premises, public cloud and sovereign cloud. Set for availability in the first half of next year, the Blackwell-based systems can scale up to 131,072 Blackwell GPUs with NVIDIA ConnectX-7 NICs for RoCEv2 or NVIDIA Quantum-2 InfiniBand networking to deliver an astounding 2.4 zettaflops of peak AI compute to the cloud. (Read the press release to learn more about OCI Superclusters.)

At the show, Oracle also previewed NVIDIA GB200 NVL72 liquid-cooled bare-metal instances to help power generative AI applications. The instances are capable of large-scale training with Quantum-2 InfiniBand and real-time inference of trillion-parameter models within the expanded 72-GPU NVIDIA NVLink domain, which can act as a single, massive GPU.

This year, OCI will offer NVIDIA HGX H200 — connecting eight NVIDIA H200 Tensor Core GPUs in a single bare-metal instance via NVLink and NVLink Switch, and scaling to 65,536 H200 GPUs with NVIDIA ConnectX-7 NICs over RoCEv2 cluster networking. The instance is available to order for customers looking to deliver real-time inference at scale and accelerate their training workloads. (Read a blog on OCI Superclusters with NVIDIA B200, GB200 and H200 GPUs.)

OCI also announced general availability of NVIDIA L40S GPU-accelerated instances for midrange AI workloads, NVIDIA Omniverse and visualization. (Read a blog on OCI Superclusters with NVIDIA L40S GPUs.)

For single-node to multi-rack solutions, Oracle’s edge offerings provide scalable AI at the edge accelerated by NVIDIA GPUs, even in disconnected and remote locations. For example, smaller-scale deployments with Oracle’s Roving Edge Device v2 will now support up to three NVIDIA L4 Tensor Core GPUs.

Companies are using NVIDIA-powered OCI Superclusters to drive AI innovation. Foundation model startup Reka, for example, is using the clusters to develop advanced multimodal AI models to develop enterprise agents.

“Reka’s multimodal AI models, built with OCI and NVIDIA technology, empower next-generation enterprise agents that can read, see, hear and speak to make sense of our complex world,” said Dani Yogatama, cofounder and CEO of Reka. “With NVIDIA GPU-accelerated infrastructure, we can handle very large models and extensive contexts with ease, all while enabling dense and sparse training to scale efficiently at cluster levels.”

Accelerating Generative AI Oracle Database Workloads

Oracle Autonomous Database is gaining NVIDIA GPU support for Oracle Machine Learning notebooks to allow customers to accelerate their data processing workloads on Oracle Autonomous Database.

At Oracle CloudWorld, NVIDIA and Oracle are partnering to demonstrate three capabilities that show how the NVIDIA accelerated computing platform could be used today or in the future to accelerate key components of generative AI retrieval-augmented generation pipelines.

The first will showcase how NVIDIA GPUs can be used to accelerate bulk vector embeddings directly from within Oracle Autonomous Database Serverless to efficiently bring enterprise data closer to AI. These vectors can be searched using Oracle Database 23ai’s AI Vector Search.

The second demonstration will showcase a proof-of-concept prototype that uses NVIDIA GPUs, NVIDIA RAPIDS cuVS and an Oracle-developed offload framework to accelerate vector graph index generation, which significantly reduces the time needed to build indexes for efficient vector searches.

The third demonstration illustrates how NVIDIA NIM, a set of easy-to-use inference microservices, can boost generative AI performance for text generation and translation use cases across a range of model sizes and concurrency levels.

Together, these new Oracle Database capabilities and demonstrations highlight how NVIDIA GPUs can be used to help enterprises bring generative AI to their structured and unstructured data housed in or managed by an Oracle Database.

Sovereign AI Worldwide

NVIDIA and Oracle are collaborating to deliver sovereign AI infrastructure worldwide, helping address the data residency needs of governments and enterprises.

Brazil-based startup Wide Labs trained and deployed Amazonia IA, one of the first large language models for Brazilian Portuguese, using NVIDIA H100 Tensor Core GPUs and the NVIDIA NeMo framework in OCI’s Brazilian data centers to help ensure data sovereignty.

“Developing a sovereign LLM allows us to offer clients a service that processes their data within Brazilian borders, giving Amazônia a unique market position,” said Nelson Leoni, CEO of Wide Labs. “Using the NVIDIA NeMo framework, we successfully trained Amazônia IA.”

In Japan, Nomura Research Institute, a leading global provider of consulting services and system solutions, is using OCI’s Alloy infrastructure with NVIDIA GPUs to enhance its financial AI platform with LLMs operating in accordance with financial regulations and data sovereignty requirements.

Communication and collaboration company Zoom will be using NVIDIA GPUs in OCI’s Saudi Arabian data centers to help support compliance with local data requirements.

And geospatial modeling company RSS-Hydro is demonstrating how its flood mapping platform — built on the NVIDIA Omniverse platform and powered by L40S GPUs on OCI — can use digital twins to simulate flood impacts in Japan’s Kumamoto region, helping mitigate the impact of climate change.

These customers are among numerous nations and organizations building and deploying domestic AI applications powered by NVIDIA and OCI, driving economic resilience through sovereign AI infrastructure.

Enterprise-Ready AI With NVIDIA and Oracle

Enterprises can accelerate task automation on OCI by deploying NVIDIA software such as NIM microservices and NVIDIA cuOpt with OCI’s scalable cloud solutions. These solutions enable enterprises to quickly adopt generative AI and build agentic workflows for complex tasks like code generation and route optimization.

NVIDIA cuOpt, NIM, RAPIDS and more are included in the NVIDIA AI Enterprise software platform, available on the Oracle Cloud Marketplace.

Learn More at Oracle CloudWorld 

Join NVIDIA at Oracle CloudWorld 2024 to learn how the companies’ collaboration is bringing AI and accelerated data processing to the world’s organizations.

Register to the event to watch sessions, see demos and join Oracle and NVIDIA for the solution keynote, “Unlock AI Performance with NVIDIA’s Accelerated Computing Platform” (SOL3866), on Wednesday, Sept. 11, in Las Vegas.

Read More

AI on the Air: Behind the Scenes at IBC With Holoscan for Media

AI on the Air: Behind the Scenes at IBC With Holoscan for Media

AI is transforming the broadcast industry by enhancing the way content is created, distributed and consumed — but integrating the technology can be challenging.

Launched this week in limited availability, NVIDIA Holoscan for Media is a software-defined, AI-enabled platform that helps developers easily integrate AI into their live media applications and allows media companies to run live media pipelines on the same infrastructure as AI.

NVIDIA RTX AI workstations and PCs, powered by NVIDIA GPUs for real-time graphics processing and AI computing, provide an ideal foundation for developing these applications.

At the IBC broadcast and media tech show in Amsterdam, NVIDIA partners including Adobe, Blackmagic Design and Topaz Labs will showcase the latest RTX AI-powered video editing tools and technologies powering live media advancements.

NVIDIA Holoscan for Media: Building the Future of Live Production

NVIDIA Holoscan for Media is an AI-enabled, software-defined platform for live media.

Building a robust AI software stack for application development in live media is an intricate process that requires substantial expertise and resources.

This technical complexity, coupled with the need for large amounts of high-quality data and the difficulty of scaling pilot programs to production-level performance, often prevents these initiatives from reaching full deployment. Additionally, traditional development of software is tied to dedicated hardware, further limiting innovation and making upgrades cumbersome.

Addressing these challenges, NVIDIA Holoscan for Media empowers developers to create cutting-edge AI  applications for live media with ease through its seamless integration with NVIDIA’s extensive suite of AI software development kits (SDKs). This allows developers to easily incorporate advanced AI capabilities into their applications so they can focus on creating more sophisticated and intelligent media applications. Media companies can then seamlessly connect those applications to live video pipelines running on top of the platform.

Another typical challenge in live media application development is inefficiency in deployment. Developers often find themselves needing to create separate builds for different deployment types, whether on premises, in the cloud or at the edge. This increases costs and can extend development timelines. Developers must also allocate resources to build additional infrastructure services, such as authentication and timing protocols, further straining budgets.

Holoscan for Media’s cloud-native architecture enables applications to run from anywhere. Applications developed for the cloud, edge or on-premises deployments can run across environments, eliminating the need for separate builds.

Holoscan for Media is available on premises today, with cloud and edge deployments coming soon. The platform also includes Precision Time Protocol for audio-video synchronization in live broadcasts and Networked Media Open Specifications for seamless communication between applications — simplifying the management of complex systems.

Enhancing Development With RTX AI PCs and Workstations

NVIDIA RTX AI PCs and workstations complement the potential of Holoscan for Media by offering a robust foundation for developing immersive media experiences.

The CUDA ecosystem available on RTX AI PCs and workstations offers access to a vast array of NVIDIA SDKs and tools optimized for media and AI workloads. This allows developers to build applications that can seamlessly transition from workstation to deployment environments, ensuring that their creations are both robust and scalable.

NVIDIA AI Enterprise offers further enhancements by putting a comprehensive suite of AI software, tools and frameworks optimized for NVIDIA GPUs into the hands of enterprise developers who require secure, stable and scalable production environments for AI applications. This enterprise-grade AI platform includes popular frameworks like TensorFlow, PyTorch and RAPIDS for streamlined deployment.

Using NVIDIA AI Enterprise, developers can build advanced AI capabilities such as computer vision, natural language processing and recommendation systems directly in their media applications. And they can prototype, test and deploy sophisticated AI models within their media workflows.

Video Editors and Enthusiasts — Rejoice! 

Holoscan for Media will be on display at IBC, running Sept. 13-16. At the Dell Technologies booth 7.A45, attendees can witness live demonstrations that showcase how to seamlessly transition from application development to live deployment.

A number of NVIDIA partners will spotlight their latest RTX AI-powered video editing tools and technologies at the show.

Blackmagic Design’s DaVinci Resolve 19 Studio is now available, introducing AI features that streamline editing workflows:

  • IntelliTrack AI makes it fast and easy to stabilize footage during the editing process. It can be used in DaVinci Resolve’s Fairlight tool to track on-screen subjects and automatically generate audio panning as they move across 2D and 3D spaces. With the AI-powered feature, editors can quickly pan or move audio across the stereo field, controlling the voice positions of multiple actors in the mix environment.
  • UltraNR is an AI-accelerated denoise mode in DaVinci Resolve’s spatial noise reduction palette. Editors can use it to dramatically reduce digital noise — undesired color or luminance fluctuations that obscure detail — from a frame while maintaining image clarity. Editors can also combine the tool with temporal noise reduction for even more effective denoising in images with motion, where fluctuations can be more noticeable.
  • RTX Video Super Resolution uses AI to sharpen low-resolution video. It can detect and remove compression artifacts, greatly enhancing lower-quality video.
  • RTX Video HDR uses an AI-enhanced algorithm to remap standard dynamic range video into vibrant HDR10 color spaces. This lets video editors create high dynamic range content even if they don’t have cameras capable of recording in HDR.

IntelliTrack and UltraNR get a performance boost when running on NVIDIA RTX PCs and workstations. NVIDIA TensorRT lets them run up to 3x faster on a GeForce GTX 4090 laptop than a Macbook Pro M3 Max.

All DaVinci Resolve AI effects are accelerated on RTX GPUs by TensorRT. The Resolve update includes GPU acceleration for its Beauty, Edge Detect and Watercolor effects, doubling their performance on NVIDIA GPUs.

The update also introduces NVIDIA’s H.265 Ultra-High-Quality (UHQ) mode, which utilizes NVENC to boost HEVC encoding efficiency by 10%.

Pixel-Perfect Partners: Topaz Video AI and Adobe After Effects

This year, Topaz Labs introduced an Adobe After Effects plug-in for Video AI, a leading solution for video upscaling and frame interpolation. The plug-in integrates the full range of enhancement and frame interpolation models directly into the industry-standard motion graphics software.

It also allows users to access AI upscaling tools in their After Effects compositions, providing greater flexibility and faster compositing without the need to transfer large files between different tools.

A standout feature of Topaz Video AI is its ability to create dramatic slow-motion videos with Topaz’s Apollo AI model, which can convert footage to up to 16x slow motion.

Topaz Video AI’s Apollo model in action — slowing footage down by up to 16x using frame interpolation for breathtaking detail.

The plugin also excels at upscaling, ideal for integrating low-resolution assets into larger projects without compromising quality. It includes all of Topaz’s enhancement models, like the Rhea model for 4x upscaling. Check out Adobe’s blog to learn more about After Effects plug-ins and how to use them.

Built for speed, the plug-in is accelerated on RTX GPUs by NVIDIA TensorRT, boosting AI performance by up to 70%. A future update to Video AI will introduce further TensorRT performance improvements and efficiency optimizations, including a significant reduction in the number of AI model files required as part of the app installation.

With the rapid integration of AI, the future of broadcasting is brighter and more innovative than ever.

Read More

Live Media Reimagined: NVIDIA Holoscan for Media Now Available for Production

Live Media Reimagined: NVIDIA Holoscan for Media Now Available for Production

Companies in broadcast, sports and streaming are transitioning to software-defined infrastructure to benefit from flexible deployment and to more easily adopt the latest AI technologies.

NVIDIA Holoscan for Media, now in limited availability, is an AI-enabled, software-defined platform that allows live media and video pipelines to run on the same infrastructure as AI. This enables companies with live media pipelines to use applications from an ecosystem of developers on repurposable, NVIDIA-accelerated, commercial off-the-shelf hardware to enhance production and delivery.

Holoscan for Media offers a unified platform for live media applications from established and emerging vendors, covering AI captioning, live graphics, vision mixing, playout server, encode, decode, transcode, multiviewer and Networked Media Open Specifications (NMOS) controller, with more being made available in coming months.

Developers can use Holoscan for Media to simplify the development process, streamline delivery to customers and integrate emerging technologies — all while optimizing R&D spend.

Holoscan for Media is an internet protocol-based platform built on industry standards, such as ST 2110, and common application programming interfaces, meeting the strictest density and compliance requirements. It’s inclusive of essential services like Precision Time Protocol, aka PTP, and NMOS for interoperability and manageability, and equipped to perform in the high-pressure production environments that comprise live broadcast.

Industry Adoption of NVIDIA Holoscan for Media

Companies with live media pipelines are embracing software-defined infrastructure as they transition to the next phase of live media production and delivery. And the ecosystem of partners who share this vision for the future of the industry, including Beamr, Harmonic, Pebble, Qvest, RAVEL, RT Software, Speechmatics and Spicy Mango continues to grow.

“The Holoscan for Media platform leverages the powerful integration of live video and AI. This integration, accelerated by NVIDIA computing, aligns naturally with Beamr’s advanced video technology and products,” said Sharon Carmel, CEO of Beamr. “We are confident that our Holoscan for Media application will significantly enhance media pipelines performance by optimizing 4K p60 Live video streams with high efficiency.”

“NVIDIA is laying the foundation for software-defined broadcast, enhancing live media with expansive compute capabilities and a developer-friendly ecosystem,” said Christophe Ponsart, executive vice president and generative AI practice co-lead of Qvest, a global leader in technology and business consulting. “This level of local compute, alongside NVIDIA’s powerful developer tools, empowers Qvest as a technology partner and integrator to rapidly innovate, using our deep industry expertise and customer relationships to make a meaningful impact.”

“NVIDIA Holoscan for Media, using the power of Red Hat OpenShift, delivers a scalable, cloud-native platform for next-generation live media applications,” said Gino Grano, global vice president of Americas, telco, media and entertainment at Red Hat, the industry-leading Kubernetes-powered hybrid cloud platform. “With this enterprise-grade open-source solution, cable and broadcast companies can benefit from more seamless deployments and management of media applications, delivering enhanced flexibility and performance across environments.”

“Speechmatics is delighted to extend our collaboration with NVIDIA and become the first speech-to-text provider on Holoscan for Media,” said David Agmen-Smith, director of product at Speechmatics, a leading provider of speech AI technology. “The combination allows lightning-quick and highly accurate captions to be broadcast with incredible ease.”

Get Started

Take advantage of flexible deployment, resource scalability and the latest video, predictive and generative AI technologies by transitioning to true software-defined infrastructure with Holoscan for Media.

Attendees of the IBC 2024 content and technology event, running Sept. 13-16 in Amsterdam, can see Holoscan for Media in action across the show floor.

See notice regarding software product information.

Read More

How AI Is Personalizing Customer Service Experiences Across Industries

How AI Is Personalizing Customer Service Experiences Across Industries

Customer service departments across industries are facing increased call volumes, high customer service agent turnover, talent shortages and shifting customer expectations.

Customers expect both self-help options and real-time, person-to-person support. These expectations for seamless, personalized experiences extend across digital communication channels, including live chat, text and social media.

Despite the rise of digital channels, many consumers still prefer picking up the phone for support, placing strain on call centers. As companies strive to enhance the quality of customer interactions, operational efficiency and costs remain a significant concern.

To address these challenges, businesses are deploying AI-powered customer service software to boost agent productivity, automate customer interactions and harvest insights to optimize operations.

In nearly every industry, AI systems can help improve service delivery and customer satisfaction. Retailers are using conversational AI to help manage omnichannel customer requests, telecommunications providers are enhancing network troubleshooting, financial institutions are automating routine banking tasks, and healthcare facilities are expanding their capacity for patient care.

What Are the Benefits of AI for Customer Service?

With strategic deployment of AI, enterprises can transform customer interactions through intuitive problem-solving to build greater operational efficiencies and elevate customer satisfaction.

By harnessing customer data from support interactions, documented FAQs and other enterprise resources, businesses can develop AI tools that tap into their organization’s unique collective knowledge and experiences to deliver personalized service, product recommendations and proactive support.

Customizable, open-source generative AI technologies such as large language models (LLMs), combined with natural language processing (NLP) and retrieval-augmented generation (RAG), are helping industries accelerate the rollout of use-case-specific customer service AI. According to McKinsey, over 80% of customer care executives are already investing in AI or planning to do so soon.

With cost-efficient, customized AI solutions, businesses are automating management of help-desk support tickets, creating more effective self-service tools and supporting their customer service agents with AI assistants. This can significantly reduce operational costs and improve the customer experience.

Developing Effective Customer Service AI

For satisfactory, real-time interactions, AI-powered customer service software must return accurate, fast and relevant responses. Some  tricks of the trade include:

Open-source foundation models can fast-track AI development. Developers can flexibly adapt and enhance these pretrained machine learning models, and enterprises can use them to launch AI projects without the high costs of building models from scratch.

RAG frameworks connect foundation or general-purpose LLMs to proprietary knowledge bases and data sources, including inventory management and customer relationship management systems and customer service protocols. Integrating RAG into conversational chatbots, AI assistants and copilots tailors responses to the context of customer queries.

Human-in-the-loop processes remain crucial to both AI training and live deployments. After initial training of foundation models or LLMs, human reviewers should judge the AI’s responses and provide corrective feedback. This helps to guard against issues such as hallucination —  where the model generates false or misleading information, and other errors including toxicity or off-topic responses. This type of human involvement ensures fairness, accuracy and security is fully considered during AI development.

Human participation is even more important for AI in production. When an AI is unable to adequately resolve a customer question, the program must be able to route the call to customer support teams. This collaborative approach between AI and human agents ensures that customer engagement is efficient and empathetic.

What’s the ROI of Customer Service AI?   

The return on investment of customer service AI should be measured primarily based on efficiency gains and cost reductions. To quantify ROI, businesses can measure key indicators such as reduced response times, decreased operational costs of contact centers, improved customer satisfaction scores and revenue growth resulting from AI-enhanced services.

For instance, the cost of implementing an AI chatbot using open-source models can be compared with the expenses incurred by routing customer inquiries through traditional call centers. Establishing this baseline helps assess the financial impact of AI deployments on customer service operations.

To solidify understanding of ROI before scaling AI deployments, companies can consider a pilot period. For example, by redirecting 20% of call center traffic to AI solutions for one or two quarters and closely monitoring the outcomes, businesses can obtain concrete data on performance improvements and cost savings. This approach helps prove ROI and informs decisions for further investment.

Businesses across industries are using AI for customer service and measuring their success:

Retailers Reduce Call Center Load 

Modern shoppers expect smooth, personalized and efficient shopping experiences, whether in store or on an e-commerce site. Customers of all generations continue prioritizing live human support, while also desiring the option to use different channels. But complex customer issues coming from a diverse customer base can make it difficult for support agents to quickly comprehend and resolve incoming requests.

To address these challenges, many retailers are turning to conversational AI and AI-based call routing. According to NVIDIA’s 2024 State of AI in Retail and CPG report, nearly 70% of retailers believe that AI has already boosted their annual revenue.

CP All, Thailand’s sole licensed operator for 7-Eleven convenience stores, has implemented conversational AI chatbots in its call centers, which rack up more than 250,000 calls per day. Training the bots presented unique challenges due to the complexities of the Thai language, which includes 21 consonants, 18 pure vowels, three diphthongs and five tones.

To manage this, CP All used NVIDIA NeMo, a framework designed for building, training and fine-tuning GPU-accelerated speech and natural language understanding models. With automatic speech recognition and NLP models powered by NVIDIA technologies, CP All’s chatbot achieved a 97% accuracy rate in understanding spoken Thai.

With the conversational chatbot handling a significant number of customer conversations, the call load on human agents was reduced by 60%. This allowed customer service teams to focus on more complex tasks. The chatbot also helped reduce wait times and provided quicker, more accurate responses, leading to higher customer satisfaction levels.

With AI-powered support experiences, retailers can enhance customer retention, strengthen brand loyalty and boost sales.

Telecommunications Providers Automate Network Troubleshooting

Telecommunications providers are challenged to address complex network issues while adhering to service-level agreements with end customers for network uptime. Maintaining network performance requires rapid troubleshooting of network devices, pinpointing root causes and resolving difficulties at network operations centers.

With its abilities to analyze vast amounts of data, troubleshoot network problems autonomously and execute numerous tasks simultaneously, generative AI is ideal for network operations centers. According to an IDC survey, 73% of global telcos have prioritized AI and machine learning investments for operational support as their top transformation initiative, underscoring the industry’s shift toward AI and advanced technologies.

Infosys, a leader in next-generation digital services and consulting, has built AI-driven solutions to help its telco partners overcome customer service challenges. Using NVIDIA NIM microservices and RAG, Infosys developed an AI chatbot to support network troubleshooting.

By offering quick access to essential, vendor-agnostic router commands for diagnostics and monitoring, the generative AI-powered chatbot significantly reduces network resolution times, enhancing overall customer support experiences.

To ensure accuracy and contextual responses, Infosys trained the generative AI solution on telecom device-specific manuals, training documents and troubleshooting guides. Using NVIDIA NeMo Retriever to query enterprise data, Infosys achieved 90% accuracy for its LLM output. By fine-tuning and deploying models with NVIDIA technologies, Infosys achieved a latency of 0.9 seconds, a 61% reduction compared with its baseline model. The RAG-enabled chatbot powered by NeMo Retriever also attained 92% accuracy, compared with the baseline model’s 85%.

With AI tools supporting network administrators, IT teams and customer service agents, telecom providers can more efficiently identify and resolve network issues.

Financial Services Institutions Pinpoint Fraud With Ease

While customers expect anytime, anywhere banking and support, financial services require a heightened level of data sensitivity. And unlike other industries that may include one-off purchases, banking is typically based on ongoing transactions and long-term customer relationships.

At the same time, user loyalty can be fleeting, with up to 80% of banking customers willing to switch institutions for a better experience. Financial institutions must continuously improve their support experiences and update their analyses of customer needs and preferences.

Many banks are turning to AI virtual assistants that can interact directly with customers to manage inquiries, execute transactions and escalate complex issues to human customer support agents. According to NVIDIA’s 2024 State of AI in Financial Services report, more than one-fourth of survey respondents are using AI to enhance customer experiences, and 34% are exploring the use of generative AI and LLMs for customer experience and engagement.

Bunq, a European digital bank with more than 2 million customers and 8 billion euros worth of deposits, is deploying generative AI to meet user needs. With proprietary LLMs, the company built Finn, a personal AI assistant available to both customers and bank employees. Finn can answer finance-related inquiries such as “How much did I spend on groceries last month?” or “What is the name of the Indian restaurant I ate at last week?”

Plus, with a human-in-the-loop process, Finn helps employees more quickly identify fraud. By collecting and analyzing data for compliance officers to review, bunq now identifies fraud in just three to seven minutes, down from 30 minutes without Finn.

By deploying AI tools that can use data to protect customer transactions, execute banking requests and act on customer feedback, financial institutions can serve customers at a higher level, building the trust and satisfaction necessary for long-term relationships.

Healthcare and Life Sciences Organizations Overcome Staffing Shortages

In healthcare, patients need quick access to medical expertise, precise and tailored treatment options, and empathetic interactions with healthcare professionals. But with the World Health Organization estimating a 10 million personnel shortage by 2030, access to quality care could be jeopardized.

AI-powered digital healthcare assistants are helping medical institutions do more with less. With LLMs trained on specialized medical corpuses, AI copilots can save physicians and nurses hours of daily work by helping with clinical note-taking, automating order-placing for prescriptions and lab tests, and following up with after-visit patient notes.

Multimodal AI that combines language and vision models can make healthcare settings safer by extracting insights and providing summaries of image data for patient monitoring. For example, such technology can alert staff of patient fall risks and other patient room hazards.

To support healthcare professionals, Hippocratic AI has trained a generative AI healthcare agent to perform low-risk, non-diagnostic routine tasks, like reminding patients of necessary appointment prep and following up after visits to make sure medication routines are being followed and no adverse side effects are being experienced.

Hippocratic AI trained its models on evidence-based medicine and completed rigorous testing with a large group of certified nurses and doctors. The constellation architecture of the solution comprises 20 models, one of which communicates with patients while the other 19 supervise its output. The complete system contains 1.7 trillion parameters.

The possibility of every doctor and patient having their own AI-powered digital healthcare assistant means reduced clinician burnout and higher-quality medical care.

Raising the Bar for Customer Experiences With AI 

By integrating AI into customer service interactions, businesses can offer more personalized, efficient and prompt service, setting new standards for omnichannel support experiences across platforms. With AI virtual assistants that process vast amounts of data in seconds, enterprises can equip their support agents to deliver tailored responses to the complex needs of a diverse customer base.

To develop and deploy effective customer service AI, businesses can fine-tune AI models and deploy RAG solutions to meet diverse and specific needs.

NVIDIA offers a suite of tools and technologies to help enterprises get started with customer service AI.

NVIDIA NIM microservices, part of the NVIDIA AI Enterprise software platform, accelerate generative AI deployment and support various optimized AI models for seamless, scalable inference. NVIDIA NIM Agent Blueprints provide developers with packaged reference examples to build innovative solutions for customer service applications.

By taking advantage of AI development tools, enterprises can build accurate and high-speed AI applications to transform employee and customer experiences.

Learn more about improving customer service with generative AI.

Read More

Three Ways to Ride the Flywheel of Cybersecurity AI

Three Ways to Ride the Flywheel of Cybersecurity AI

The business transformations that generative AI brings come with risks that AI itself can help secure in a kind of flywheel of progress.

Companies who were quick to embrace the open internet more than 20 years ago were among the first to reap its benefits and become proficient in modern network security.

Enterprise AI is following a similar pattern today. Organizations pursuing its advances — especially with powerful generative AI capabilities — are applying those learnings to enhance their security.

For those just getting started on this journey, here are ways to address with AI three of the top security threats industry experts have identified for large language models (LLMs).

AI Guardrails Prevent Prompt Injections

Generative AI services are subject to attacks from malicious prompts designed to disrupt the LLM behind it or gain access to its data. As the report cited above notes, “Direct injections overwrite system prompts, while indirect ones manipulate inputs from external sources.”

The best antidote for prompt injections are AI guardrails, built into or placed around LLMs. Like the metal safety barriers and concrete curbs on the road, AI guardrails keep LLM applications on track and on topic.

The industry has delivered and continues to work on solutions in this area. For example, NVIDIA NeMo Guardrails software lets developers protect the trustworthiness, safety and security of generative AI services.

AI Detects and Protects Sensitive Data

The responses LLMs give to prompts can on occasion reveal sensitive information. With multifactor authentication and other best practices, credentials are becoming increasingly complex, widening the scope of what’s considered sensitive data.

To guard against disclosures, all sensitive information should be carefully removed or obscured from AI training data. Given the size of datasets used in training, it’s hard for humans — but easy for AI models — to ensure a data sanitation process is effective.

An AI model trained to detect and obfuscate sensitive information can help safeguard against revealing anything confidential that was inadvertently left in an LLM’s training data.

Using NVIDIA Morpheus, an AI framework for building cybersecurity applications, enterprises can create AI models and accelerated pipelines that find and protect sensitive information on their networks. Morpheus lets AI do what no human using traditional rule-based analytics can: track and analyze the massive data flows on an entire corporate network.

AI Can Help Reinforce Access Control

Finally, hackers may try to use LLMs to get access control over an organization’s assets. So, businesses need to prevent their generative AI services from exceeding their level of authority.

The best defense against this risk is using the best practices of security-by-design. Specifically, grant an LLM the least privileges and continuously evaluate those permissions, so it can only access the tools and data it needs to perform its intended functions. This simple, standard approach is probably all most users need in this case.

However, AI can also assist in providing access controls for LLMs. A separate inline model can be trained to detect privilege escalation by evaluating an LLM’s outputs.

Start the Journey to Cybersecurity AI

No one technique is a silver bullet; security continues to be about evolving measures and countermeasures. Those who do best on that journey make use of the latest tools and technologies.

To secure AI, organizations need to be familiar with it, and the best way to do that is by deploying it in meaningful use cases. NVIDIA and its partners can help with full-stack solutions in AI, cybersecurity and cybersecurity AI.

Looking ahead, AI and cybersecurity will be tightly linked in a kind of virtuous cycle, a flywheel of progress where each makes the other better. Ultimately, users will come to trust it as just another form of automation.

Learn more about NVIDIA’s cybersecurity AI platform and how it’s being put to use. And listen to cybersecurity talks from experts at the NVIDIA AI Summit in October.

Read More

19 New Games to Drop for GeForce NOW in September

19 New Games to Drop for GeForce NOW in September

Fall will be here soon, so leaf it to GeForce NOW to bring the games, with 19 joining the cloud in September.

Get started with the seven games available to stream this week, and a day one PC Game Pass title, Age of Mythology: Retold, from the creators of the award-winning Age of Empires franchise World’s Edge, Forgotten Empires and Xbox Game Studios.

The Open Beta for Call of Duty: Black Ops 6 runs Sept. 6-9, offering everyone a chance to experience game-changing innovations before the title officially launches on Oct. 25. Members can stream the Battle.net and Steam versions of the Open Beta instantly this week on GeForce NOW to jump right into the action.

Where Myths and Heroes Collide

Age of Mythology on GeForce NOW
A vast, mythical world to explore with friends? Say no more…

Age of Mythology: Retold revitalizes the classic real-time strategy game by merging its beloved elements with modern visuals.

Get immersed in a mythical universe, command legendary units and call upon the powers of various gods from the Atlantean, Greek, Egyptian and Norse pantheons. The single-player experience features a 50-mission campaign, including engaging battles and myth exploration in iconic locations like Troy and Midgard. Challenge friends in head-to-head matches or cooperate to take on advanced, AI-powered opponents.

Call upon the gods from the cloud with an Ultimate and Priority membership and stream the game across devices. Games update automatically in the cloud, so members can dive into the action without having to wait.

September Gets Better With New Games

The Casting of Frank Stone on GeForce NOW
Choose your fate.

Catch the storytelling prowess of Supermassive Games in The Casting of Frank Stone, available to stream this week for members. The shadow of Frank Stone looms over Cedar Hills, a town forever altered by his violent past. Delve into the mystery of Cedar Hills alongside an original cast of characters bound together on a twisted journey where nothing is quite as it seems. Every decision shapes the story and impacts the fate of the characters.

In addition, members can look for the following games this week:

  • The Casting of Frank Stone (New release on Steam, Sept. 3)
  • Age of Mythology (New release on Steam and Xbox, available on PC Game Pass, Sept.4 )
  • Sniper Ghost Warrior Contracts  (New release on Epic Games Store, early access Sept. 5)
  • Warhammer 40,000: Space Marine 2 (New release on Steam, early access Sept. 5)
  • Crime Scene Cleaner (Steam)
  • FINAL FANTASY XVI Demo (Epic Games Store)
  • Sins of a Solar Empire II (Steam)

Here’s what members can expect for the rest of September:

  • Frostpunk 2 (New release on Steam and Xbox available  on PC Game Pass, Sept. 17)
  • FINAL FANTASY XVI (New release on Steam and Epic Games Store, Sept. 17)
  • The Plucky Squire (New release on Steam, Sept. 17)
  • Tiny Glade (New release on Steam, Sept. 23)
  • Disney Epic Mickey: Rebrushed (New release on Steam, Sept. 24)
  • Greedfall II: The Dying World (New release on Steam, Sept. 24)
  • Mechabellum ( Steam)
  • Blacksmith Master (New release on Steam, Sept. 26)
  • Breachway (New release on Steam, Sept. 26)
  • REKA (New release on Steam)
  • Test Drive Unlimited Solar Crown (New release on Steam)
  • Rider’s Republic (New release on PC Game Pass, Sept. 11). To begin playing, members need to activate access, and can refer to the help article for instructions.

Additions to August

In addition to the 18 games announced last month, 48 more joined the GeForce NOW library:

  • Prince of Persia: The Lost Crown (Day zero release on Steam, Aug. 8)
  • FINAL FANTASY XVI Demo (New release on Steam, Aug. 19)
  • Black Myth: Wukong (New release on Steam and Epic Games Store, Aug. 20)
  • GIGANTIC: RAMPAGE EDITION (Available on Epic Games Store, free Aug. 22)
  • Skull and Bones (New release on Steam, Aug. 22)
  • Endzone 2 (New release on Steam, Aug. 26)
  • Age of Mythology: Retold (Advanced access on Steam, Xbox, available on PC Game Pass, Aug. 27)
  • Core Keeper (New release on Xbox, available on PC Game Pass, Aug. 27)
  • Alan Wake’s American Nightmare (Xbox, available on Microsoft Store)
  • Car Manufacture (Steam)
  • Cat Quest III (Steam)
  • Commandos 3 – HD Remaster (Xbox, available on Microsoft Store)
  • Cooking Simulator (Xbox, available on PC Game Pass)
  • Crown Trick (Xbox, available on Microsoft Store)
  • Darksiders Genesis (Xbox, available on Microsoft Store)
  • Desperados III (Xbox, available on Microsoft Store)
  • The Dungeon of Naheulbeuk: The Amulet of Chaos (Xbox, available on Microsoft Store)
  • Expeditions: Rome (Xbox, available on Microsoft Store)
  • The Flame in the Flood (Xbox, available on Microsoft Store)
  • FTL: Faster Than Light (Xbox, available on Microsoft Store)
  • Genesis Noir (Xbox, available on PC Game Pass)
  • House Flipper (Xbox, available on PC Game Pass)
  • Into the Breach (Xbox, available on Microsoft Store)
  • Iron Harvest (Xbox, available on Microsoft Store)
  • The Knight Witch (Xbox, available on Microsoft Store)
  • Lightyear Frontier (Xbox, available on PC Game Pass)
  • Medieval Dynasty (Xbox, available on PC Game Pass)
  • Metro Exodus Enhanced Edition (Xbox, available on Microsoft Store)
  • My Time at Portia (Xbox, available on PC Game Pass)
  • Night in the Woods (Xbox, available on Microsoft Store )
  • Offworld Trading Company (Xbox, available on PC Game Pass)
  • Orwell: Keeping an Eye on You (Xbox, available on Microsoft Store)
  • Outlast 2 (Xbox, available on Microsoft Store)
  • Project Winter (Xbox, available on Microsoft Store)
  • Psychonauts (Steam)
  • Psychonauts 2 (Steam and Xbox, available on PC Game Pass)
  • Shadow Tactics: Blades of the Shogun (Xbox, available on Microsoft Store)
  • Sid Meier’s Civilization VI (Steam, Epic Games Store and Xbox, available on the Microsoft store)
  • Sid Meier’s Civilization V (Steam)
  • Sid Meier’s Civilization IV (Steam)
  • Sid Meier’s Civilization: Beyond Earth (Steam)
  • Spirit of the North (Xbox, available on PC Game Pass)
  • SteamWorld Heist II (Steam, Xbox, available on Microsoft Store)
  • Visions of Mana Demo (Steam)
  • This War of Mine (Xbox, available on PC Game Pass)
  • We Were Here Too (Steam)
  • Wreckfest (Xbox, available on PC Game Pass)
  • Yoku’s Island Express (Xbox, available on Microsoft Store)

Breachway was originally included in the August games list, but the launch date was moved to September by the developer. Stay tuned to GFN Thursday for updates.

Starting in October, members will no longer see the option of launching “Epic Games Store” versions of games published by Ubisoft on GeForce NOW.  To play these supported games, members can select the “Ubisoft Connect” option on GeForce NOW and will need to connect their Ubisoft Connect and Epic game store accounts the first time they play the game. Check out more details.

What are you planning to play this weekend? Let us know on X or in the comments below.

Read More

Volvo Cars EX90 SUV Rolls Out, Built on NVIDIA Accelerated Computing and AI

Volvo Cars EX90 SUV Rolls Out, Built on NVIDIA Accelerated Computing and AI

Volvo Cars’ new, fully electric EX90 is making its way from the automaker’s assembly line in Charleston, South Carolina, to dealerships around the U.S.

To ensure its customers benefit from future improvements and advanced safety features and capabilities, the Volvo EX90 is built on the NVIDIA DRIVE Orin system-on-a-chip (SoC), capable of more than 250 trillion operations per second (TOPS).

Running NVIDIA DriveOS, the system delivers high-performance processing in a package that’s literally the size of a postage stamp. This core compute architecture handles all vehicle functions, ranging from enabling safety and driving assistance features to supporting the development of autonomous driving capabilities — all while delivering an excellent user experience.

The state-of-the-art SUV is an intelligent mobile device on wheels, equipped with the automaker’s most advanced sensor suite to date, including radar, lidar, cameras, ultrasonic sensors and more. NVIDIA DRIVE Orin enables real-time, redundant and advanced 360-degree surround-sensor data processing, supporting Volvo Cars’ unwavering commitment to safety.

DRIVE Thor Powering the Next Generation of Volvo Cars

Setting its sights on the future, Volvo Cars also announced plans to migrate to the next-generation NVIDIA DRIVE Thor SoC for its upcoming fleets.

Before the end of the decade, Volvo Cars will move to NVIDIA DRIVE Thor, which boasts 1,000 TOPS —  quadrupling the processing power of a single DRIVE Orin SoC, while improving energy efficiency sevenfold.

The next-generation DRIVE Thor autonomous vehicle processor incorporates the latest NVIDIA Blackwell GPU architecture, helping unlock a new realm of possibilities and capabilities both in and around the car. This advanced platform will facilitate the deployment of safe advanced driver-assistance system (ADAS) and self-driving features — and pave the way for a new era of in-vehicle experiences powered by generative AI.

Highlighting Volvo Cars’ leap to NVIDIA’s next-generation processor, Volvo Cars CEO Jim Rowan noted, “With NVIDIA DRIVE Thor in our future cars, our in-house developed software becomes more scalable across our product lineup, and it helps us to continue to improve the safety in our cars, deliver best-in-class customer experiences — and increase our margins.”

Zenseact Strategic Investment in NVIDIA Technology

Volvo Cars and its software subsidiary, Zenseact, are also investing in NVIDIA DGX systems for AI model training in the cloud, helping ensure that future fleets are equipped with the most advanced and well-tested AI-powered safety features.

Managing the massive amount of data needed to safely train the next generation of AI-enabled vehicles demands data-center-level compute and infrastructure.

NVIDIA DGX systems provide the computational performance essential for training AI models with unprecedented efficiency. Transportation companies use them to speed autonomous technology development in a cost-effective, enterprise-ready and easy-to-deploy way.

Volvo Cars and Zenseact’s AI training hub, based in the Nordics, will use the systems to help catalyze multiple facets of ADAS and autonomous driving software development. A key benefit is the optimization of the data annotation process — a traditionally time-consuming task involving the identification and labeling of objects for classification and recognition.

The cluster of DGX systems will also enable processing of the required data for safety assurance, delivering twice the performance and potentially halving time to market.

“The NVIDIA DGX AI supercomputer will supercharge our AI training capabilities, making this in-house AI training center one of the largest in the Nordics,” said Anders Bell, chief engineering and technology officer at Volvo Cars. “By leveraging NVIDIA technology and setting up the data center, we pave a quick path to high-performing AI, ultimately helping make our products safer and better.”

With NVIDIA technology as the AI brain inside the car and in the cloud, Volvo Cars and Zenseact can deliver safe vehicles that allow customers to drive with peace of mind, wherever the road may lead.

Read More

Manufacturing Intelligence: Deltia AI Delivers Assembly Line Gains With NVIDIA Metropolis and Jetson

Manufacturing Intelligence: Deltia AI Delivers Assembly Line Gains With NVIDIA Metropolis and Jetson

It all started at Berlin’s Merantix venture studio in 2022, when Silviu Homoceanu and Max Fischer agreed AI could play a big role in improving manufacturing. So the two started Deltia.ai, which runs NVIDIA Metropolis vision AI on NVIDIA Jetson AGX Orin modules to measure and help optimize assembly line processes.

Hailing from AI backgrounds, Homoceanu had previously led self-driving software at Volkswagen, while Fischer had founded a startup that helped digitize more than 40 factories.

Deltia, an NVIDIA Metropolis partner, estimates that today its software platform can provide as much as a 20% performance jump on production lines for its customers.

Customers using the Deltia platform include Viessman, a maker of heating pumps, and industrial electronics company ABB, among others. Viessman is running Deltia at 15 stations, and plans to add it to even more lines in the future. Once all lines are linked to Deltia, production managers say that they expect up to a 50% increase in overall productivity.

“We provide our users with a dashboard that is basically the Google Analytics of manufacturing,” said Homoceanu, Deltia’s CTO. “We install these sensors, and two weeks later they get the keys to this dashboard, and the magic happens in the background.”

Capturing Assembly Line Insights for Digital Transformations  

Once the cameras start gathering data on assembly lines, Deltia uses that information to train models on NVIDIA-accelerated computing that can monitor activities on the line. It then uses those models deployed on Jetson AGX Orin modules at the edge to gather operational insights.

These Jetson-based systems continuously monitor the camera streams and extract metadata. This metadata identifies the exact points in time when a product arrives at a specific station, when it is being worked on and when it leaves the station. This digital information is available to line managers and process improvement personnel via Deltia’s custom dashboard, helping to identify bottlenecks and accelerate line output.

“TensorRT helps us compress complex AI models to a level where we can serve, in an economical fashion, multiple stations with a single Jetson device,” said Homoceanu.

Tapping Into Jetson Orin for Edge AI-Based Customer Insights 

Beyond identifying quick optimizations, Deltia’s analytics help visualize production flows hour-by-hour. This means that Deltia can send rapid alerts when production slips away from predicted target ranges, and it can continuously track output, cycle times and other critical key performance indicators.

It also helps map how processes flow throughout a factory floor, and it suggests improvements for things like walking routes and shop-floor layouts. One of Deltia’s customers used the platform to identify that materials shelves were too far from workers, which caused unnecessarily long cycle times and limited production. Once the shelves were moved, production went up more than 30%.

Deltia’s applications extend beyond process improvements. The platform can be used to help monitor machine states at a granular level, assisting to predict when machine parts are worn out and recommend preemptive replacements, saving time and money down the line. The platform can also suggest optimizations for energy usage, saving on operational costs and reducing maintenance expenses.

“Our vision is to empower manufacturers with the tools to achieve unprecedented efficiency,” said Fischer, CEO of Deltia.ai. “Seeing our customers experience as much as a 30% increase in productivity with our vision models running on NVIDIA Jetson Orin validates the transformative potential of our technology.”

Deltia is a member of the NVIDIA Inception program for cutting-edge startups.

Learn more about NVIDIA Metropolis and NVIDIA Jetson.

Read More

Hammer Time: Machina Labs’ Edward Mehr on Autonomous Blacksmith Bots and More

Hammer Time: Machina Labs’ Edward Mehr on Autonomous Blacksmith Bots and More

Edward Mehr works where AI meets the anvil.  The company he cofounded, Machina Labs, blends the latest advancements in robotics and AI to form metal into countless shapes for use in defense, aerospace, and more. The company’s applications accelerate design and innovation, enabling rapid iteration and production in days instead of the months required by conventional processes. NVIDIA AI Podcast host Noah Kravitz speaks with Mehr, CEO of Machina Labs, on how the company uses AI to develop the first-ever robotic blacksmith. Its Robotic Craftsman platform integrates seven-axis robots that can shape, scan, trim and drill a wide range of materials — all capabilities made possible through AI.

Time Stamps

1:12: What does Machina Labs do?
3:37: Mehr’s background
8:45: Machina Lab’s manufacturing platform, the Robotic Craftsman
10:39: Machina Lab’s history and how AI plays a role in its work
15:07: The versatility of the Robotic Craftsman
21:48: How the Robotic Craftsman was trained in simulations using AI-generated manufacturing data
28:10: From factory to household — Mehr’s insight on the future of robotic applications

You Might Also Like:

How Two Stanford Students Are Building Robots for Handling Household Chores – Ep. 224

BEHAVIOR-1K is a robot that can perform 1,000 household chores, including picking up fallen objects or cooking. In this episode, Stanford Ph.D. students Chengshu Eric Li and Josiah David Wong discuss the breakthroughs and challenges they experienced while developing BEHAVIOR-1K.

Hittin’ the Sim: NVIDIA’s Matt Cragun on Conditioning Autonomous Vehicles in Simulation – Ep. 185

NVIDIA DRIVE Sim, built on Omniverse, provides a virtual proving ground for AV testing and validation. It’s a highly accurate simulation platform that can enable groundbreaking tools — including synthetic data and neural reconstruction — to build digital twins of driving environments. In this episode, Matt Cragun, senior product manager for AV simulation at NVIDIA, details the origins and inner workings of DRIVE Sim.

NVIDIA’s Liila Torabi Talks the New Era of Robotics Through Isaac Sim – Ep. 147

Robotics are not just limited to the assembly line. Liila Torabi, senior product manager for NVIDIA Isaac Sim, works on making the next generation of robotics possible. In this episode, she discusses the new era of robotics — one driven by making robots smarter through AI.

Art(ificial) Intelligence: Pindar Van Arman Builds Robots That Paint – Ep. 129

Pindar Van Arman is an American artist and roboticist, designing painting robots that explore the intersection of human and computational creativity. He’s built multiple artificially creative robots, the most famous of which being Cloud Painter, which was awarded first place at Robotart 2018. Tune in to hear how Van Arman deconstructs his own artistic process and teaches it to robots.

Subscribe to the AI Podcast

Get the AI Podcast through iTunes, Google Play, Amazon Music, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn.

Make the AI Podcast better: Have a few minutes to spare? Fill out this listener survey.

Read More