NVIDIA and Microsoft Give AI Startups a Double Dose of Acceleration

NVIDIA is expanding its collaboration with Microsoft to support global AI startups across industries — with an initial focus on healthcare and life sciences companies.

Announced today at the HLTH healthcare innovation conference, the initiative brings together NVIDIA Inception, the global program for cutting-edge startups, and Microsoft for Startups to broaden innovators’ access to accelerated computing through cloud credits, software for AI development and the support of technical and business experts.

The first phase will focus on high-potential digital health and life sciences companies that are part of both programs. Future phases will focus on startups in other industries.

Microsoft for Startups will provide each company with $150,000 of Microsoft Azure credits to access leading AI models, up to $200,000 worth of Microsoft business tools, and priority access to its Pegasus Program for go-to-market support.

NVIDIA Inception will provide 10,000 ai.nvidia.com inference credits to run GPU-optimized AI models through NVIDIA-managed serverless APIs; preferred pricing on NVIDIA AI Enterprise, which includes the full suite of NVIDIA Clara healthcare and life sciences computing platforms, software and services; early access to new NVIDIA healthcare offerings; and opportunities to connect with investors through the Inception VC Alliance and with industry partners through the Inception Alliance for Healthcare.
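
For a sense of what those inference credits look like in practice, below is a minimal sketch of calling a GPU-optimized model through an OpenAI-compatible serverless endpoint. The base URL, model name and NVIDIA_API_KEY environment variable are illustrative assumptions rather than details confirmed by the program.

```python
# Hedged sketch: invoking a hosted, GPU-optimized model via an OpenAI-compatible
# serverless API. Endpoint, model name and key variable are assumptions.
import os
import requests

resp = requests.post(
    "https://integrate.api.nvidia.com/v1/chat/completions",  # assumed endpoint
    headers={"Authorization": f"Bearer {os.environ['NVIDIA_API_KEY']}"},
    json={
        "model": "meta/llama-3.1-8b-instruct",  # illustrative model name
        "messages": [{"role": "user", "content": "Draft a plain-language summary of this radiology report: ..."}],
        "max_tokens": 256,
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```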

Both companies will provide the selected startups with dedicated technical support and hands-on workshops to develop digital health applications with the NVIDIA technology stack on Azure.

Supporting Startups at Every Stage

Hundreds of companies are already part of both NVIDIA Inception and Microsoft for Startups, using the combination of accelerated computing infrastructure and cutting-edge AI to advance their work.

Artisight, for example, is a smart hospital startup using AI to improve operational efficiency, documentation and care coordination in order to reduce the administrative burden on clinical staff and improve the patient experience. Its smart hospital network includes over 2,000 cameras and microphones at Chicago-based Northwestern Medicine and over 200 other hospitals.

The company uses speech recognition models that can automate patient check-in with voice-enabled kiosks and computer vision models that can alert nurses when a patient is at risk of falling. Its products use software including NVIDIA Riva for conversational AI, NVIDIA DeepStream for vision AI and NVIDIA Triton Inference Server to simplify AI inference in production.
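
To illustrate how an application typically talks to Triton in production, here is a minimal client-side sketch using Triton’s open-source Python HTTP client. The server address, model name and tensor names are hypothetical placeholders, not Artisight’s actual deployment.

```python
# Hedged sketch: querying a model served by NVIDIA Triton Inference Server.
# URL, model name and tensor names are hypothetical placeholders.
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

frame = np.random.rand(1, 3, 224, 224).astype(np.float32)  # one preprocessed video frame, NCHW

inputs = [httpclient.InferInput("INPUT__0", list(frame.shape), "FP32")]
inputs[0].set_data_from_numpy(frame)
outputs = [httpclient.InferRequestedOutput("OUTPUT__0")]

result = client.infer(model_name="fall_risk_detector", inputs=inputs, outputs=outputs)
print(result.as_numpy("OUTPUT__0"))  # e.g., per-class risk scores
```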

“Access to the latest AI technologies is critical to developing smart hospital solutions that are reliable enough to be deployed in real-world clinical settings,” said Andrew Gostine, founder and CEO of Artisight. “The support of NVIDIA Inception and Microsoft for Startups has enabled our company to scale our products to help top U.S. hospitals care for thousands of patients.”

Another company, Pangaea Data, is helping healthcare organizations and pharmaceutical companies identify patients who remain undertreated or untreated despite information already available in their existing medical records. The company’s PALLUX platform supports clinicians at the point of care by finding more patients for screening and treatment. Deployed with NVIDIA GPUs on Azure’s HIPAA-compliant, secure cloud environment, PALLUX uses the NVIDIA FLARE federated learning framework to preserve patient privacy while improving health outcomes.
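
Federated learning keeps each provider’s records on-site and shares only model updates, which a framework such as NVIDIA FLARE coordinates across participants. The NumPy sketch below shows the core weight-averaging idea conceptually; it is not the FLARE API, and the sample counts are made up.

```python
# Conceptual sketch of federated averaging, the pattern a federated learning
# framework coordinates. Plain NumPy for illustration; not the NVIDIA FLARE API.
import numpy as np

def federated_average(site_weights, site_sample_counts):
    """Average locally trained weights, weighting each site by its sample count."""
    total = sum(site_sample_counts)
    stacked = np.stack(site_weights)                    # (num_sites, num_params)
    coeffs = np.array(site_sample_counts, dtype=float) / total
    return (coeffs[:, None] * stacked).sum(axis=0)      # (num_params,)

# Toy example: three hospitals, each with a locally trained weight vector.
weights = [np.random.rand(10) for _ in range(3)]
counts = [1200, 800, 500]                               # hypothetical patients per site
print(federated_average(weights, counts).shape)         # (10,)
```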

PALLUX helped one healthcare provider find 6x more cancer patients with cachexia — a condition characterized by loss of weight and muscle mass — for treatment and clinical trials. Pangaea Data’s platform achieved 90% accuracy and was deployed on the provider’s existing infrastructure within 12 weeks.

“By building our platform on a trusted cloud environment, we’re offering healthcare providers and pharmaceutical companies a solution to uncover insights from existing health records and realize the true promise of precision medicine and preventative healthcare,” said Pangaea Data CEO Vibhor Gupta. “Microsoft and NVIDIA have supported our work with powerful virtual machines and AI software, enabling us to focus on advancing our platform, rather than infrastructure management.”

Other startups participating in both programs and using NVIDIA GPUs on Azure include: 

  • Artificial, a lab orchestration startup that enables researchers to digitize end-to-end scientific workflows with AI tools that optimize scheduling, automate data entry tasks and guide scientists in real time using virtual assistants. The company is exploring the use of NVIDIA BioNeMo, an AI platform for drug discovery.
  • BeeKeeperAI, which enables secure computing on sensitive data, including regulated data that can’t be anonymized or de-identified. Its EscrowAI platform integrates trusted execution environments with confidential computing and other privacy-enhancing technologies — including NVIDIA H100 Tensor Core GPUs — to meet data protection requirements and protect data sovereignty, individual privacy and intellectual property.
  • Niramai, a startup that has developed an AI-powered medical device for early breast cancer detection. Its Thermalytix solution is a low-cost, portable screening tool that has been used to help screen over 250,000 women in 18 countries.

Building on a Trove of Healthcare Resources

Microsoft earlier this year announced a collaboration with NVIDIA to boost healthcare and life sciences organizations with generative AI, accelerated computing and the cloud.

Aimed at supporting projects in clinical research, drug discovery, medical imaging and precision medicine, the collaboration combined Microsoft Azure with NVIDIA DGX Cloud, an end-to-end, scalable AI platform for developers.

It also provides users of NVIDIA DGX Cloud on Azure access to NVIDIA Clara, including domain-specific resources such as NVIDIA BioNeMo, a generative AI platform for drug discovery; NVIDIA MONAI, a suite of enterprise-grade AI for medical imaging; and NVIDIA Parabricks, a software suite designed to accelerate processing of sequencing data for genomics applications.
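
As one concrete example of these toolkits, the sketch below uses MONAI’s open-source Python API to build a small 3D segmentation network of the kind used for medical imaging. The channel sizes and input shape are arbitrary placeholders, not a recommended configuration.

```python
# Hedged sketch with MONAI's open-source Python API: a small 3D U-Net for
# volumetric segmentation. Layer sizes and input shape are placeholders.
import torch
from monai.networks.nets import UNet

model = UNet(
    spatial_dims=3,              # volumetric data such as CT or MRI
    in_channels=1,               # single imaging modality
    out_channels=2,              # background vs. structure of interest
    channels=(16, 32, 64, 128),
    strides=(2, 2, 2),
)

volume = torch.randn(1, 1, 96, 96, 96)   # one synthetic volume
with torch.no_grad():
    logits = model(volume)
print(logits.shape)                      # torch.Size([1, 2, 96, 96, 96])
```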

Join the Microsoft for Startups Founders Hub and the NVIDIA Inception program. 

Waterways Wonder: Clearbot Autonomously Cleans Waters With Energy-Efficient AI

What started as two classmates seeking a free graduation trip to Bali subsidized by a university project ended up as an AI-driven sea-cleaning boat prototype built of empty water bottles, hobbyist helicopter blades and a GoPro camera.

University of Hong Kong grads Sidhant Gupta and Utkarsh Goel have since made a splash with their Clearbot autonomous trash-collection boats enabled by NVIDIA Jetson.

“We came up with the idea to clean the water there because there are a lot of dirty beaches, and the local community depends on them to be clean for their tourism business,” said Gupta, noting the same is true for tourist regions of Hong Kong and India, where the company now operates.

Before launching Clearbot in 2021, the university friends posted their proof-of-concept waste-collection boat on a website and then largely forgot about it as they started jobs after graduation, Gupta said. A year later, a marine construction company proposed a water cleanup project, and the pair developed their prototype around the effort to remove three tons of trash daily from a Hong Kong marine construction site.

“They were using a big boat and a crew of three to four people every day, at a cost of about $1,000 per day — that’s when we realized we can build this and do it better and at lower cost,” said Gupta.

Plastic makes up about 85% of ocean litter, with an estimated 11 million metric tons entering oceans every year, according to the United Nations Environment Programme. Clearbot aims to remove waste from waterways before it gets into oceans.

Cleaning Waters With Energy-Efficient Jetson Xavier

Clearbot, based in Hong Kong and India, has 24 employees developing and deploying its water-cleaning electric-powered boats that can self-dock at solar charging stations.

The ocean vessels, ranging in length from 10 to 16 feet, have two cameras — one for navigation and another that identifies the waste the boats have scooped up. The founders trained waste-detection models on cloud and desktop NVIDIA GPUs using images they captured in their early days, and now draw on large image libraries gathered at cleanup sites. They’ve also trained models that enable the Clearbot to autonomously navigate away from obstacles.

The energy-efficient Jetson Xavier NX allows the water-cleaning boats — propelled by battery-driven motors — to collect for eight hours at a time before returning to recharge.   
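
As a rough sketch of what an onboard detection loop like this can look like, the code below grabs a camera frame and runs a generic, pretrained torchvision detector on the GPU. It is a conceptual stand-in for Clearbot’s proprietary waste-detection models; the camera index and confidence threshold are arbitrary.

```python
# Conceptual sketch of an onboard waste-detection step (a generic stand-in for
# Clearbot's proprietary models): capture a frame, run a pretrained detector on GPU.
import cv2
import torch
import torchvision

device = "cuda" if torch.cuda.is_available() else "cpu"
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval().to(device)

cap = cv2.VideoCapture(0)              # waste-identification camera (index is arbitrary)
ret, frame = cap.read()
if ret:
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    tensor = torch.from_numpy(rgb).permute(2, 0, 1).float().div(255).to(device)
    with torch.no_grad():
        detections = model([tensor])[0]
    keep = detections["scores"] > 0.5  # arbitrary confidence threshold
    print(detections["boxes"][keep], detections["labels"][keep])
cap.release()
```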

Harbors and other waterways frequented by tourists and businesses often rely on diesel-powered boats with workers using nets to remove garbage, said Gupta. He estimates that a crew of 50 people in such scenarios can traditionally run about 15 or 20 boats; with Clearbot, the same crew can run about 150 boats, boosting waste intake.

“We believe that humanity’s relationship with the ocean is sort of broken — the question is can we make that better and is there a better future outcome?” said Gupta. “We can do it 100% emissions-free, so you’re not creating pollution while you’re cleaning pollution.” 

Customers Harnessing Clearbot for Environmental Benefits

Kingspan, a maker of building materials, is working with Clearbot to clean up trash and oil in rivers and lakes in Nongstoin, India. So far, the work has resulted in the removal of 1.2 tons of waste per month in the area. 

Umiam Lake in Meghalaya, India, has long been a tourist destination and a place for fishing. However, it has become so polluted that areas of the water’s surface aren’t visible beneath the floating trash.

The region’s leadership is working with Clearbot in a project with the University of California Berkeley Haas School of Business to help remove the trash from the lake. Since the program began three months ago, Clearbot has collected 15 tons of waste.

Mitigating Environmental Impacts With Clearbot Data 

Clearbot has expanded its services beyond trash collection to address environmental issues more broadly. The company is now assisting in marine pollution control for sewage, oil, gas and other chemical spills as well as undersea inspections for dredging projects, examining algae growth and many other areas where its autonomous boats can capture data.

Clearbot’s founders didn’t foresee that the data they gather about garbage collection and other environmental pollutants could feed into mitigation strategies. The images the boats collect are geotagged, so Clearbot’s software dashboard is a good starting point for anyone trying to trace the source of a pollution problem.

For example, if there’s a concentration of plastic bottle waste in a particular area, and it’s of a particular type, local agencies could track back to where it’s coming from. This could allow local governments to mitigate the waste by reaching out to the polluter to put a stop to the activity that is causing it, said Gupta.

“Let’s say I’m a municipality and I want to ban plastic bags in my area — you need the NGOs, the governments and the change makers to acquire the data to back their justifications for why they want to close down the plastic plant up the stream,” said Gupta. “That data is being generated on board your NVIDIA Jetson Xavier.”

Learn about NVIDIA Jetson Xavier and Earth-2.

Sustainable Manufacturing and Design: How Digital Twins Are Driving Efficiency and Cutting Emissions

Improving the sustainability of manufacturing involves optimizing entire product lifecycles — from material sourcing and transportation to design, production, distribution and end-of-life disposal.

According to the International Energy Agency, reducing the carbon footprint of industrial production by just 1% could save 90 million tons of CO₂ emissions annually. That’s equivalent to taking more than 20 million gasoline-powered cars off the road each year.
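
As a quick back-of-envelope check on that comparison, assuming a round figure of roughly 4.5 metric tons of CO₂ per gasoline-powered car per year (an approximation in the range typically used for such equivalences):

```python
# Back-of-envelope check of the car equivalence, assuming ~4.5 metric tons of
# CO2 per gasoline-powered car per year (an approximation, not an IEA figure).
annual_savings_tons = 90_000_000
tons_per_car_per_year = 4.5
print(annual_savings_tons / tons_per_car_per_year)  # ~20,000,000 cars
```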

Technologies such as digital twins and accelerated computing are enabling manufacturers to reduce emissions, enhance energy efficiency and meet the growing demand for environmentally conscious production.

Siemens and NVIDIA are at the forefront of developing technologies that help customers achieve their sustainability goals and improve production processes.

Key Challenges in Sustainable Manufacturing

Balancing sustainability with business objectives like profitability remains a top concern for manufacturers. A study by Ernst & Young in 2022 found that digital twins can reduce construction costs by up to 35%, underscoring the close link between resource consumption and construction expenses.

Yet, one of the biggest challenges in driving sustainable manufacturing and reducing overhead is the presence of silos between departments, different plants within the same organization and across production teams. These silos arise from a variety of issues, including conflicting priorities and incentives, a lack of common energy-efficiency metrics and language, and the need for new skills and solutions to bridge these gaps.

Data management also presents a hurdle, with many manufacturers struggling to turn vast amounts of data into actionable insights — particularly those that can impact sustainability goals.

According to a case study by The Manufacturer, a quarter of respondents surveyed acknowledged that their data shortcomings negatively impact energy efficiency and environmental sustainability, with nearly a third reporting that data is siloed to local use cases.

Addressing these challenges requires innovative approaches that break down barriers and use data to drive sustainability. Acting as a central hub for information, digital twin technology is proving to be an essential tool in this effort.

The Role of Digital Twins in Sustainable Manufacturing

Industrial-scale digital twins built on the NVIDIA Omniverse development platform and Universal Scene Description (OpenUSD) are transforming how manufacturers approach sustainability and scalability.

These technologies power digital twins that take engineering data from various sources and contextualize it as it would appear in the real world. This breaks down information silos and offers a holistic view that can be shared across teams — from engineering to sales and marketing.

This enhanced visibility enables engineers and designers to simulate and optimize product designs, facility layouts, energy use and manufacturing processes before physical production begins. That allows for deeper insights and collaboration by helping stakeholders make more informed decisions to improve efficiency and reduce costly errors and last-minute changes that can result in significant waste.
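
To make the idea of aggregating engineering data from many sources concrete, the sketch below uses the open-source OpenUSD Python API to compose assets exported from different tools into a single stage by reference. The file names and prim paths are hypothetical.

```python
# Hedged OpenUSD sketch: compose engineering data from different tools into one
# stage by reference. File names and prim paths are hypothetical.
from pxr import Usd, UsdGeom

stage = Usd.Stage.CreateNew("factory_twin.usda")
UsdGeom.Xform.Define(stage, "/Factory")

# Reference assets exported from separate authoring tools into the same scene.
line = stage.DefinePrim("/Factory/AssemblyLine")
line.GetReferences().AddReference("assembly_line_from_cad.usd")

cell = stage.DefinePrim("/Factory/RobotCell")
cell.GetReferences().AddReference("robot_cell_from_simulation.usd")

stage.GetRootLayer().Save()
```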

To further transform how products and experiences are designed and manufactured, Siemens is integrating NVIDIA Omniverse Cloud application programming interfaces into its Siemens Xcelerator platform, starting with Teamcenter X, its cloud-based product lifecycle management software.

These integrations enable Siemens to bring the power of photorealistic visualization to complex engineering data and workflows, allowing companies to create physics-based digital twins that help eliminate workflow waste and errors.

Siemens and NVIDIA have demonstrated how companies like HD Hyundai, a leader in sustainable ship manufacturing, are using these new capabilities to visualize and interact with complex engineering data at new levels of scale and fidelity.

HD Hyundai is unifying and visualizing complex engineering projects directly within Teamcenter X.

Physics-based digital twins are also being utilized to test and validate robotics and physical AI before they’re deployed into real-world manufacturing facilities.

Foxconn, the world’s largest electronics manufacturer, has introduced a virtual plant that pushes the boundaries of industrial automation. Foxconn’s digital twin platform, built on Omniverse and NVIDIA Isaac, replicates a new factory in the Guadalajara, Mexico, electronics hub to allow engineers to optimize processes and train robots for efficient production of NVIDIA Blackwell systems.

By simulating the factory environment, engineers can determine the best placement for heavy robotic arms, optimize movement and maximize safe operations while strategically positioning thousands of sensors and video cameras to monitor the entire production process.

Foxconn’s virtual factory uses a digital twin powered by the NVIDIA Omniverse and NVIDIA Isaac platforms to produce NVIDIA Blackwell systems.

The use of digital twins, like those in Foxconn’s virtual factory, is becoming increasingly common in industrial settings for simulation and testing.

Foxconn’s chairman, Young Liu, highlighted how the digital twin will lead to enhanced automation and efficiency, resulting in significant savings in time, cost and energy. The company expects to increase manufacturing efficiency while reducing energy consumption by over 30% annually.

By connecting data from Siemens Xcelerator software to its platform built on NVIDIA Omniverse and OpenUSD, the virtual plant allows Foxconn to design and train robots in a realistic, simulated environment, revolutionizing its approach to automation and sustainable manufacturing.

Making Every Watt Count

One consideration for industries everywhere is how the rising demand for AI is outpacing the adoption of renewable energy. This means business leaders, particularly manufacturing plant and data center operators, must maximize energy efficiency and ensure every watt is utilized effectively to balance decarbonization efforts alongside AI growth.

The best and simplest means of optimizing energy use is to accelerate every possible workload.

Using accelerated computing platforms that integrate both GPUs and CPUs, manufacturers can significantly enhance computational efficiency.

GPUs, designed to handle massively parallel calculations, can outperform traditional CPU-only systems in AI tasks, delivering up to 20x better energy efficiency for AI inference and training.

This leap in efficiency has fueled substantial gains over the past decade, enabling AI to address more complex challenges while maintaining energy-efficient operations.

Building on these advances, businesses can further reduce their environmental impact by adopting key energy management strategies. These include implementing energy demand management and efficiency measures, scaling battery storage for short-duration power outages, securing renewable energy sources for baseload electricity, using renewable fuels for backup generation and exploring innovative ideas like heat reuse.


Join the Siemens and NVIDIA session at the 7X24 Exchange 2024 Fall Conference to discover how digital twins and AI are driving sustainable solutions across data centers.


The Future of Sustainable Manufacturing: Industrial Digitalization

The next frontier in manufacturing is the convergence of the digital and physical worlds in what is known as industrial digitalization, or the “industrial metaverse.” Here, digital twins become even more immersive and interactive, allowing manufacturers to make data-driven decisions faster than ever.

“We will revolutionize how products and experiences are designed, manufactured and serviced,” said Roland Busch, president and CEO of Siemens AG. “On the path to the industrial metaverse, this next generation of industrial software enables customers to experience products as they would in the real world: in context, in stunning realism and — in the future — interact with them through natural language input.”

Leading the Way With Digital Twins and Sustainable Computing

Siemens and NVIDIA’s collaboration showcases the power of digital twins and accelerated computing for reducing the manufacturing industry’s environmental impact. By leveraging advanced simulations, AI insights and real-time data, manufacturers can reduce waste and increase energy efficiency on their path to decarbonization.

Learn more about how Siemens and NVIDIA are accelerating sustainable manufacturing.

Read about NVIDIA’s sustainable computing efforts and check out the energy-efficiency calculator to discover potential energy and emissions savings.

Get Ready to Slay: ‘Dragon Age: The Veilguard’ to Soar Into GeForce NOW at Launch

Bundle up this fall with GeForce NOW and Dragon Age: The Veilguard with a special, limited-time promotion just for members.

The highly anticipated role-playing game (RPG) leads 10 titles joining the ever-growing GeForce NOW library of over 2,000 games.

A Heroic Bundle

Dragon Age: The Veilguard on GeForce NOW
The mother of dragon bundles.

Fight for Thedas’ future at Ultimate quality this fall as new and existing members who purchase six months of GeForce NOW Ultimate can get BioWare and Electronic Arts’ epic RPG Dragon Age: The Veilguard for free when it releases on Oct. 31.

Rise as Rook, Dragon Age’s newest hero. Lead a team of seven companions, each with their own unique story, against a new evil rising in Thedas. The latest entry in the legendary Dragon Age franchise lets players customize their characters and engage with new romanceable companions whose stories unfold over time. Band together and become the Veilguard.

Ultimate members can experience BioWare’s latest entry at full GeForce quality, with support for NVIDIA DLSS 3, low-latency gameplay with NVIDIA Reflex, and enhanced image quality and immersion with ray-traced ambient occlusion and reflections. Ultimate members can also play popular PC games at up to 4K resolution with extended session lengths, even on low-spec devices.

Move fast — this bundle is only available for a limited time until Oct. 30.

Supernatural Thrills, Super New Games

New World Aeternum
Eternal adventure, instant access.

New World: Aeternum is the latest content for Amazon Games’ hit action RPG. Available for members to stream at launch this week, it offers a thrilling experience in a vast, perilous world. Explore the mysterious island, encounter diverse creatures, face supernatural dangers and uncover ancient secrets.

The game’s action-oriented combat system and wide variety of weapons allow for diverse playstyles, while the crafting and progression systems offer depth for long-term engagement. Then, grab the gaming squad for intense combat and participate in large-scale battles for territorial control.

Members can look for the following games available to stream in the cloud this week:

  • Neva (New release on Steam, Oct. 15)
  • MechWarrior 5: Clans (New release on Steam and Xbox, available on PC Game Pass, Oct. 16)
  • A Quiet Place: The Road Ahead (New release on Steam, Oct. 17)
  • Assassin’s Creed Mirage (New release on Steam, Oct. 17)
  • Artisan TD (Steam) 
  • ASKA (Steam)
  • Dungeon Tycoon (Steam)
  • South Park: The Fractured But Whole (Available on PC Game Pass, Oct. 16. Members will need to activate access)
  • Spirit City: Lofi Sessions (Steam)
  • Star Trucker (Xbox, available on Game Pass)

What are you planning to play this weekend? Let us know on X or in the comments below.

‘We Would Like to Achieve Superhuman Productivity,’ NVIDIA CEO Says as Lenovo Brings Smarter AI to Enterprises

Moving to accelerate enterprise AI innovation, NVIDIA founder and CEO Jensen Huang joined Lenovo CEO Yuanqing Yang on stage Tuesday during the keynote at Lenovo Tech World 2024.

Together, they introduced the Lenovo Hybrid AI Advantage with NVIDIA, a full-stack platform for building and deploying AI capabilities across the enterprise that drive speed, innovation and productivity.

“We would like to achieve essentially superhuman productivity,” Huang told a crowd gathered in-person and online for Lenovo’s Seattle event. “And these AI agents are helping employees across industries to be more efficient and productive.”

They also unveiled a new high-performance AI server featuring Lenovo’s Neptune liquid-cooling technology and NVIDIA Blackwell, marking a leap forward in sustainability and energy efficiency for AI systems.

“This is going to be the largest of industrial revolutions we’ve ever seen,” Huang noted, highlighting the profound impact AI is having on industries worldwide. “And we’re seeing, in the last 12 months or so, just an extraordinary awakening in every single industry, every single company, every single country.”

Lenovo Unveils Hybrid AI Advantage With NVIDIA

The Lenovo Hybrid AI Advantage with NVIDIA is built on Lenovo’s services and infrastructure capabilities with NVIDIA AI software and accelerated computing. It enables organizations to create agentic AI and physical AI that transform data into actionable business outcomes more efficiently.

“Our strategy is to combine modularization with customization so that we can respond quickly to customer needs while tailoring our solutions for them,” Yang said.

Introducing Lenovo AI Fast Start and Hybrid AI Solutions

As part of the Lenovo Hybrid AI Advantage, Lenovo has introduced Lenovo AI Fast Start, a service designed to help organizations rapidly build generative AI solutions.

Leveraging the NVIDIA AI Enterprise software platform, which includes NVIDIA NIM microservices and NVIDIA NeMo for building AI agents, Lenovo AI Fast Start enables customers to prove the business value of AI use cases across personal, enterprise, and public AI platforms within weeks.

By giving organizations access to AI assets, experts and partners, the service helps tailor solutions to meet the needs of each business, speeding up deployment at scale. The platform also includes the Lenovo AI Service Library and uses NVIDIA AI Enterprise software, including NVIDIA NIM, NVIDIA NeMo and NVIDIA NIM Agent Blueprints for agentic AI, as well as support for NVIDIA Omniverse for physical AI.

The AI Service Library offers a collection of preconfigured AI solutions that can be customized for different needs.

When these offerings are combined with NIM Agent Blueprints, businesses can rapidly develop and deploy AI agents tailored to their specific needs, accelerating AI adoption across industries.

With the addition of NeMo for large language model optimization and Omniverse for digital twin simulations, enterprises can use cutting-edge AI technologies for both agentic and physical AI applications.

Energy Efficiency and AI Infrastructure

Yang and Huang emphasized the critical need for energy-efficient AI infrastructure.

“Speed is sustainability. Speed is performance. Speed is energy efficiency,” Huang said, stressing how performance improvements directly contribute to reducing energy consumption and increasing efficiency.

“Lenovo’s 6th Generation Neptune Liquid Cooling solution supports AI computing and high-performance computing while delivering better energy efficiency,” Yang said.

By reducing data center power consumption by up to 40%, Neptune allows businesses to efficiently run accelerated AI workloads while lowering operational costs and environmental impact.

In line with this, Lenovo’s TruScale infrastructure services offer a scalable cloud-based model that gives organizations access to AI computing power without the need for large upfront investments in physical infrastructure, ensuring businesses can scale deployments as needed.

Introducing Lenovo ThinkSystem SC777 V4 Neptune With NVIDIA Blackwell

The CEOs revealed the ThinkSystem SC777 V4 Neptune server, featuring NVIDIA GB200 Grace Blackwell.

This 100% liquid-cooled system requires no fans or specialized data center air conditioning. It fits into a standard rack and runs on standard power.

“To an engineer, this is sexy,” Huang said, referring to the ThinkSystem SC777 V4 Neptune server he and Yang had just unveiled.

The SC777 includes next-gen NVIDIA NVLink interconnect, supporting NVIDIA Quantum-2 InfiniBand or Spectrum-X Ethernet networking. It also supports NVIDIA AI Enterprise software with NIM microservices.

“Our partnership spans from infrastructure to software and to service level,” Yang said. “Together, we deploy enterprise AI agents to our customers.”

MAXimum AI: RTX-Accelerated Adobe AI-Powered Features Speed Up Content Creation

At the Adobe MAX creativity conference this week, Adobe announced updates to its Adobe Creative Cloud products, including Premiere Pro and After Effects, as well as to Substance 3D products and the Adobe video ecosystem.

These apps are accelerated by NVIDIA RTX and GeForce RTX GPUs — in the cloud or running locally on RTX AI PCs and workstations.

One of the most highly anticipated features is Generative Extend in Premiere Pro (beta), which uses generative AI to seamlessly add frames to the beginning or end of a clip. Powered by the Firefly Video Model, it’s designed to be commercially safe and only trained on content Adobe has permission to use, so artists can create with confidence.

Adobe Substance 3D Collection apps offer numerous RTX-accelerated features for 3D content creation, including ray tracing, AI delighting and upscaling, and image-to-material workflows powered by Adobe Firefly.

Substance 3D Viewer, entering open beta at Adobe MAX, is designed to unlock 3D in 2D design workflows by allowing 3D files to be opened, viewed and used across design teams. This will improve interoperability with other RTX-accelerated Adobe apps like Photoshop.

Adobe Firefly integrations have also been added to Substance 3D Collection apps, including Text to Texture, Text to Pattern and Image to Texture tools in Substance 3D Sampler, as well as Generative Background in Substance 3D Stager, to further enhance 3D content creation with generative AI.

The October NVIDIA Studio Driver, designed to optimize creative apps, will be available for download tomorrow. For automatic Studio Driver notifications, as well as easy access to apps like NVIDIA Broadcast, download the NVIDIA app beta.

Video Editing Evolved

Adobe Premiere Pro has transformed video editing workflows over the last four years with features like Auto Reframe and Scene Edit Detection.

The recently launched GPU-accelerated Enhance Speech, AI Audio Category Tagging and Filler Word Detection features allow editors to use AI to intelligently cut and modify video scenes.

The Adobe Firefly Video Model — now available in limited beta at Firefly.Adobe.com — brings generative AI to video, marking the next advancement in video editing. It allows users to create and edit video clips using simple text prompts or images, helping fill in content gaps without having to reshoot, extend or reframe takes. It can also be used to create video clip prototypes as inspiration for future shots.

Topaz Labs has introduced a new video-enhancement plug-in for Adobe After Effects that uses AI models to improve video quality. It gives users access to enhancement and motion deblur models for sharper, clearer video. Accelerated on GeForce RTX GPUs, these models run nearly 2.5x faster on the GeForce RTX 4090 Laptop GPU compared with the MacBook Pro M3 Max.

Stay tuned for NVIDIA TensorRT enhancements and more Topaz Video AI effects coming to the After Effects plug-in soon.

3D Super Powered

The Substance 3D Collection is revolutionizing the ideation stage of 3D creation with powerful generative AI features in Substance 3D Sampler and Stager.

Sampler’s Text to Texture, Text to Pattern and Image to Texture tools, powered by Adobe Firefly, allow artists to rapidly generate reference images from simple prompts that can be used to create parametric materials.

Stager’s Generative Background feature helps designers explore backgrounds for staging 3D models, using text descriptions to generate images. Stager can then match lighting and camera perspective, allowing designers to explore more variations faster when iterating and mocking up concepts.

Substance 3D Viewer also offers a connected workflow with Photoshop, where 3D models can be placed into Photoshop projects and edits made to the model in Viewer will be automatically sent back to the Photoshop project. GeForce RTX GPU hardware acceleration and ray tracing provide smooth movement in the viewport, producing up to 80% higher frames per second on the GeForce RTX 4060 Laptop GPU compared to the MacBook M3 Pro.

There are also new Firefly-powered features in Substance 3D Viewer, like Text to 3D and 3D Model to Image, that combine text prompts and 3D objects to give artists more control when generating new scenes and variations.

The latest After Effects release features an expanded range of 3D tools that enable creators to embed 3D animations, cast ultra-realistic shadows on 2D objects and isolate effects in 3D space.

After Effects now also has an RTX GPU-powered Advanced 3D Renderer that accelerates the processing-intensive and time-consuming task of applying HDRI lighting — lowering creative barriers to entry while improving content realism. Rendering can be done 30% faster on a GeForce RTX 4090 GPU over the previous generation.

Pairing Substance 3D with After Effects’ native, fast 3D integration allows artists to significantly boost the visual quality of 3D in After Effects with precision texturing and access to more than 20,000 parametric 3D materials, IBL environment lights and 3D models.

Follow NVIDIA Studio on Instagram, X and Facebook. Access tutorials on the Studio YouTube channel and get updates directly in your inbox by subscribing to the Studio newsletter. 

Generative AI is transforming gaming, videoconferencing and interactive experiences of all kinds. Make sense of what’s new and what’s next by subscribing to the AI Decoded newsletter.

NVIDIA AI Summit Panel Outlines Autonomous Driving Safety

The autonomous driving industry is shaped by rapid technological advancements and the need for standardization of guidelines to ensure the safety of both autonomous vehicles (AVs) and their interaction with human-driven vehicles.

At the NVIDIA AI Summit this week in Washington, D.C., industry experts shared viewpoints on this AV safety landscape from regulatory and technology perspectives.

Danny Shapiro, vice president of automotive at NVIDIA, led the wide-ranging conversation with Mark Rosekind, former administrator of the National Highway Traffic Safety Administration, and Marco Pavone, director of AV research at NVIDIA.

To frame the discussion, Shapiro kicked off with a sobering comment about the high number of crashes, injuries and fatalities on the world’s roadways. Human error remains a serious problem and the primary cause of these incidents.

“Improving safety on our roads is critical,” Shapiro said, noting that NVIDIA has been working for over two decades with the auto industry, including advanced driver assistance systems and fully autonomous driving technology development.

NVIDIA’s approach to AV development is centered on the integration of three computers: one for training the AI, one for simulation to test and validate the AI, and one in the vehicle to process sensor data in real time to make safe driving decisions. Together, these systems enable continuous development cycles, always improving the AV software in performance and safety.

Rosekind, a highly regarded automotive safety expert, spoke about the patchwork of regulations that exists across the U.S., explaining that federal agencies focus on the vehicle, while the states focus on the operator, including driver education, insurance and licensing.

Pavone commented on how the explosion of new technologies related to generative AI, neural rendering and more is giving researchers and developers tools to rethink how AV development is carried out.

These technologies are enabling new developments in simulation, for example to generate complex scenarios aimed at stress testing vehicles for safety purposes. And they’re harnessing foundation models, such as vision language models, to allow developers to build more robust autonomy software, Pavone said.

NVIDIA AI Summit panelists

One of the relevant and timely topics discussed during the panel was an announcement made during the AI Summit by MITRE, a government-sponsored nonprofit research organization.

MITRE announced its partnership with Mcity at the University of Michigan to develop a virtual and physical AV validation platform for industry deployment.

MITRE will use Mcity’s simulation tools and a digital twin of its Mcity Test Facility, a real-world AV test environment, in its digital proving ground. The jointly developed platform will deliver physically based sensor simulation enabled by NVIDIA Omniverse Cloud Sensor RTX application programming interfaces.

By combining these simulation capabilities with the MITRE digital proving ground reporting and analysis framework, developers will be able to perform exhaustive testing in a simulated world to safely validate AVs before real-world deployment.

NVIDIA AI Summit AV safety panelists

Rosekind said the MITRE announcement “represents an opportunity to have a trusted source who’s done this in many other areas, especially in aviation, to create an independent, neutral setting to test safety assurance.”

“One of the most exciting things about this endeavor is that simulation is going to have a key role,” added Pavone. “Simulation allows you to test very dangerous conditions in a repeatable and varied way, so you can simulate different cases at scale.”

“That’s the beauty of simulation,” said Shapiro. “It’s repeatable, it’s controllable. We can control the weather in the simulation. We can change the time of day, and then we can control all the scenarios and inject hazards. Once the simulation is created, we can run it over and over, and as the software develops, we can ensure we are solving the problem, and can fine-tune as necessary.”

The panel wrapped up with a reminder that the key goal of autonomous driving is one that businesses and regulators alike share: to reduce death and injuries on our roadways.

Watch a replay of the session. (Registration required.)

To learn more about NVIDIA’s commitment to bringing safety to our roads, read the NVIDIA Self-Driving Safety Report.

Game-Changer: How the World’s First GPU Leveled Up Gaming and Ignited the AI Era

In 1999, fans lined up at Blockbuster to rent chunky VHS tapes of The Matrix. Y2K preppers hoarded cash and canned Spam, fearing a worldwide computer crash. Teens gleefully downloaded Britney Spears and Eminem on Napster.

But amid the caffeinated fizz of turn-of-the-millennium tech culture, something more transformative was unfolding.

The release of NVIDIA’s GeForce 256 twenty-five years ago today, overlooked by all but hardcore PC gamers and tech enthusiasts at the time, would go on to lay the foundation for today’s generative AI.

The GeForce 256 wasn’t just another graphics card — it was introduced as the world’s first GPU, setting the stage for future advancements in both gaming and computing.

With hardware transform and lighting (T&L), it took the load off the CPU, a pivotal advancement. As Tom’s Hardware emphasized: “[The GeForce 256] can take the strain off the CPU, keep the 3D-pipeline from stalling, and allow game developers to use much more polygons, which automatically results in greatly increased detail.”

Where Gaming Changed Forever

For gamers, starting up Quake III Arena on a GeForce 256 was a revelation. “Immediately after firing up your favorite game, it feels like you’ve never even seen the title before this moment,” as the enthusiasts at AnandTech put it.

The GeForce 256 paired beautifully with breakthrough titles such as Unreal Tournament, one of the first games with realistic reflections, which would go on to sell more than 1 million copies in its first year.

Over the next quarter-century, the collaboration between game developers and NVIDIA would continue to push boundaries, driving advancements such as increasingly realistic textures, dynamic lighting, and smoother frame rates — innovations that delivered far more than just immersive experiences for gamers.

NVIDIA’s GPUs evolved into a platform that transformed new silicon and software into powerful, visceral innovations that reshaped the gaming landscape.

In the decades to come, NVIDIA GPUs drove ever higher frame rates and visual fidelity, allowing for smoother, more responsive gameplay.

This leap in performance was embraced by platforms such as Twitch, YouTube Gaming, and Facebook, as gamers were able to stream content with incredible clarity and speed.

These performance boosts not only transformed the gaming experience but also turned players into entertainers. This helped fuel the global growth of esports.

Major events like The International (Dota 2), the League of Legends World Championship, and the Fortnite World Cup attracted millions of viewers, solidifying esports as a global phenomenon and creating new opportunities for competitive gaming.

From Gaming to AI: The GPU’s Next Frontier

As gaming worlds grew in complexity, so too did the computational demands.

The parallel power that transformed gaming graphics caught the attention of researchers, who realized these GPUs could also unlock massive computational potential in AI, enabling breakthroughs far beyond the gaming world.

Deep learning — a software model that relies on billions of neurons and trillions of connections — requires immense computational power.

Traditional CPUs, designed for sequential tasks, couldn’t efficiently handle this workload. But GPUs, with their massively parallel architecture, were perfect for the job.

By 2011, AI researchers had discovered NVIDIA GPUs and their ability to handle deep learning’s immense processing needs.

Researchers at Google, Stanford and New York University began using NVIDIA GPUs to accelerate AI development, achieving performance that previously required supercomputers.

In 2012, a breakthrough came when Alex Krizhevsky from the University of Toronto used NVIDIA GPUs to win the ImageNet image recognition competition. His neural network, AlexNet, trained on a million images, crushed the competition, beating handcrafted software written by vision experts.

This marked a seismic shift in technology. What once seemed like science fiction — computers learning and adapting from vast amounts of data — was now a reality, driven by the raw power of GPUs.

By 2015, AI had reached superhuman levels of perception, with Google, Microsoft and Baidu surpassing human performance in tasks like image recognition and speech understanding — all powered by deep neural networks running on GPUs.

In 2016, NVIDIA CEO Jensen Huang donated the first NVIDIA DGX-1 AI supercomputer — a system packed with eight cutting-edge GPUs — to OpenAI, which would harness GPUs to train ChatGPT, launched in November 2022.

In 2018, NVIDIA debuted GeForce RTX (20 Series) with RT Cores and Tensor Cores, designed specifically for real-time ray tracing and AI workloads.

This innovation accelerated the adoption of ray-traced graphics in games, bringing cinematic realism to gaming visuals and AI-powered features like NVIDIA DLSS, which enhanced gaming performance by leveraging deep learning.

Meanwhile, ChatGPT would go on to reach more than 100 million users within months of its November 2022 launch, demonstrating how NVIDIA GPUs continue to drive the transformative power of generative AI.

Today, GPUs aren’t only celebrated in the gaming world — they’ve become icons of tech culture, appearing in Reddit memes, Twitch streams, T-shirts at Comic-Con and even being immortalized in custom PC builds and digital fan art.

Shaping the Future

This revolution, which began with the GeForce 256, continues to unfold today in gaming and entertainment, in personal computing, where AI powered by NVIDIA GPUs is now part of everyday life — and inside the trillion-dollar industries building next-generation AI into the core of their businesses.

GPUs aren’t just enhancing gaming; they’re shaping the future of AI itself.

And now, with innovations like NVIDIA DLSS, which uses AI to boost gaming performance and deliver sharper images, and NVIDIA ACE, designed to bring more lifelike interactions to in-game characters, AI is once again reshaping the gaming world.

The GeForce 256 laid the bedrock for a future where gaming, computing, and AI are not just evolving — together, they’re transforming the world.

AI’ll Be by Your Side: Mental Health Startup Enhances Therapist-Client Connections

Half of the world’s population will experience a mental health disorder — but the median number of mental health workers per 100,000 people is just 13, according to the World Health Organization.

To help tackle this disparity — which can vary by over 40x between high-income and low-income countries — a Madrid-based startup is offering therapists AI tools to improve the delivery of mental health services.

Therapyside, a member of the NVIDIA Inception program for cutting-edge startups, is bolstering its online therapy platform using NVIDIA NIM inference microservices. These AI microservices serve as virtual assistants and notetakers, letting therapists focus on connecting with their clients.

“In a therapy setting, having a strong alliance between counselor and client is everything,” said Alessandro De Sario, founder and CEO of Therapyside. “When a therapist can focus on the session without worrying about note-taking, they can reach that level of trust and connection much quicker.”

For the therapists and clients who have opted in to test these AI tools, a speech recognition model transcribes their conversations. A large language model summarizes the session into clinical notes, saving time for therapists so they can speak with more clients and work more efficiently. Another model powers a virtual assistant, dubbed Maia, that can answer therapists’ questions using retrieval-augmented generation, aka RAG.

Therapyside aims to add features over time, such as support for additional languages and an offline version that can transcribe and summarize in-person therapy sessions.

“We’ve just opened the door,” said De Sario. “We want to make the tool much more powerful so it can handle administrative tasks like calendar management and patient follow-up, or remind therapists of topics they should cover in a given session.”

AI’s in Session: Enhancing Therapist-Client Relationships

Therapyside, founded in 2017, works with around 1,000 licensed therapists in Europe offering counseling in English, Italian and Spanish. More than 500,000 therapy sessions have been completed through its virtual platform to date.

The company’s AI tools are currently available through a beta program. Therapists who choose to participate can invite their clients to opt in to the AI features.

“It’s incredibly helpful to have a personalized summary with a transcription that highlights the most important points from each session I have with my patients,” said Alejandro A., one of the therapists participating in the beta program. “I’ve been pleasantly surprised by its ability to identify the most significant areas to focus on with each patient.”

Screen capture of Therapyside session with transcription running live
A speech recognition AI model can capture live transcriptions of sessions.

The therapists testing the tool rated the transcriptions and summaries as highly accurate, helping them focus on listening without worrying about note-taking.

“The recaps allow me to be fully present with the clients in my sessions,” said Maaria A., another therapist participating in the beta program.

During sessions, clients share details about their life experiences that are captured in the AI-powered transcriptions and summaries. Therapyside’s RAG-based Maia connects to these resources to help therapists quickly recall minutiae like the name of a client’s sibling, or track how a client’s main challenges have evolved over time. This information can help therapists pose more personalized questions and provide better support.

“Maia is a valuable tool to have when you’re feeling a little stuck,” said Maaria A. “I have clients all over the world, so Maia helps remind me where they live. And if I ask Maia to suggest exercises clients could do to boost their self-esteem, it helps me find resources I can send to them, which helps save time.”

Screen capture of a therapist Q&A with the Maia virtual assistant
Maia can answer therapists’ questions based on session transcripts and summaries.

Take Note: AI Microservices Enable Easy Deployment

Therapyside’s AI pipeline runs on NVIDIA GPUs in a secure cloud environment and is built with NVIDIA NIM, a set of easy-to-use microservices designed to speed up AI deployment.

For transcription, the pipeline uses NVIDIA Riva NIM microservices, which include NVIDIA Parakeet, a record-setting family of models, to deliver highly accurate automatic speech recognition.

Flowchart illustrating Therapyside’s AI pipeline

Once the transcript is complete, the text is processed by a NIM microservice for Meta’s Llama 3.1 family of open-source AI models to generate a summary that’s added to the client’s clinical history.

The Maia virtual assistant, which also uses a Llama 3.1 NIM microservice, accesses these clinical records using a RAG pipeline powered by NVIDIA NeMo Retriever NIM microservices. RAG techniques enable organizations to connect AI models to their private datasets to deliver contextually accurate responses.
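
As an illustrative sketch of that retrieve-then-generate flow (not Therapyside’s actual code), the example below adds retrieved session snippets to a prompt and sends it to a self-hosted Llama 3.1 NIM through its OpenAI-compatible endpoint. The local URL, model name and retrieval helper are assumptions.

```python
# Illustrative RAG sketch, not Therapyside's code: retrieved session snippets are
# added to the prompt, then sent to a self-hosted Llama 3.1 NIM through its
# OpenAI-compatible API. URL, model name and the retriever stub are assumptions.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-used")  # assumed local NIM

def retrieve_snippets(question: str) -> list[str]:
    # Stand-in for a NeMo Retriever-backed vector search over session summaries.
    return [
        "2024-06-03 summary: client reported trouble falling asleep most nights...",
        "2024-06-17 summary: client noted improved sleep after breathing exercises...",
    ]

question = "How have the client's sleep issues evolved over the past month?"
context = "\n".join(retrieve_snippets(question))

response = client.chat.completions.create(
    model="meta/llama-3.1-8b-instruct",  # illustrative model name
    messages=[
        {"role": "system", "content": "Answer using only the provided session notes."},
        {"role": "user", "content": f"Notes:\n{context}\n\nQuestion: {question}"},
    ],
)
print(response.choices[0].message.content)
```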

Therapyside plans to further customize Maia with capabilities that support specific therapeutic methods, such as cognitive behavioral therapy and psychodynamic therapy. The team is also integrating NVIDIA NeMo Guardrails to further enhance the tools’ safety and security.

Kimberly Powell, vice president of healthcare at NVIDIA, will discuss Therapyside and other healthcare innovators in a keynote address at HLTH, a conference taking place October 20-23 in Las Vegas.

Learn more about NVIDIA Inception and get started with NVIDIA NIM microservices at ai.nvidia.com.

The Next Chapter Awaits: Dive Into ‘Diablo IV’s’ Latest Adventure ‘Vessel of Hatred’ on GeForce NOW

Prepare for a devilishly good time this GFN Thursday as the critically acclaimed Diablo IV: Vessel of Hatred downloadable content (DLC) joins the cloud, one of six new games available this week.

GeForce NOW also extends its game-library sync feature to Battle.net accounts, so members can seamlessly bring their favorite Blizzard games into their cloud-streaming libraries.

Hell’s Bells and Whistles

Get ready to rage. New DLC for the hit title Diablo IV: Vessel of Hatred is available to stream at launch this week, with thrilling content and gameplay for GeForce NOW members to experience.

Diablo IV Vessel of Hatred DLC on GeForce NOW
Hate is in the air.

Diablo IV: Vessel of Hatred DLC is the highly anticipated expansion of the latest installment in Blizzard’s iconic action role-playing game series. It introduces players to the lush and dangerous jungles of Nahantu. Teeming with both beauty and dangers, this new environment offers a fresh backdrop for action-packed battles against the demonic forces of Hell. A new playable class, the Spiritborn, offers unique gameplay mechanics tied to four guardian spirits: the eagle, gorilla, jaguar and centipede.

The DLC extends the main Diablo IV story and includes new features such as recruitable Mercenaries, a Player vs. Environment co-op endgame activity, Party Finder to help members team up and take down challenges together, and more. Vessel of Hatred arrives alongside major updates including revamped leveling, a new difficulty system and Paragon adjustments that will continue to enhance the world of Diablo IV.

Ultimate members can experience the wrath at up to 4K resolution and 120 frames per second with support for NVIDIA DLSS and ray-tracing technologies. And members can jump right into the latest DLC without having to wait around for updates. Hell never looked so good, even on low-powered devices.

Let That Sync In

Battle.net game sync on GeForce NOW
Connection junction.

With game syncing for Blizzard’s Battle.net game library coming to GeForce NOW this week, members can connect their digital game store accounts so that all of their supported games are part of their streaming libraries.

Members can now easily find and stream popular titles such as StarCraft II, Overwatch 2, Call of Duty HQ and Hearthstone from their cloud gaming libraries, enhancing the games’ accessibility across a variety of devices.

Battle.net joins other digital storefronts that already have game sync support, including Steam, Epic Games Store, Xbox and Ubisoft Connect. This allows members to consolidate their gaming experiences in one place.

Plus, GeForce NOW members can play high-quality titles without the need for high-end hardware, streaming from GeForce RTX-powered servers in the cloud. Whether battling demons in Sanctuary or engaging in epic firefights, GeForce NOW members get a seamless gaming experience anytime, anywhere.

Hot and New

Europa on GeForce NOW
Soar through serenity and uncover destiny, all from the cloud.

Europa is a peaceful game of adventure, exploration and meditation from Future Friends Games, ready for members to stream at launch this week. On the moon Europa, a lush terraformed paradise in Jupiter’s shadow, an android named Zee sets out in search of answers. Run, glide and fly across the landscape, solve mysteries in the ruins of a fallen utopia, and discover the story of the last human alive.

Members can look for the following games available to stream in the cloud this week:

  • Empyrion – Galactic Survival (New release on Epic Games Store, Oct. 10)
  • Europa (New release on Steam, Oct. 11)
  • Dwarven Realms (Steam)
  • Star Trek Timelines (Steam)
  • Star Trucker (Steam)
  • Starcom: Unknown Space (Steam)

What are you planning to play this weekend? Let us know on X or in the comments below.
