NVIDIA AI Now Available in Oracle Cloud Marketplace

Training generative AI models just got easier.

NVIDIA DGX Cloud AI supercomputing platform and NVIDIA AI Enterprise software are now available in Oracle Cloud Marketplace, making it possible for Oracle Cloud Infrastructure customers to access high-performance accelerated computing and software to run secure, stable and supported production AI in just a few clicks.

The addition — an industry first — brings new capabilities for end-to-end development and deployment on Oracle Cloud. Enterprises can get started from the Oracle Cloud Marketplace to train models on DGX Cloud, and then deploy their applications on OCI with NVIDIA AI Enterprise.

Oracle Cloud and NVIDIA Lift Industries Into Era of AI

Thousands of enterprises around the world rely on OCI to power the applications that drive their businesses. Its customers include leaders across industries such as healthcare, scientific research, financial services, telecommunications and more.

Oracle Cloud Marketplace is a catalog of solutions that offers customers flexible consumption models and simple billing. Its addition of DGX Cloud and NVIDIA AI Enterprise lets OCI customers use their existing cloud credits to integrate NVIDIA’s leading AI supercomputing platform and software into their development and deployment pipelines.

With DGX Cloud, OCI customers can train models for generative AI applications like intelligent chatbots, search, summarization and content generation.

The University at Albany, in upstate New York, recently launched its AI Plus initiative, which is integrating teaching and learning about AI across the university’s research and academic enterprise, in fields such as cybersecurity, weather prediction, health data analytics, drug discovery and next-generation semiconductor design. It will also foster collaborations across the humanities, social sciences, public policy and public health. The university is using DGX Cloud AI supercomputing instances on OCI as it builds out an on-premises supercomputer.

“We’re accelerating our mission to infuse AI into virtually every academic and research discipline,” said Thenkurussi (Kesh) Kesavadas, vice president for research and economic development at UAlbany. “We will drive advances in healthcare, security and economic competitiveness, while equipping students for roles in the evolving job market.”

NVIDIA AI Enterprise brings the software layer of the NVIDIA AI platform to OCI. It includes NVIDIA NeMo frameworks for building LLMs, NVIDIA RAPIDS for data science and NVIDIA TensorRT-LLM and NVIDIA Triton Inference Server for supercharging production AI. NVIDIA software for cybersecurity, computer vision, speech AI and more is also included. Enterprise-grade support, security and stability ensure a smooth transition of AI projects from pilot to production.
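
To make that concrete, below is a minimal, hypothetical sketch of querying a model served by Triton Inference Server with its standard Python HTTP client. The server address, model name and tensor names (“my_model,” “input,” “output”) are placeholders for whatever a real deployment exposes, not details from NVIDIA AI Enterprise itself:

    import numpy as np
    import tritonclient.http as httpclient

    # Connect to a Triton server; the URL and model details below are placeholders.
    client = httpclient.InferenceServerClient(url="localhost:8000")

    # Build a dummy input tensor matching a hypothetical image model.
    batch = np.random.rand(1, 3, 224, 224).astype(np.float32)
    infer_input = httpclient.InferInput("input", list(batch.shape), "FP32")
    infer_input.set_data_from_numpy(batch)

    # Run inference and read back the (model-dependent) output tensor.
    response = client.infer(model_name="my_model", inputs=[infer_input])
    print(response.as_numpy("output").shape)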

NVIDIA DGX Cloud provides enterprises immediate access to an AI supercomputing platform and software hosted by their preferred cloud provider.

AI Supercomputing Platform Hosted by OCI

NVIDIA DGX Cloud provides enterprises immediate access to an AI supercomputing platform and software.

Hosted by OCI, DGX Cloud provides enterprises with access to multi-node training on NVIDIA GPUs, paired with NVIDIA AI software, for training advanced models for generative AI and other groundbreaking applications.

Each DGX Cloud instance consists of eight NVIDIA Tensor Core GPUs interconnected with network fabric, purpose-built for multi-node training. This high-performance computing architecture also includes industry-leading AI development software and offers direct access to NVIDIA AI expertise so businesses can train LLMs faster.

OCI customers access DGX Cloud using NVIDIA Base Command Platform, which gives developers access to an AI supercomputer through a web browser. By providing a single-pane view of the customer’s AI infrastructure, Base Command Platform simplifies the management of multi-node clusters.

NVIDIA AI Enterprise software powers secure, stable and supported production AI and data science.

Software for Secure, Stable and Supported Production AI

NVIDIA AI Enterprise enables rapid development and deployment of AI and data science.

With NVIDIA AI Enterprise on Oracle Cloud Marketplace, enterprises can efficiently build an application once and deploy it on OCI and their on-prem infrastructure, making a multi- or hybrid-cloud strategy cost-effective and easy to adopt. Since NVIDIA AI Enterprise is also included in NVIDIA DGX Cloud, customers can streamline the transition from training on DGX Cloud to deploying their AI application into production with NVIDIA AI Enterprise on OCI, since the AI software runtime is consistent across the environments.

Qualified customers can purchase NVIDIA AI Enterprise and NVIDIA DGX Cloud with their existing Oracle Universal Credits.

Visit NVIDIA AI Enterprise and NVIDIA DGX Cloud on the Oracle Cloud Marketplace to get started today.

Read More

Coming in Clutch: Stream ‘Counter-Strike 2’ From the Cloud for Highest Frame Rates

Rush to the cloud — stream Counter-Strike 2 on GeForce NOW for the highest frame rates. Members can play through the newest chapter of Valve’s elite, competitive, first-person shooter from the cloud.

It’s all part of an action-packed GFN Thursday, with 22 more games joining the cloud gaming platform’s library, including Hot Wheels Unleashed 2 – Turbocharged.

“Rush B! Rush B!”

Counter-Strike 2 is the long-awaited upgrade to one of the most recognizable competitive first-person shooters in the world.

Building on the legacy of Counter-Strike: Global Offensive, the latest iteration brings the action to Valve’s long-anticipated Source 2 video game engine, promising enhanced graphical fidelity with a physically based rendering system for more realistic textures and materials, dynamic lighting, reflections and more.

Smoke grenades are now dynamic volumetric objects that can interact with their surroundings by reacting to lighting and other environmental effects. And smoke particles work with the unified lighting system, allowing for more realistic light and color.

Even better: GeForce NOW Ultimate members can take full advantage of NVIDIA Reflex for ultra-low-latency gameplay streaming from the cloud. Rush the objective with the squad on Counter-Strike 2’s remastered maps at up to 240 frames per second — a first for cloud gaming. Upgrade today for the Ultimate Counter-Strike experience.

Vroom, Vroom!

We’re going turbo.

There’s more action around every turn of the GeForce NOW library. Put the pedal to the metal in Hot Wheels Unleashed 2 – Turbocharged, one of 22 newly supported games joining this week:

  • Wizard With a Gun (New release on Steam, Oct. 17)
  • Alaskan Road Truckers (New release on Steam, Oct. 18)
  • Hellboy: Web of Wyrd (New release on Steam, Oct. 18)
  • AirportSim (New release on Steam, Oct. 19)
  • Eternal Threads (New release on Epic Games Store, Oct. 19)
  • Hot Wheels Unleashed 2 – Turbocharged (New release on Steam, Oct. 19)
  • Laika Aged Through Blood (New release on Steam, Oct. 19)
  • Battle Chasers: Nightwar (Xbox, available on Microsoft Store)
  • Black Skylands (Xbox, available on Microsoft Store)
  • Blair Witch (Xbox, available on Microsoft Store)
  • Chicory: A Colorful Tale (Xbox and available on PC Game Pass)
  • Dead by Daylight (Xbox and available on PC Game Pass)
  • Dune: Spice Wars (Xbox and available on PC Game Pass)
  • Everspace 2 (Xbox and available on PC Game Pass)
  • EXAPUNKS (Xbox and available on PC Game Pass)
  • Gungrave G.O.R.E (Xbox and available on PC Game Pass)
  • Railway Empire 2 (Xbox and available on PC Game Pass)
  • Techtonica (Xbox and available on PC Game Pass)
  • Teenage Mutant Ninja Turtles: Shredder’s Revenge (Xbox and available on PC Game Pass)
  • Torchlight III (Xbox and available on PC Game Pass)
  • Trine 5: A Clockwork Conspiracy (Epic Games Store)
  • Vampire Survivors (Xbox, available on PC Game Pass)

What are you planning to play this weekend? Let us know on Twitter or in the comments below.

Read More

NVIDIA Expands Robotics Platform to Meet the Rise of Generative AI

Powerful generative AI models and cloud-native APIs and microservices are coming to the edge.

Generative AI is bringing the power of transformer models and large language models to virtually every industry. That reach now includes areas that touch edge, robotics and logistics systems: defect detection, real-time asset tracking, autonomous planning and navigation, human-robot interactions and more.

NVIDIA today announced major expansions to two frameworks on the NVIDIA Jetson platform for edge AI and robotics: the NVIDIA Isaac ROS robotics framework has entered general availability, and the NVIDIA Metropolis expansion on Jetson is coming next.

To accelerate AI application development and deployments at the edge, NVIDIA has also created a Jetson Generative AI Lab for developers to use with the latest open-source generative AI models.

More than 1.2 million developers and over 10,000 customers have chosen NVIDIA AI and the Jetson platform, including Amazon Web Services, Cisco, John Deere, Medtronic, PepsiCo and Siemens.

As the rapidly evolving AI landscape takes on increasingly complicated scenarios, developers face longer development cycles when building AI applications for the edge. Reprogramming robots and AI systems on the fly to meet changing environments, manufacturing lines and automation needs of customers is time-consuming and requires expert skills.

Generative AI offers zero-shot learning — the ability of a model to recognize things it hasn’t specifically seen before in training — with a natural language interface to simplify the development, deployment and management of AI at the edge.
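
As a rough illustration of zero-shot recognition steered by natural language, the sketch below uses the open-source OpenCLIP library (mentioned later in this post) to score an image against text prompts the model was never explicitly trained on. The checkpoint name, image path and label prompts are illustrative assumptions:

    import torch
    import open_clip
    from PIL import Image

    # Checkpoint, image path and labels are illustrative assumptions.
    model, _, preprocess = open_clip.create_model_and_transforms(
        "ViT-B-32", pretrained="laion2b_s34b_b79k")
    tokenizer = open_clip.get_tokenizer("ViT-B-32")

    image = preprocess(Image.open("part.jpg")).unsqueeze(0)
    labels = ["a photo of a scratched part",
              "a photo of a dented part",
              "a photo of a defect-free part"]
    text = tokenizer(labels)

    with torch.no_grad():
        image_features = model.encode_image(image)
        text_features = model.encode_text(text)
        image_features /= image_features.norm(dim=-1, keepdim=True)
        text_features /= text_features.norm(dim=-1, keepdim=True)
        # Cosine similarity turned into probabilities; no defect-specific training.
        probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

    print(dict(zip(labels, probs[0].tolist())))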

Transforming the AI Landscape

Generative AI dramatically improves ease of use by understanding human language prompts to make model changes. Those AI models are more flexible in detecting, segmenting, tracking, searching and even reprogramming — and can outperform traditional convolutional neural network-based models.

Generative AI is expected to add $10.5 billion in revenue for manufacturing operations worldwide by 2033, according to ABI Research.

“Generative AI will significantly accelerate deployments of AI at the edge with better generalization, ease of use and higher accuracy than previously possible,” said Deepu Talla, vice president of embedded and edge computing at NVIDIA. “This largest-ever software expansion of our Metropolis and Isaac frameworks on Jetson, combined with the power of transformer models and generative AI, addresses this need.”

Developing With Generative AI at the Edge

The Jetson Generative AI Lab provides developers access to optimized tools and tutorials for deploying open-source LLMs; diffusion models for generating stunning interactive images; and vision language models (VLMs) and vision transformers (ViTs) that combine vision AI with natural language processing for comprehensive understanding of a scene.

Developers can also use the NVIDIA TAO Toolkit to create efficient and accurate AI models for edge applications. TAO provides a low-code interface to fine-tune and optimize vision AI models, including ViTs and vision foundation models. Developers can also customize and fine-tune foundation models like NVIDIA NV-DINOv2 or public models like OpenCLIP to create highly accurate vision AI models with very little data. TAO additionally now includes VisualChangeNet, a new transformer-based model for defect inspection.

Harnessing New Metropolis and Isaac Frameworks

NVIDIA Metropolis makes it easier and more cost-effective for enterprises to embrace world-class, vision AI-enabled solutions that address critical operational efficiency and safety problems. The platform brings a collection of powerful application programming interfaces and microservices for developers to quickly develop complex vision-based applications.

More than 1,000 companies, including BMW Group, PepsiCo, Kroger, Tyson Foods, Infosys and Siemens, are using NVIDIA Metropolis developer tools to solve Internet of Things, sensor processing and operational challenges with vision AI — and the rate of adoption is quickening. The tools have now been downloaded over 1 million times by those looking to build vision AI applications.

To help developers quickly build and deploy scalable vision AI applications, an expanded set of Metropolis APIs and microservices on NVIDIA Jetson will be available by year’s end.

Hundreds of customers use the NVIDIA Isaac platform to develop high-performance robotics solutions across diverse domains, including agriculture, warehouse automation, last-mile delivery and service robotics, among others.

At ROSCon 2023, NVIDIA announced major improvements to perception and simulation capabilities with new releases of Isaac ROS and Isaac Sim software. Built on the widely adopted open-source Robot Operating System (ROS), Isaac ROS brings perception to automation, giving eyes and ears to the things that move. By harnessing the power of GPU-accelerated GEMs, including visual odometry, depth perception, 3D scene reconstruction, localization and planning, robotics developers gain the tools needed to swiftly engineer robotic solutions tailored for a diverse range of applications.
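
For a sense of how one of these GEMs is wired into a ROS 2 application, here is a hedged launch-file sketch that loads a visual SLAM node into a composable-node container. The package, plugin and topic names follow Isaac ROS documentation conventions but can differ between releases, so treat them as assumptions:

    from launch import LaunchDescription
    from launch_ros.actions import ComposableNodeContainer
    from launch_ros.descriptions import ComposableNode

    def generate_launch_description():
        # Package, plugin and topic names are assumptions based on Isaac ROS docs.
        visual_slam_node = ComposableNode(
            package="isaac_ros_visual_slam",
            plugin="nvidia::isaac_ros::visual_slam::VisualSlamNode",
            name="visual_slam",
            remappings=[
                ("stereo_camera/left/image", "/camera/left/image_rect"),
                ("stereo_camera/right/image", "/camera/right/image_rect"),
            ],
        )
        container = ComposableNodeContainer(
            name="visual_slam_container",
            namespace="",
            package="rclcpp_components",
            executable="component_container_mt",  # multithreaded container
            composable_node_descriptions=[visual_slam_node],
        )
        return LaunchDescription([container])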

Isaac ROS has reached production-ready status with the latest Isaac ROS 2.0 release, enabling developers to create and bring high-performance robotics solutions to market with Jetson.

“ROS continues to grow and evolve to provide open-source software for the whole robotics community,” said Geoff Biggs, CTO of the Open Source Robotics Foundation. “NVIDIA’s new prebuilt ROS 2 packages, launched with this release, will accelerate that growth by making ROS 2 readily available to the vast NVIDIA Jetson developer community.”

Delivering New Reference AI Workflows

Developing a production-ready AI solution entails optimizing the development and training of AI models tailored to specific use cases, implementing robust security features on the platform, orchestrating the application, managing fleets, establishing seamless edge-to-cloud communication and more.

NVIDIA announced a curated collection of AI reference workflows based on the Metropolis and Isaac frameworks that enable developers to quickly adopt the entire workflow or selectively integrate individual components, resulting in substantial reductions in both development time and cost. The three distinct AI workflows are: Network Video Recording, Automatic Optical Inspection and Autonomous Mobile Robot.

“NVIDIA Jetson, with its broad and diverse user base and partner ecosystem, has helped drive a revolution in robotics and AI at the edge,” said Jim McGregor, principal analyst at Tirias Research. “As application requirements become increasingly complex, we need a foundational shift to platforms that simplify and accelerate the creation of edge deployments. This significant software expansion by NVIDIA gives developers access to new multi-sensor models and generative AI capabilities.”

More Coming on the Horizon 

NVIDIA announced a collection of system services — fundamental capabilities that every developer requires when building edge AI solutions. These services will simplify integration into workflows and spare developers the arduous task of building them from the ground up.

The new NVIDIA JetPack 6, expected to be available by year’s end, will empower AI developers to stay at the cutting edge of computing without the need for a full Jetson Linux upgrade, substantially expediting development timelines and liberating them from Jetson Linux dependencies. JetPack 6 will also draw on collaborations with Linux distribution partners to expand the range of Linux-based distribution choices, including Canonical’s Optimized and Certified Ubuntu, Wind River Linux, Concurrent Real-Time’s RedHawk Linux and various Yocto-based distributions.

Partner Ecosystem Benefits From Platform Expansion

The Jetson partner ecosystem provides a wide range of support, from hardware, AI software and application design services to sensors, connectivity and developer tools. These NVIDIA Partner Network innovators play a vital role in providing the building blocks and sub-systems for many products sold on the market.

The latest release allows Jetson partners to accelerate their time to market and expand their customer base by adopting AI with increased performance and capabilities.

Independent software vendor partners will also be able to expand their offerings for Jetson.

Join us Tuesday, Nov. 7, at 9 a.m. PT for the Bringing Generative AI to Life with NVIDIA Jetson webinar, where technical experts will dive deeper into the news announced here, including accelerated APIs and quantization methods for deploying LLMs and VLMs on Jetson, optimizing vision transformers with TensorRT, and more.

Sign up for NVIDIA Metropolis early access here.

Read More

Making Machines Mindful: NYU Professor Talks Responsible AI

Artificial intelligence is now a household term. Responsible AI is hot on its heels.

Julia Stoyanovich, associate professor of computer science and engineering at NYU and director of the university’s Center for Responsible AI, wants to make the terms “AI” and “responsible AI” synonymous.

In the latest episode of the NVIDIA AI Podcast, host Noah Kravitz spoke with Stoyanovich about responsible AI, her advocacy efforts and how people can help.

Stoyanovich started her work at the Center for Responsible AI with basic research. She soon realized that what was needed were better guardrails, not just more algorithms.

As AI’s potential has grown, along with the ethical concerns surrounding its use, Stoyanovich clarifies that the “responsibility” lies with people, not AI.

“The responsibility refers to people taking responsibility for the decisions that we make individually and collectively about whether to build an AI system and how to build, test, deploy and keep it in check,” she said.

AI ethics is a related concern, used to refer to “the embedding of moral values and principles into the design, development and use of the AI,” she added.

Lawmakers have taken notice. For example, New York recently implemented a law that makes job candidate screening more transparent.

According to Stoyanovich, “the law is not perfect,” but “we can only learn how to regulate something if we try regulating” and converse openly with the “people at the table being impacted.”

Stoyanovich wants two things: for people to recognize that AI can’t predict human choices, and for AI systems to be transparent and accountable, carrying a “nutritional label.”

That process should include considerations on who is using AI tools, how they’re used to make decisions and who is subjected to those decisions, she said.

Stoyanovich urges people to “start demanding actions and explanations to understand” how AI is used at local, state and federal levels.

“We need to teach ourselves to help others learn about what AI is and why we should care,” she said. “So please get involved in how we govern ourselves, because we live in a democracy. We have to step up.”

You Might Also Like

Jules Anh Tuan Nguyen Explains How AI Lets Amputee Control Prosthetic Hand, Video Games
A postdoctoral researcher at the University of Minnesota discusses his efforts to allow amputees to control their prosthetic limb — right down to the finger motions — with their minds.

Overjet’s Wardah Inam on Bringing AI to Dentistry
Overjet, a member of NVIDIA Inception, is moving fast to bring AI to dentists’ offices. Dr. Wardah Inam, CEO of the company, discusses using AI to improve patient care.

Immunai CTO and Co-Founder Luis Voloch on Using Deep Learning to Develop New Drugs
Luis Voloch talks about tackling the challenges of the immune system with a machine learning and data science mindset.

Subscribe to the AI Podcast: Now Available on Amazon Music

The AI Podcast is now available through Amazon Music.

In addition, get the AI Podcast through iTunes, Google Podcasts, Google Play, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn.

Make the AI Podcast better. Have a few minutes to spare? Fill out this listener survey.

Read More

Into the Omniverse: Marmoset Brings Breakthroughs in Rendering, Extends OpenUSD Support to Enhance 3D Art Production

Editor’s note: This post is part of Into the Omniverse, a series focused on how artists and developers from startups to enterprises can transform their workflows using the latest advances in OpenUSD and NVIDIA Omniverse.

Real-time rendering, animation and texture baking are essential workflows for 3D art production. Using the Marmoset Toolbag software, 3D artists can enhance their creative workflows and build complex 3D models without disruptions to productivity.

The latest release of Marmoset Toolbag, version 4.06, brings increased support for Universal Scene Description, aka OpenUSD, enabling seamless compatibility with NVIDIA Omniverse, a development platform for connecting and building OpenUSD-based tools and applications.

3D creators and technical artists using Marmoset can now enjoy improved interoperability, accelerated rendering, real-time visualization and efficient performance — redefining the possibilities of their creative workflows.

Enhancing Cross-Platform Creativity With OpenUSD

Creators are taking their workflows to the next level with OpenUSD.

Berlin-based Armin Halač works as a principal animator at Wooga, a mobile games development studio known for projects like June’s Journey and Ghost Detective. The nature of his job means Halač is no stranger to 3D workflows — he gets hands-on with animation and character rigging.

For texturing and producing high-quality renders, Marmoset is Halač’s go-to tool, providing a user-friendly interface and powerful features to simplify his workflow. Recently, Halač used Marmoset to create the captivating cover image for his book, A Complete Guide to Character Rigging for Games Using Blender.

Using the added support for USD, Halač can seamlessly send 3D assets from Blender to Marmoset, creating new possibilities for collaboration and improved visuals.

The cover image of Halač’s book.

Nkoro Anselem Ire, a.k.a. askNK, a popular YouTube creator and a media and visual arts professor at a couple of universities, is also seeing workflow benefits from increased USD support.

As a 3D content creator, he uses Marmoset Toolbag for the majority of his PBR workflow — from texture baking and lighting to animation and rendering. Now, with USD, askNK is enjoying newfound levels of creative flexibility as the framework allows him to “collaborate with individuals or team members a lot easier because they can now pick up and drop off processes while working on the same file.”

Halač and askNK recently joined an NVIDIA-hosted livestream where community members and the Omniverse team explored the benefits of a Marmoset- and Omniverse-boosted workflow.

Daniel Bauer is another creator experiencing the benefits of Marmoset, OpenUSD and Omniverse. A SolidWorks mechanical engineer with over 10 years of experience, Bauer works frequently in CAD software environments, where it’s typical to assign different materials to various scene components. The variance can often lead to shading errors and incorrect geometry representation, but using USD, Bauer can avoid errors by easily importing versions of his scene from Blender to Marmoset Toolbag to Omniverse USD Composer.

A Kuka Scara robot simulation with 10 parallel small grippers for sorting and handling pens.

Additionally, 3D artists Gianluca Squillace and Pasquale Scionti are harnessing the collaborative power of Omniverse, Marmoset and OpenUSD to transform their workflows from a convoluted series of exports and imports to a streamlined, real-time, interconnected process.

Squillace crafted a captivating 3D character with Pixologic ZBrush, Autodesk Maya, Adobe Substance 3D Painter and Marmoset Toolbag — aggregating the data from the various tools in Omniverse. With USD, he seamlessly integrated his animations and made real-time adjustments without the need for constant file exports.

Simultaneously, Scionti constructed a stunning glacial environment using Autodesk 3ds Max, Adobe Substance 3D Painter, Quixel and Unreal Engine, uniting the various pieces from his tools in Omniverse. His work showcased the potential of Omniverse to foster real-time collaboration as he was able to seamlessly integrate Squillace’s character into his snowy world.

Advancing Interoperability and Real-Time Rendering

Marmoset Toolbag 4.06 provides significant improvements to interoperability and image fidelity for artists working across platforms and applications. This is achieved through updates to Marmoset’s OpenUSD support, allowing for seamless compatibility and connection with the Omniverse ecosystem.

The improved USD import and export capabilities enhance interoperability with popular content creation apps and creative toolkits like Autodesk Maya and Autodesk 3ds Max, SideFX Houdini and Unreal Engine.

Marmoset Toolbag 4.06 brings additional updates, including:

  • RTX-accelerated rendering and baking: Toolbag’s ray-traced renderer and texture baker are accelerated by NVIDIA RTX GPUs, providing up to a 2x improvement in render times and a 4x improvement in bake times.
  • Real-time denoising with OptiX: With NVIDIA RTX devices, creators can enjoy a smooth and interactive ray-tracing experience, enabling real-time navigation of the active viewport without visual artifacts or performance disruptions.
  • High DPI performance with DLSS image upscaling: The viewport now renders at a reduced resolution and uses AI-based technology to upscale images, improving performance while minimizing image-quality reductions.

Download Toolbag 4.06 directly from Marmoset to explore USD support and RTX-accelerated production tools. New users are eligible for a full-featured, 30-day free trial license.

Get Plugged Into the Omniverse 

Learn from industry experts on how OpenUSD is enabling custom 3D pipelines, easing 3D tool development and delivering interoperability between 3D applications in sessions from SIGGRAPH 2023, now available on demand.

Anyone can build their own Omniverse extension or Connector to enhance their 3D workflows and tools. Explore the Omniverse ecosystem’s growing catalog of connections, extensions, foundation applications and third-party tools.

For more resources on OpenUSD, explore the Alliance for OpenUSD forum or visit the AOUSD website.

Share your Marmoset Toolbag and Omniverse work as part of the latest community challenge, #SeasonalArtChallenge. Use the hashtag to submit a spooky or festive scene for a chance to be featured on the @NVIDIAStudio and @NVIDIAOmniverse social channels.

Get started with NVIDIA Omniverse by downloading the standard license free, or learn how Omniverse Enterprise can connect your team.

Developers can check out these Omniverse resources to begin building on the platform. 

Stay up to date on the platform by subscribing to the newsletter and following NVIDIA Omniverse on Instagram, LinkedIn, Medium, Threads and Twitter.

For more, check out our forums, Discord server, Twitch and YouTube channels.

Featured image courtesy of Armin Halač, Christian Nauck and Masuquddin Ahmed.

Read More

Foxconn and NVIDIA Amp Up Electric Vehicle Innovation

NVIDIA founder and CEO Jensen Huang joined Hon Hai (Foxconn) Chairman and CEO Young Liu to unveil the latest in their ongoing partnership to develop the next wave of intelligent electric vehicle (EV) platforms for the global automotive market.

This latest move, announced today at the fourth annual Hon Hai Tech Day in Taiwan, will help Foxconn realize its EV vision with a range of NVIDIA DRIVE solutions — including NVIDIA DRIVE Orin today and its successor, DRIVE Thor, down the road.

In addition, Foxconn will be a contract manufacturer of highly automated and autonomous, AI-rich EVs featuring the upcoming NVIDIA DRIVE Hyperion 9 platform, which includes DRIVE Thor and a state-of-the-art sensor architecture.

Next-Gen EVs With Extraordinary Performance  

The computational requirements for highly automated and fully self-driving vehicles are enormous. NVIDIA offers the most advanced, highest-performing AI car computers for the transportation industry, with DRIVE Orin selected for use by more than 25 global automakers.

Already a tier-one manufacturer of DRIVE Orin-powered electronic control units (ECUs), Foxconn will also manufacture ECUs featuring DRIVE Thor, once available.

The upcoming DRIVE Thor superchip harnesses advanced AI capabilities first deployed in NVIDIA Grace CPUs and Hopper and Ada Lovelace architecture-based GPUs — and is expected to deliver a staggering 2,000 teraflops of high-performance compute to enable functionally safe and secure intelligent driving.

Next-generation NVIDIA DRIVE Thor.

Heightened Senses

Unveiled at GTC last year, DRIVE Hyperion 9 is the latest evolution of NVIDIA’s modular development platform and reference architecture for automated and autonomous vehicles. Set to be powered by DRIVE Thor, it will integrate a qualified sensor architecture for level 3 urban and level 4 highway driving scenarios.

With a diverse and redundant array of high-resolution camera, radar, lidar and ultrasonic sensors, DRIVE Hyperion can process an extraordinary amount of safety-critical data to enable vehicles to deftly navigate their surroundings.

Another advantage of DRIVE Hyperion is its compatibility across generations, as it retains the same compute form factor and NVIDIA DriveWorks application programming interfaces, enabling a seamless transition from DRIVE Orin to DRIVE Thor and beyond.

Plus, DRIVE Hyperion can help speed development times and lower costs for electronics manufacturers like Foxconn, since the sensors available on the platform have cleared NVIDIA’s rigorous qualification processes.

The shift to software-defined vehicles with a centralized electronic architecture will drive the need for high-performance, energy-efficient computing solutions such as DRIVE Thor. By coupling it with the DRIVE Hyperion sensor architecture, Foxconn and its automotive customers will be better equipped to realize a new era of safe and intelligent EVs.

Since its inception, Hon Hai Tech Day has served as a launch pad for Foxconn to showcase its latest endeavors in contract design and manufacturing services and new technologies. These accomplishments span the EV sector and extend to the broader consumer electronics industry.

Catch more on Liu and Huang’s fireside chat at Hon Hai Tech Day.

Read More

Striking Performance: Large Language Models up to 4x Faster on RTX With TensorRT-LLM for Windows

Generative AI is one of the most important trends in the history of personal computing, bringing advancements to gaming, creativity, video, productivity, development and more.

And GeForce RTX and NVIDIA RTX GPUs, which are packed with dedicated AI processors called Tensor Cores, are bringing the power of generative AI natively to more than 100 million Windows PCs and workstations.

Today, generative AI on PC is getting up to 4x faster via TensorRT-LLM for Windows, an open-source library that accelerates inference performance for the latest AI large language models, like Llama 2 and Code Llama. This follows the announcement of TensorRT-LLM for data centers last month.

NVIDIA has also released tools to help developers accelerate their LLMs, including scripts that optimize custom models with TensorRT-LLM, TensorRT-optimized open-source models and a developer reference project that showcases both the speed and quality of LLM responses.

TensorRT acceleration is now available for Stable Diffusion in Automatic1111’s popular Web UI distribution. It speeds up the generative AI diffusion model by up to 2x over the previous fastest implementation.

Plus, RTX Video Super Resolution (VSR) version 1.5 is available as part of today’s Game Ready Driver release — and will be available in the next NVIDIA Studio Driver, releasing early next month.

Supercharging LLMs With TensorRT

LLMs are fueling productivity — engaging in chat, summarizing documents and web content, drafting emails and blogs — and are at the core of new pipelines of AI and other software that can automatically analyze data and generate a vast array of content.

TensorRT-LLM, a library for accelerating LLM inference, gives developers and end users the benefit of LLMs that can now operate up to 4x faster on RTX-powered Windows PCs.

At higher batch sizes, this acceleration significantly improves the experience for more sophisticated LLM use — like writing and coding assistants that output multiple, unique auto-complete results at once. The result is accelerated performance and improved quality that lets users select the best of the bunch.

TensorRT-LLM acceleration is also beneficial when integrating LLM capabilities with other technology, such as in retrieval-augmented generation (RAG), where an LLM is paired with a vector library or vector database. RAG enables the LLM to deliver responses based on a specific dataset, like user emails or articles on a website, to provide more targeted answers.
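
Here is a minimal sketch of that retrieval step, assuming FAISS as the vector index and an open-source sentence embedder. The toy corpus, embedding model and prompt format are illustrative stand-ins, not NVIDIA’s reference implementation:

    import numpy as np
    import faiss
    from sentence_transformers import SentenceTransformer

    # Toy corpus standing in for the indexed GeForce news articles.
    docs = [
        "NVIDIA ACE uses NeMo SteerLM to shape the emotional tone of responses.",
        "GeForce NOW adds new games to its cloud library every Thursday.",
    ]

    embedder = SentenceTransformer("all-MiniLM-L6-v2")  # model choice is an assumption
    doc_vecs = embedder.encode(docs, normalize_embeddings=True)

    index = faiss.IndexFlatIP(doc_vecs.shape[1])  # inner product = cosine on unit vectors
    index.add(np.asarray(doc_vecs, dtype=np.float32))

    question = "How does NVIDIA ACE generate emotional responses?"
    q_vec = embedder.encode([question], normalize_embeddings=True)
    _, ids = index.search(np.asarray(q_vec, dtype=np.float32), 1)

    # Stuff the retrieved passage into the prompt for the LLM.
    context = docs[ids[0][0]]
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    # `prompt` would then go to the TensorRT-LLM-accelerated Llama 2 model.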

To show this in practical terms, when the question “How does NVIDIA ACE generate emotional responses?” was asked of the Llama 2 base model, it returned an unhelpful response.

Better responses, faster.

Conversely, using RAG with recent GeForce news articles loaded into a vector library and connected to the same Llama 2 model not only returned the correct answer — using NeMo SteerLM — but did so much quicker with TensorRT-LLM acceleration. This combination of speed and proficiency gives users smarter solutions.

TensorRT-LLM will soon be available to download from the NVIDIA Developer website. TensorRT-optimized open source models and the RAG demo with GeForce news as a sample project are available at ngc.nvidia.com and GitHub.com/NVIDIA.

Automatic Acceleration

Diffusion models, like Stable Diffusion, are used to imagine and create stunning, novel works of art. Image generation is an iterative process that can take hundreds of cycles to achieve the perfect output. When done on an underpowered computer, this iteration can add up to hours of wait time.

TensorRT is designed to accelerate AI models through layer fusion, precision calibration, kernel auto-tuning and other capabilities that significantly boost inference efficiency and speed. This makes it indispensable for real-time applications and resource-intensive tasks.
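
As a hedged sketch of what building a TensorRT engine typically looks like in Python — the model.onnx path is a placeholder for any trained model exported to ONNX, and the fusion and auto-tuning described above happen automatically inside the build call:

    import tensorrt as trt

    logger = trt.Logger(trt.Logger.WARNING)
    builder = trt.Builder(logger)
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, logger)

    # "model.onnx" is a placeholder for a trained model exported to ONNX.
    with open("model.onnx", "rb") as f:
        if not parser.parse(f.read()):
            raise RuntimeError(parser.get_error(0))

    config = builder.create_builder_config()
    config.set_flag(trt.BuilderFlag.FP16)  # reduced precision for Tensor Cores

    # Layer fusion and kernel auto-tuning run during this build step.
    engine_bytes = builder.build_serialized_network(network, config)
    with open("model.engine", "wb") as f:
        f.write(engine_bytes)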

And now, TensorRT doubles the speed of Stable Diffusion.

Compatible with the most popular distribution, WebUI from Automatic1111, Stable Diffusion with TensorRT acceleration helps users iterate faster and spend less time waiting on the computer, delivering a final image sooner. On a GeForce RTX 4090, it runs 7x faster than the top implementation on Macs with an Apple M2 Ultra. The extension is available for download today.

The TensorRT demo of a Stable Diffusion pipeline provides developers with a reference implementation on how to prepare diffusion models and accelerate them using TensorRT. This is the starting point for developers interested in turbocharging a diffusion pipeline and bringing lightning-fast inferencing to applications.

Video That’s Super

AI is improving everyday PC experiences for all users. Streaming video — from nearly any source, like YouTube, Twitch, Prime Video, Disney+ and countless others — is among the most popular activities on a PC. Thanks to AI and RTX, it’s getting another update in image quality.

RTX VSR is a breakthrough in AI pixel processing that improves the quality of streamed video content by reducing or eliminating artifacts caused by video compression. It also sharpens edges and details.

Available now, RTX VSR version 1.5 further improves visual quality with updated models, de-artifacts content played in its native resolution and adds support for RTX GPUs based on the NVIDIA Turing architecture — both professional RTX and GeForce RTX 20 Series GPUs.

Retraining the VSR AI model helped it learn to accurately identify the difference between subtle details and compression artifacts. As a result, AI-enhanced images more accurately preserve details during the upscaling process. Finer details are more visible, and the overall image looks sharper and crisper.

RTX Video Super Resolution v1.5 improves detail and sharpness.

New with version 1.5 is the ability to de-artifact video played at the display’s native resolution. The original release only enhanced video when it was being upscaled. Now, for example, 1080p video streamed to a 1080p resolution display will look smoother as heavy artifacts are reduced.

RTX VSR now de-artifacts video played at its native resolution.

RTX VSR 1.5 is available today for all RTX users in the latest Game Ready Driver. It will be available in the upcoming NVIDIA Studio Driver, scheduled for early next month.

RTX VSR is among the NVIDIA software, tools, libraries and SDKs — like those mentioned above, plus DLSS, Omniverse, AI Workbench and others — that have helped bring over 400 AI-enabled apps and games to consumers.

The AI era is upon us. And RTX is supercharging it at every step of its evolution.

Read More

NVIDIA RTX Video Super Resolution Update Enhances Video Quality, Detail Preservation and Expands to GeForce RTX 20 Series GPUs

NVIDIA today announced an update to RTX Video Super Resolution (VSR) that delivers greater overall graphical fidelity with preserved details, upscaling for native videos and support for GeForce RTX 20 Series desktop and laptop GPUs.

For AI assists from RTX VSR and more — from enhanced creativity and productivity to blisteringly fast gaming — check out the RTX for AI page.

Plus, this week In the NVIDIA Studio, Twitch personality Runebee shares her inspiration, streaming tips and how she uses AI and RTX GPU acceleration.

And don’t forget to join the #SeasonalArtChallenge by submitting spooky Halloween-themed art in October and harvest- and fall-themed pieces in November. For inspiration, check out the hauntingly adorable work of artists like iryna.blender3d on Twitter.

The Super RTX VSR Update 1.5

RTX VSR’s AI model has been retrained to more accurately identify the difference between subtle details and compression artifacts to better preserve image details during the upscaling process. Finer details are more visible, and the overall image looks sharper and crisper than before.

RTX VSR v1.5 improves detail and sharpness.

RTX VSR version 1.5 will also de-artifact videos played at their native resolution — prior, only upscaled video could be enhanced. Providing a leap in graphical fidelity for laptop owners with 1080p screens, the updated RTX VSR makes 1080p resolution, which is popular for content and displays, look smoother at its native resolution, even with heavy artifacts.

RTX VSR now de-artifacts video played at native resolution.

And with expanded RTX VSR support, owners of GeForce RTX 20 Series GPUs can benefit from the same AI-enhanced video as those using RTX 30 and 40 Series GPUs.

RTX VSR 1.5 is available as part of the latest Game Ready Driver, available for download today. Content creators using NVIDIA Studio Drivers — designed to enhance features, reduce repetitiveness and dramatically accelerate creative workflows — can install the driver with RTX VSR when it releases in early November.

Runebee-lievable Streaming

Runebee has been livestreaming for over 10 years, providing a space for viewers to hang out and talk about games, movies or whatever else is going on in life. Over the years, she’s realized how common a desire for escapism is.

“Things aren’t always sunshine and rainbows, so it’s nice to have some company that can help take your mind off things,” said Runebee.

Runebee has amassed over 100K followers on Twitch, YouTube and Instagram, crediting her success to thorough preparation of her setup. Her technology-forward approach ensures efficiency and reliability — allowing her focus to be on performance.

“There’s a lot of planning involved in streaming, but at the end of the day, hitting the ‘start streaming’ button is the most important step, and NVIDIA GPU-acceleration is a massive factor in allowing it to go as smoothly as it does,” said Runebee.

“I never thought I’d have this smooth of a stream just by upgrading to a GeForce RTX 40 Series GPU.” – Runebee

OBS is Runebee’s preferred open-source software for video recording and livestreaming on Twitch. For maximum efficiency, Runebee deploys her GeForce RTX 4080 GPU, taking advantage of the eighth-generation NVIDIA encoder, NVENC, to independently encode video, which frees up the graphics card to focus on livestreaming.

“Streaming games and running OBS used to kill my CPU, and NVENC has taken so much stress off,” said Runebee. “I was hardly even able to stream PC games until I switched to NVENC.”

For livestreamers, RTX 40 Series GPUs can offer support for real-time AV1 hardware encoding, providing a 40% efficiency boost compared to H.264 and delivering higher quality than competing GPUs.

“As I started building more PCs with NVIDIA GPUs, I never had a reason to switch!” – Runebee

Runebee can export recordings of her livestreams with Adobe Premiere Pro in half the normally required time thanks to GeForce RTX 40 Series dual encoders working together, dividing the work evenly to double output.

The dual encoders are capable of recording up to 8K, 60 frames per second content in real time via GeForce Experience and OBS Studio.

Always looking to improve her livestreaming process, Runebee plans on experimenting with the NVIDIA Broadcast app, which transforms any room into a home studio by upgrading standard webcams, microphones and speakers into premium smart devices using the power of AI.

Runebee encourages those interested in livestreaming to at least give their potential passion project a shot. “It’s a great way to meet tons of new friends, become more articulate at describing the things you love — be it games or movies — and cultivate a community to share your passions with,” she said.

Twitch livestreamer Runebee’s setup.

Follow Runebee on Twitch.

Follow NVIDIA Studio on Instagram, Twitter and Facebook. Access tutorials on the Studio YouTube channel and get updates directly in your inbox by subscribing to the Studio newsletter. See notice regarding software product information.

Read More

From Skylines to Streetscapes: How SHoP Architects Brings Innovative Designs to Life

At SHoP Architects, a New York City-based architectural firm, Mengyi Fan and her team aim to inspire industry professionals to create visual masterpieces by incorporating emerging technologies.

Fan, the director of visualization at SHoP, has expertise that spans the fields of architectural visualization and design. She takes a definitive, novel and enduring approach to designing and planning architecture for city skylines and streetscapes.

Fan and her team work on various architecture visualization projects, from still renderings to real-time walkthroughs. They use multiple creative applications throughout the course of their projects, including Adobe Photoshop, Autodesk 3ds Max, Autodesk Revit and Epic Games’ Unreal Engine. SHoP also collaborates directly with architects at project kickoff, providing images and animations that facilitate quicker decision-making during the design process.

The team consistently integrates new technologies that allow them to explore untapped innovation opportunities, as well as boost research and development. Fan often incorporates real-time and traditional rendering, extended reality and AI into her creative workflows.

To capture all the details that bring the designs together, SHoP uses NVIDIA RTX A5500. Fan is also part of the NVIDIA RTX Ambassador Program, which is designed to amplify the work of professionals from diverse industries who are using RTX technology. Equipped with the latest capabilities of RTX, Fan hopes to continue pushing boundaries in real-time visualization, AI and digital twin applications.

All images courtesy of SHoP Architects.

Redefining Creative Experiences 

3D models play a critical role as the single source of truth, which is why SHoP designers need advanced technology to help them create detailed models and visualizations without creativity or productivity slowdowns.

Previously, the team used CPU-based offerings, which limited the scope of work and research and development they could take on. But with RTX, designers can create and communicate complex designs while continuously collaborating with others.

By tapping into RTX A5500, Fan can prioritize efficiency and high rendering quality without worrying about compute power limitations.

“NVIDIA’s professional RTX GPUs are currently known as the industry standard for graphics card solutions,” said Fan. “RTX provides us with the performance and power needed to do all the above without worrying about hardware constraints.”

The advanced features of the RTX GPUs allow SHoP designers to explore new ways of representation.

SHoP Architects’ projects have grown in scale, location and diversity, and Fan and her team are constantly learning and adapting from each project, drawing inspiration from diverse areas such as automotive, aviation, film and gaming.

Fan views RTX-powered tools as a means of opening up diverse approaches and solutions to be more widely adopted within the industry. And as an NVIDIA RTX Ambassador, she aims to push past technological boundaries by connecting with like-minded designers and creatives.

See more of Fan’s work below. Discover how NVIDIA RTX can help enhance architectural workflows and learn more about the NVIDIA RTX Ambassador Program.

Read More

UK Tech Festival Showcases Startups Using AI for Creative Industries

At one of the U.K.’s largest technology festivals, top enterprises and startups are this week highlighting their latest innovations, hosting workshops and celebrating the growing tech ecosystem based in the country’s southwest.

The Bristol Technology Festival today showcased the work of nine startups that recently participated in a challenge hosted by Digital Catapult — the U.K. authority on advanced digital technology — in collaboration with NVIDIA.

The challenge, which ran for four months, supported companies in developing a prototype or extending an innovation that could transform experiences using reality capture, real-time collaboration and creation, or cross-platform content delivery.

It’s part of MyWorld, an initiative for pioneering creative technology focused on the western U.K.

Each selected startup was given £50,000 to help develop projects that foster the advancement of generative AI, digital twins and other groundbreaking technologies for use in creative industries.

Lux Aeterna Explores Generative AI for Visual Effects

Emmy Award-winning independent visual effects studio Lux Aeterna — which is using generative AI and neural networks for VFX production — deployed its funds to develop a generative AI-powered text-to-image toolkit for creating maps, or 2D images used to represent aspects of a scene, object or effect.

At the Bristol Technology Festival, Lux Aeterna demonstrated this technology, powered by NVIDIA RTX 40 Series GPUs, with a focus on its ability to generate parallax occlusion maps, a method of creating the effect of depth for 3D textured surfaces.

“Our goal is to tackle the unique VFX challenges with bespoke AI-assisted solutions, and to put these tools of the future into the hands of our talented artists,” said James Pollock, creative technologist at Lux Aeterna. “NVIDIA’s insightful feedback on our work as a part of the MyWorld challenge has been invaluable in informing our strategy toward innovation in this rapidly changing space.”

Meaning Machine Brings AI to Game Characters, Dialogue

Meaning Machine, a studio pioneering gameplay that uses natural language AI, used its funds from the challenge to develop a generative AI system for in-game characters and dialogue. Its Game Consciousness technology enables in-game characters to accurately talk about their world, in real time, so that every line of dialogue reflects the game developer’s creative vision.

Meaning Machine’s demo at today’s showcase invited attendees to experience its interrogation game, “Dead Meat,” in which players must chat with an in-game character — a murder suspect — with the aim of manipulating them into giving a confession.

A member of the NVIDIA Inception program for cutting-edge startups, Meaning Machine powers its generative AI technology for game development using the NVIDIA NeMo framework for building, customizing and deploying large language models.

“NVIDIA NeMo enables us to deliver scalable model tuning and inference,” said Ben Ackland, cofounder and chief technology officer at Meaning Machine. “We see potential for Game Consciousness to transform blockbuster games — delivering next-gen characters that feel at home in bigger, deeper, more complex virtual worlds — and our collaboration with NVIDIA will help us make this a reality sooner.”

More Startups Showcase AI for Creative Industries

Additional challenge participants that hosted demos today at the Bristol Technology Festival include:

  • Black Laboratory, an NVIDIA Inception member demonstrating a live puppet-performance capture system, puppix, that can seamlessly transfer the physicality of puppets to digital characters.
  • IMPRESS, which is developing an AI-powered launchpad for self-publishing indie video games. It offers data-driven market research for game development, marketing campaign support, press engagement tools and more.
  • Larkhall, which is expanding Otto, its AI system that generates live, reactive visuals based on musical performances, as well as automatic, expressive captioning for speech-based performances.
  • Motion Impossible, which is building a software platform for centralized control of its AGITO systems — free-roaming, modular, camera dolly systems for filmmaking.
  • Zubr and Uninvited Guests, two companies collaborating on the development of augmented- and virtual-reality tools for designing futuristic urban environments.

“NVIDIA’s involvement in the MyWorld challenge, led by Digital Catapult, has created extraordinary value for the participating teams,” said Sarah Addezio, senior innovation partner and MyWorld program lead at Digital Catapult. “We’ve seen the benefit of our cohort having access to industry-leading technical and business-development expertise, elevating their projects in ways that would not have been possible otherwise.”

Learn more about NVIDIA Inception and NVIDIA generative AI technologies.

Read More