Bethesda’s ‘Fallout’ Titles Join GeForce NOW

Welcome to the wasteland, Vault Dwellers. Bethesda’s Fallout 4 and Fallout 76 are bringing post-nuclear adventures to the cloud.

These highly acclaimed action role-playing games lead 10 new titles joining GeForce NOW this week.

Announced as coming to GeForce NOW at CES, Honkai: Star Rail is targeting a release this quarter. Stay tuned for future updates.

Vault Into the Cloud

Adventurers needed, whether for mapping the irradiated wasteland or shaping the fate of humanity.

Fallout 4 on GeForce NOW
Don’t let Dogmeat venture out alone.

Embark on a journey through ruins of the post-apocalyptic Commonwealth in Fallout 4. As the sole survivor of Vault 111, navigate a world destroyed by nuclear war, make choices to reshape the wasteland and rebuild society one settlement at a time. With a vast, open world, dynamic crafting systems and a gripping storyline, the game offers an immersive single-player experience that challenges dwellers to emerge as beacons of hope for humanity’s remnants.

Fallout 76 on GeForce NOW
Dust off your Pip-Boy and stream ‘Fallout 76’ from the cloud.

Plus, in Fallout 76, head back to the early days of post-nuclear Appalachia and experience the Fallout universe’s largest, most dynamic world. Encounter unique challenges, build portable player homes called C.A.M.P.s, and cooperate or compete with other survivors in the mountainous lands of West Virginia.

Join the proud ranks of Vault survivors in the cloud today and stream these titles, including Creation Club content for Fallout 4, across devices. With longer gaming sessions and faster access to servers, GeForce NOW members can play anywhere, anytime, and stream at up to 4K resolution with an Ultimate membership. The games come just in time for those tuning into the Fallout series TV adaptation, released today, for a Fallout-filled week.

Go Big or Go Home

Gigantic: Rampage Edition on GeForce NOW
A larger-than-life MOBA, now streaming on GeForce NOW.

Gigantic: Rampage Edition promises big fun with epic 5v5 matches, crossplay support, an exciting roster of heroes and more. Rush to the cloud to jump into the latest game from Arc Games and team with four other players to control objectives and take down the opposing team’s mighty Guardian. Think fast, be bold and go gigantic!

Look forward to these new games this week:

  • Gigantic: Rampage Edition (New release on Steam, April 9)
  • Inkbound 1.0 (New release on Steam, April 9)
  • Broken Roads (New release on Steam, April 10)
  • Infection Free Zone (New release on Steam, April 11)
  • Shadow of the Tomb Raider: Definitive Edition (New release on Xbox and available on PC Game Pass, April 11)
  • Backpack Battles (Steam)
  • Fallout 4 (Steam)
  • Fallout 76 (Steam and Xbox, available on PC Game Pass)
  • Ghostrunner (Epic Games Store, free April 11-18)
  • Terra Invicta (Xbox, available on PC Game Pass)

What are you planning to play this weekend? Let us know on X or in the comments below.

Combating Corruption With Data: Cleanlab and Berkeley Research Group on Using AI-Powered Investigative Analytics  

Talk about scrubbing data. Curtis Northcutt, cofounder and CEO of Cleanlab, and Steven Gawthorpe, senior data scientist at Berkeley Research Group, speak about Cleanlab’s groundbreaking approach to data curation with Noah Kravitz, host of NVIDIA’s AI Podcast, in an episode recorded live at the NVIDIA GTC global AI conference. The startup’s tools enhance data reliability and trustworthiness through sophisticated error identification and correction algorithms. Northcutt and Gawthorpe provide insights into how AI-powered data analytics can help combat economic crimes and corruption and discuss the intersection of AI, data science and ethical governance in fostering a more just society.

Cleanlab is a member of the NVIDIA Inception program for cutting-edge startups.

Stay tuned for more episodes recorded live from GTC.

Time Stamps

1:05: Northcutt on Cleanlab’s inception and mission
2:41: What Cleanlab offers its customers
4:24: The human element in Cleanlab’s data verification
8:57: Gawthorpe on the core functions, aims of the Berkeley Research Group
10:42: Gawthorpe’s approach to data collection and analysis in fraud investigations
16:38: Cleanlab’s one-click solution for generating machine learning models
18:30: The evolution of machine learning and its impact on data analytics
20:07: Future directions in data-driven crimefighting

You Might Also Like…

The Case for Generative AI in the Legal Field – Ep. 210

Thomson Reuters, the global content and technology company, is transforming the legal industry with generative AI. In the latest episode of NVIDIA’s AI Podcast, host Noah Kravitz spoke with Thomson Reuters’ Chief Product Officer David Wong about its potential — and implications.

Making Machines Mindful: NYU Professor Talks Responsible AI – Ep. 205

Artificial intelligence is now a household term. Responsible AI is hot on its heels. Julia Stoyanovich, associate professor of computer science and engineering at NYU and director of the university’s Center for Responsible AI, wants to make the terms “AI” and “responsible AI” synonymous.

MLCommons’ David Kanter, NVIDIA’s Daniel Galvez on Publicly Accessible Datasets – Ep. 167

On this episode of NVIDIA’s AI Podcast, host Noah Kravitz spoke with David Kanter, founder and executive director of MLCommons, and NVIDIA senior AI developer technology engineer Daniel Galvez, about the democratization of access to speech technology and how ML Commons is helping advance the research and development of machine learning for everyone.

Metaspectral’s Migel Tissera on AI-Based Data Management – Ep. 155

Migel Tissera, cofounder and CTO of Metaspectral, speaks with NVIDIA AI Podcast host Noah Kravitz about how Metaspectral’s technologies help space explorers make quicker and better use of the massive amounts of image data they collect out in the cosmos.

Subscribe to the AI Podcast

Get the AI Podcast through iTunes, Google Podcasts, Google Play, Amazon Music, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn.

Make the AI Podcast better: Have a few minutes to spare? Fill out this listener survey.

The Building Blocks of AI: Decoding the Role and Significance of Foundation Models

Editor’s note: This post is part of the AI Decoded series, which demystifies AI by making the technology more accessible, and which showcases new hardware, software, tools and accelerations for RTX PC users.

Skyscrapers start with strong foundations. The same goes for apps powered by AI.

A foundation model is an AI neural network trained on immense amounts of raw data, generally with unsupervised learning.

It’s a type of artificial intelligence model trained to understand and generate human-like language. Imagine giving a computer a huge library of books to read and learn from, so it can understand the context and meaning behind words and sentences, just like a human does.

A foundation model’s deep knowledge base and ability to communicate in natural language make it useful for a broad range of applications, including text generation and summarization, copilot production and computer code analysis, image and video creation, and audio transcription and speech synthesis.

ChatGPT, one of the most notable generative AI applications, is a chatbot built with OpenAI’s GPT foundation model. Now in its fourth version, GPT-4 is a large multimodal model that can ingest text or images and generate text or image responses.

Online apps built on foundation models typically access the models from a data center. But many of these models, and the applications they power, can now run locally on PCs and workstations with NVIDIA GeForce and NVIDIA RTX GPUs.

Foundation Model Uses

Foundation models can perform a variety of functions, including:

  • Language processing: understanding and generating text
  • Code generation: analyzing and debugging computer code in many programming languages
  • Visual processing: analyzing and generating images
  • Speech: generating text to speech and transcribing speech to text

They can be used as is or with further refinement. Rather than training an entirely new AI model for each generative AI application — a costly and time-consuming endeavor — users commonly fine-tune foundation models for specialized use cases.

Pretrained foundation models are remarkably capable, thanks to prompts and data-retrieval techniques like retrieval-augmented generation, or RAG. Foundation models also excel at transfer learning, which means they can be trained to perform a second task related to their original purpose.

For example, a general-purpose large language model (LLM) designed to converse with humans can be further trained to act as a customer service chatbot capable of answering inquiries using a corporate knowledge base.
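The retrieval-augmented pattern described above can be sketched in a few lines of Python. This is a toy sketch, not any specific NVIDIA or ChatRTX implementation: the word-overlap retriever and the knowledge-base snippets are invented for illustration, and in a real system the assembled prompt would be sent to an actual foundation model rather than simply returned.

```python
def retrieve(query: str, knowledge_base: list[str], top_k: int = 2) -> list[str]:
    """Rank documents by naive word overlap with the query (toy retriever).

    Production RAG systems use embedding similarity instead, but the flow
    is the same: score every document, keep the most relevant few.
    """
    query_words = set(query.lower().split())
    scored = sorted(
        knowledge_base,
        key=lambda doc: len(query_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]


def build_rag_prompt(query: str, knowledge_base: list[str]) -> str:
    """Assemble the augmented prompt a foundation model would receive."""
    context = "\n".join(retrieve(query, knowledge_base))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"


# Hypothetical corporate knowledge base for a customer service chatbot.
kb = [
    "Refunds are processed within 5 business days.",
    "Support hours are 9am to 5pm Pacific time.",
    "Shipping is free on orders over $50.",
]
prompt = build_rag_prompt("When are refunds processed?", kb)
```

The key design point is that the model itself is unchanged: relevance comes from what gets stuffed into the prompt at query time, which is why RAG can keep a pretrained model current without retraining it.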

Enterprises across industries are fine-tuning foundation models to get the best performance from their AI applications.

Types of Foundation Models

More than 100 foundation models are in use — a number that continues to grow. LLMs and image generators are the two most popular types of foundation models. And many of them are free for anyone to try — on any hardware — in the NVIDIA API Catalog.

LLMs are models that understand natural language and can respond to queries. Google’s Gemma is one example; it excels at text comprehension, transformation and code generation. When asked about the astronomer Cornelius Gemma, it shared that his “contributions to celestial navigation and astronomy significantly impacted scientific progress.” It also provided information on his key achievements, legacy and other facts.

Extending the collaboration on Gemma models, which are accelerated with NVIDIA TensorRT-LLM on RTX GPUs, Google’s CodeGemma brings powerful yet lightweight coding capabilities to the community. CodeGemma models are available as 7B and 2B pretrained variants that specialize in code completion and code generation tasks.

MistralAI’s Mistral LLM can follow instructions, complete requests and generate creative text. In fact, it helped brainstorm the headline for this blog, including the requirement that it use a variation of the series’ name “AI Decoded,” and it assisted in writing the definition of a foundation model.

Hello, world, indeed.

Meta’s Llama 2 is a cutting-edge LLM that generates text and code in response to prompts.

Mistral and Llama 2 are available in the NVIDIA ChatRTX tech demo, running on RTX PCs and workstations. ChatRTX lets users personalize these foundation models by connecting them to personal content — such as documents, doctors’ notes and other data — through RAG. It’s accelerated by TensorRT-LLM for quick, contextually relevant answers. And because it runs locally, results are fast and secure.

Image generators like StabilityAI’s Stable Diffusion XL and SDXL Turbo let users generate images and stunning, realistic visuals. StabilityAI’s video generator, Stable Video Diffusion, uses a generative diffusion model to synthesize video sequences with a single image as a conditioning frame.

Multimodal foundation models can simultaneously process more than one type of data — such as text and images — to generate more sophisticated outputs.

A multimodal model that works with both text and images could let users upload an image and ask questions about it. These types of models are quickly working their way into real-world applications like customer service, where they can serve as faster, more user-friendly versions of traditional manuals.

Kosmos 2 is Microsoft’s groundbreaking multimodal model designed to understand and reason about visual elements in images.

Think Globally, Run AI Models Locally 

GeForce RTX and NVIDIA RTX GPUs can run foundation models locally.

The results are fast and secure. Rather than relying on cloud-based services, users can harness apps like ChatRTX to process sensitive data on their local PC without sharing the data with a third party or needing an internet connection.

Users can choose from a rapidly growing catalog of open foundation models to download and run on their own hardware. This lowers costs compared with using cloud-based apps and APIs, and it eliminates latency and network connectivity issues. Generative AI is transforming gaming, videoconferencing and interactive experiences of all kinds. Make sense of what’s new and what’s next by subscribing to the AI Decoded newsletter.

NVIDIA Joins $110 Million Partnership to Help Universities Teach AI Skills

The Biden Administration has announced a new $110 million AI partnership between Japan and the United States that includes an initiative to fund research through a collaboration between the University of Washington and the University of Tsukuba.

NVIDIA is committing $25 million in a collaboration with Amazon that aims to bring the latest technologies to the University of Washington, in Seattle, and the University of Tsukuba, which is northeast of Tokyo.

Universities around the world are preparing students with crucial AI skills by providing access to the high-performance computing capabilities of supercomputers.

“This collaboration between the University of Washington, University of Tsukuba, Amazon, and NVIDIA will help provide the research and workforce training for our regions’ tech sectors to keep up with the profound impacts AI is having across every sector of our economy,” said Jay Inslee, governor of Washington State.

Creating AI Opportunities for Students

NVIDIA has been investing in universities for decades, providing computing resources, advanced training curricula, donations and other support to give students and professors access to high-performance computing (HPC) for groundbreaking research results.

NVIDIA founder and CEO Jensen Huang and his wife, Lori Huang, donated $50 million to their alma mater, Oregon State University — where they met and earned engineering degrees — to help build one of the world’s fastest supercomputers in a facility bearing their names. This computing center will help students research, develop and apply AI across Oregon State’s top-ranked programs in agriculture, computer sciences, climate science, forestry, oceanography, robotics, water resources, materials sciences and more.

The University of Florida recently unveiled Malachowsky Hall, made possible by a $50 million donation from NVIDIA co-founder Chris Malachowsky. This new building, along with a previous donation of an AI supercomputer, is enabling the University of Florida to offer world-class AI training and research opportunities.

Strengthening US-Japan AI Research Collaboration

The U.S.-Japan HPC alliance will advance AI research and development and support the two nations’ global leadership in cutting-edge technology.

The University of Washington and University of Tsukuba initiative will support research in critical areas where AI can drive impactful change, such as robotics, healthcare, climate change and atmospheric science.

In addition to the university partnership, NVIDIA recently announced a collaboration with Japan’s National Institute of Advanced Industrial Science and Technology (AIST) on AI and quantum technology.

Addressing Worldwide AI Talent Shortage

Demand for key AI skills is creating a talent shortage worldwide. Some experts calculate there has been a fivefold increase in demand for these skills as a percentage of total U.S. jobs. Universities around the world are looking for ways to prepare students with new skills for the workforce, and corporate-university partnerships are a key tool to help bridge the gap.

At GTC 2024, NVIDIA unveiled new professional certifications in generative AI to help the next generation of developers gain technical credibility in this important domain.

Learn more about NVIDIA generative AI courses here and here.

Broadcasting Breakthroughs: NVIDIA Holoscan for Media, Available Now, Transforms Live Media With Easy AI Integration

Whether delivering live sports programming, streaming services, network broadcasts or content on social platforms, media companies face a daunting landscape.

Viewers are increasingly opting for interactive and personalized content. Virtual reality and augmented reality continue their drive into the mainstream. New video compression standards are challenging traditional computing infrastructure. And AI is having an impact across the board.

In a situation this dynamic, media companies will benefit most from AI-enabled media solutions that flexibly align with their changing development and delivery needs.

NVIDIA Holoscan for Media, available now, is a software-defined platform that enables developers to easily build live media applications, supercharge them with AI and then deploy them across media platforms.

A New Approach to Media Application Development 

Holoscan for Media offers a new approach to development in live media. It simplifies application development by providing an internet protocol (IP)-based, cloud-native architecture that isn’t constrained by dedicated hardware, environments or locations. Instead, it integrates open-source and ubiquitous technologies and streamlines application delivery to customers, all while optimizing costs.

Traditional application development for the live media market relies on dedicated hardware. Because software is tied to that hardware, developers are constrained when it comes to innovating or upgrading applications.

Each deployment type, whether on premises or in the cloud, requires its own build, making development costly and inefficient. Beyond designing an application’s user interface and core functionalities, developers have to build out additional infrastructure services, further eating into research and development budgets.

The most significant challenge is incorporating AI, due to the complexity of building an AI software stack. This prevents many applications in pilot programs from moving to production.

Holoscan for Media eases the integration of AI into application development due to its underlying architecture, which enables software-defined video to be deployed on the same software stack as AI applications, including generative AI-based tools. This benefits vendors and research and development departments looking to incorporate AI apps into live video.

Since the platform is cloud-native, the same architecture can run independent of location, whether in the cloud, on premises or at the edge. Additionally, it’s not tied to a specific device, field-programmable gate array or appliance.

The Holoscan for Media architecture includes services like authentication, logging and security, as well as features that help broadcasters migrate to IP-based technologies, including the SMPTE ST 2110 transport protocol, the precision time protocol for timing and synchronization, and the NMOS controller and registry for dynamic device management.

A Growing Ecosystem of Partners

Beamr, Comprimato, Lawo, Media.Monks, Pebble, RED Digital Cinema, Sony Corporation and Telestream are among the early adopters already transforming live media with Holoscan for Media.

“We use Holoscan for Media as the core infrastructure for our broadcast and media workflow, granting us powerful scale to deliver interest-based content across a wide range of channels and platforms,” said Lewis Smithingham, senior vice president of innovation special operations at Media.Monks, a provider of software-defined production workflows.

“By compartmentalizing applications and making them interoperable, Holoscan for Media allows for easy adoption of new innovations from many different companies in one platform,” said Jeff Goodman, vice president of product management at RED Digital Cinema, a manufacturer of professional digital cinema cameras. “It takes much of the integration complexity out of the equation and will significantly increase the pace of innovation. We are very excited to be a part of it.”

“We believe NVIDIA Holoscan for Media is one of the paths forward to enabling the development of next-generation products and services for the industry, allowing the scaling of GPU power as needed,” said Masakazu Murata, senior general manager of media solutions business at Sony Corporation. “Our M2L-X software switcher prototype running on Holoscan for Media demonstrates how customers can run Sony’s solutions on GPU clusters.”

“Telestream is committed to transforming the media landscape, enhancing efficiency and content experiences without sacrificing quality or user-friendliness,” said Charlie Dunn, senior vice president and general manager at Telestream, a provider of digital media software and solutions. “We’ve seamlessly integrated the Holoscan for Media platform into our INSPECT IP video monitoring solution to achieve a clear and efficient avenue for ST 2110 compliance.”

Experience Holoscan for Media at NAB Show

These partners will showcase how they’re using NVIDIA Holoscan for Media at NAB Show, an event for the broadcast, media and entertainment industry, taking place April 13-17 in Las Vegas.

Explore development on Holoscan for Media and discover applications running on the platform at the Dell Technologies booth. Learn more about NVIDIA’s presence at NAB Show, including details on sessions and demos on generative AI, software-defined broadcast and immersive graphics.

Apply for access to NVIDIA Holoscan for Media today.

Start Up Your Engines: NVIDIA and Google Cloud Collaborate to Accelerate AI Development

NVIDIA and Google Cloud have announced a new collaboration to help startups around the world accelerate the creation of generative AI applications and services.

The announcement, made today at Google Cloud Next ‘24 in Las Vegas, brings together the NVIDIA Inception program for startups and the Google for Startups Cloud Program to widen access to cloud credits, go-to-market support and technical expertise to help startups deliver value to customers faster.

Qualified members of NVIDIA Inception, a global program supporting more than 18,000 startups, will have an accelerated path to using Google Cloud infrastructure with access to Google Cloud credits, offering up to $350,000 for those focused on AI.

Google for Startups Cloud Program members can join NVIDIA Inception and gain access to technological expertise, NVIDIA Deep Learning Institute course credits, NVIDIA hardware and software, and more. Eligible members of the Google for Startups Cloud Program also can participate in NVIDIA Inception Capital Connect, a platform that gives startups exposure to venture capital firms interested in the space.

High-growth emerging software makers of both programs can also gain fast-tracked onboarding to Google Cloud Marketplace, co-marketing and product acceleration support.

This collaboration is the latest in a series of announcements the two companies have made to help ease the costs and barriers associated with developing generative AI applications for enterprises of all sizes. Startups in particular are constrained by the high costs associated with AI investments.

It Takes a Full-Stack AI Platform

In February, Google DeepMind unveiled Gemma, a family of state-of-the-art open models. NVIDIA, in collaboration with Google, recently launched optimizations across all NVIDIA AI platforms for Gemma, helping to reduce customer costs and speed up innovative work for domain-specific use cases.

Teams from the companies worked closely together to accelerate the performance of Gemma — built from the same research and technology used to create Google DeepMind’s most capable model yet, Gemini — with NVIDIA TensorRT-LLM, an open-source library for optimizing large language model inference, when running on NVIDIA GPUs.

NVIDIA NIM microservices, part of the NVIDIA AI Enterprise software platform, together with Google Kubernetes Engine (GKE) provide a streamlined path for developing AI-powered apps and deploying optimized AI models into production. Built on inference engines including NVIDIA Triton Inference Server and TensorRT-LLM, NIM supports a wide range of leading AI models and delivers seamless, scalable AI inferencing to accelerate generative AI deployment in enterprises.

The Gemma family of models, including Gemma 7B, RecurrentGemma and CodeGemma, are available from the NVIDIA API catalog for users to try from a browser, prototype with the API endpoints and self-host with NIM.

Google Cloud has made it easier to deploy the NVIDIA NeMo framework across its platform via GKE and Google Cloud HPC Toolkit. This enables developers to automate and scale the training and serving of generative AI models, allowing them to rapidly deploy turnkey environments through customizable blueprints that jump-start the development process.

NVIDIA NeMo, part of NVIDIA AI Enterprise, is also available in Google Cloud Marketplace, providing customers another way to easily access NeMo and other frameworks to accelerate AI development.

Further widening the availability of NVIDIA-accelerated generative AI computing, Google Cloud also announced that A3 Mega will be generally available next month. The instances expand its A3 virtual machine family, powered by NVIDIA H100 Tensor Core GPUs, and will double the GPU-to-GPU network bandwidth of A3 VMs.

Google Cloud’s new Confidential VMs on A3 will also include support for confidential computing to help customers protect the confidentiality and integrity of their sensitive data and secure applications and AI workloads during training and inference — with no code changes while accessing H100 GPU acceleration. These GPU-powered Confidential VMs will be available in Preview this year.

Next Up: NVIDIA Blackwell-Based GPUs

NVIDIA’s newest GPUs based on the NVIDIA Blackwell platform will be coming to Google Cloud early next year in two variations: the NVIDIA HGX B200 and the NVIDIA GB200 NVL72.

The HGX B200 is designed for the most demanding AI, data analytics and high performance computing workloads, while the GB200 NVL72 is designed for next-frontier, massive-scale, trillion-parameter model training and real-time inferencing.

The NVIDIA GB200 NVL72 connects 36 Grace Blackwell Superchips, each with two NVIDIA Blackwell GPUs combined with an NVIDIA Grace CPU over a 900GB/s chip-to-chip interconnect, supporting up to 72 Blackwell GPUs in one NVIDIA NVLink domain and 130TB/s of bandwidth. It overcomes communication bottlenecks and acts as a single GPU, delivering 30x faster real-time LLM inference and 4x faster training compared to the prior generation.

NVIDIA GB200 NVL72 is a multi-node rack-scale system that will be combined with Google Cloud’s fourth generation of advanced liquid-cooling systems.

NVIDIA announced last month that NVIDIA DGX Cloud, an AI platform for enterprise developers that’s optimized for the demands of generative AI, is generally available on A3 VMs powered by H100 GPUs. DGX Cloud with GB200 NVL72 will also be available on Google Cloud in 2025.

NVIDIA Ranked by Fortune at No. 3 on ‘100 Best Companies to Work For’ List

NVIDIA jumped to No. 3 on the latest list of America’s 100 Best Companies to Work For by Fortune magazine and Great Place to Work.

It’s the company’s eighth consecutive year and highest ranking yet on the widely followed list, published today, which more than a thousand businesses vie to land on. NVIDIA ranked sixth last year.

“Since the COVID pandemic, employees are increasingly prioritizing work-life balance, mission alignment and empathetic workplaces,” Fortune wrote, with hotelier Hilton taking the top spot, followed by Cisco.

“Even as the broader tech sector has shed tens of thousands of jobs, NVIDIA continued its remarkable streak of nearly 15 years without any layoffs,” Fortune noted in its writeup. NVIDIA also was cited for the company’s “flat structure,” which encourages employees to solve problems quickly and collaboratively through projects.

Survey Says: 97% Are Proud to Share They Work at NVIDIA

To identify the top 100, Fortune conducted a detailed employee survey with Great Place to Work that received more than 1.3 million responses from people in the U.S. The survey showed that 97% of NVIDIANs are proud to tell others where they work.

While many tech companies faced a challenging 2023, with an uncertain economy and several of the biggest employers laying off thousands of workers, NVIDIA focused on managing costs, encouraging innovation and offering unique benefits and compensation that supported employees.

Learn more about NVIDIA life, culture and careers.

‘The Elder Scrolls Online’ Joins GeForce NOW for Game’s 10th Anniversary

Rain or shine, a new month means new games. GeForce NOW kicks off April with nearly 20 new games, seven of which are available to play this week.

GFN Thursday celebrates the 10-year anniversary of ZeniMax Online Studios’ Elder Scrolls Online by bringing the award-winning online role-playing game (RPG) to the cloud this week.

Plus, the GeForce NOW Ultimate membership comes to gamers in Japan for the first time, with new GeForce RTX 4080 SuperPODs online today.

The Rising Sun Goes Ultimate

Japan and GeForce NOW
Get ready to drift into the cloud.

GeForce NOW is rolling out the green carpet to gamers in Japan, expanding next-generation cloud gaming worldwide. The Ultimate membership tier is now available to gamers in the region, delivering up to 4K gaming at up to 120 frames per second, all at ultra-low latency — even on devices without the latest hardware.

Gamers in Japan can now stream triple-A titles from some of the world’s largest publishers via the cloud. Capcom’s Street Fighter 6 and Resident Evil Village will be coming to GeForce NOW at a later date for members to stream at the highest performance.

GeForce NOW will operate in Japan alongside GeForce NOW Alliance partner and telecommunications company KDDI, which currently offers its customers access to GeForce RTX 3080-powered servers, in addition to its mobile benefits. Plus, new GFNA partners in other regions will be announced this year — stay tuned to GFN Thursdays for details.

A Decade of Adventure

Elder Scrolls Online on GeForce NOW
The cloud is slay-ing.

Discover Tamriel from the comfort of almost anywhere with GeForce NOW. Explore the Elder Scrolls universe solo or alongside thousands of other players in The Elder Scrolls Online as it joins the cloud this week for members.

For a decade, Elder Scrolls Online has cultivated a vibrant community of millions of players and a legacy of exciting stories, characters and adventures. Players have explored Morrowind, Summerset, Skyrim and more, thanks to regular updates and chapter releases. The title’s anniversary celebrations kick off in Amsterdam this week, and fans worldwide can join in by streaming the game from the cloud.

Set during Tamriel’s Second Era, a millennium before The Elder Scrolls V: Skyrim, The Elder Scrolls Online has players exploring a massive, ever-growing world. Together they can encounter memorable quests, challenging dungeons, player vs. player battles and more. Gamers can play their way by customizing their characters, looting and crafting new gear, and unlocking and developing their abilities.

Experience the epic RPG with an Ultimate membership and venture forth in the cloud with friends, tapping eight-hour gaming sessions and exclusive access to servers. Ultimate members can effortlessly explore the awe-inspiring fantasy world with the ability to stream at up to 4K and 120 fps, or experience the game at ultrawide resolutions on supported devices.

April Showers Bring New Games

MEGAMAN X DiVE OFFLINE on GeForce NOW
X marks the cloud.

Dive into a new adventure with MEGA MAN X DiVE Offline from Capcom. It’s the offline, reimagined version of Mega Man X, featuring the franchise’s classic action, over 100 characters from the original series and an all-new story with hundreds of stages to play. Strengthen characters and weapons with a variety of power-ups — then test them out in the side-scrolling action.

Catch it alongside other new games joining the cloud this week:

  • ARK: Survival Ascended (New release on Xbox, available on PC Game Pass, April 1)
  • Thief (New release on Epic Games Store, free from April 4-11)
  • Sons of Valhalla (New release on Steam, April 5)
  • Elder Scrolls Online (Steam and Epic Games Store)
  • MEGA MAN X DiVE Offline (Steam)
  • SUPERHOT: MIND CONTROL DELETE (Xbox, available on PC Game Pass)
  • Turbo Golf Racing 1.0 (Xbox, available on PC Game Pass)

And members can look for the following throughout the rest of the month:

  • Dead Island 2 (New release on Steam, April 22)
  • Phantom Fury (New release on Steam, April 23)
  • Oddsparks: An Automation Adventure (New release on Steam, April 24)
  • 9-Bit Armies: A Bit Too Far (Steam)
  • Backpack Battles (Steam)
  • Dragon’s Dogma 2 Character Creator & Storage (Steam)
  • Evil West (Xbox, available on PC Game Pass)
  • Islands of Insight (Steam)
  • Lightyear Frontier (Steam and Xbox, available on PC Game Pass)
  • Manor Lords (New release on Steam and Xbox, available on PC Game Pass)
  • Metaball (Steam)
  • Tortuga – A Pirate’s Tale (Steam)

Making the Most of March

In addition to the 30 games announced last month, six more joined the GeForce NOW library:

  • Zoria: Age of Shattering (New release on Steam, March 7)
  • Deus Ex: Mankind Divided (New release on Epic Games Store, free, March 14)
  • Dragon’s Dogma 2 (New release on Steam, March 21)
  • Diablo IV (Xbox, available on PC Game Pass)
  • Granblue Fantasy: Relink (Steam)
  • Space Engineers (Xbox, available on PC Game Pass)

Some titles didn’t make it in March. Crown Wars: The Black Prince and Breachway have delayed their launch dates to later this year, and Portal: Revolution will join GeForce NOW in the future. Stay tuned to GFN Thursday for updates.

What are you planning to play this weekend? Let us know on X or in the comments below.

Read More

A New Lens: Dotlumen CEO Cornel Amariei on Assistive Technology for the Visually Impaired


Dotlumen is illuminating a new technology to help people with visual impairments navigate the world.

In this episode of NVIDIA’s AI Podcast, recorded live at the NVIDIA GTC global AI conference, host Noah Kravitz spoke with the Romanian startup’s founder and CEO, Cornel Amariei, about developing its flagship Dotlumen Glasses.

Equipped with sensors and powered by AI, the glasses compute a safely walkable path for visually impaired individuals and offer haptic — or tactile — feedback on how to proceed via corresponding vibrations. Amariei further discusses the process and challenges of developing assistive technology and its potential for enhancing accessibility.

Dotlumen is a member of the NVIDIA Inception program for cutting-edge startups.

Stay tuned for more episodes recorded live from GTC.

Time Stamps

0:52: Background on the glasses

4:28: User experience of the glasses

7:29: How the glasses sense the physical world and compute a walkable path

18:07: The hardest part of the development process

22:20: Expected release and availability of the glasses

25:57: Dotlumen’s technical breakthrough moments

30:19: Other assistive technologies to look out for

You Might Also Like…

Personalized Health: Viome’s Guru Banavar Discusses Startup’s AI-Driven Approach – Ep. 216

Viome Chief Technology Officer Guru Banavar discusses how the startup’s innovations in AI and genomics advance personalized health and wellness.

Cardiac Clarity: Dr. Keith Channon Talks Revolutionizing Heart Health With AI – Ep. 212

Here’s some news to still beating hearts: AI is helping bring some clarity to cardiology. Caristo Diagnostics has developed an AI-powered solution for detecting coronary inflammation in cardiac CT scans.

Matice Founder Jessica Whited on Harnessing Regenerative Species for Medical Breakthroughs – Ep. 198

Scientists at Matice Biosciences are using AI to study the regeneration of tissues in animals known as super-regenerators, such as salamanders and planarians. The goal of the research is to develop new treatments that will help humans heal from injuries without scarring.

How GluxKind Created Ella, the AI-Powered Smart Stroller – Ep. 193

Imagine a stroller that can drive itself, help users up hills, brake on slopes and provide alerts of potential hazards. That’s what GlüxKind has done with Ella, an award-winning smart stroller that uses the NVIDIA Jetson edge AI and robotics platform to power its AI features.

Subscribe to the AI Podcast

Get the AI Podcast through iTunes, Google Podcasts, Google Play, Amazon Music, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn.

Make the AI Podcast better: Have a few minutes to spare? Fill out this listener survey.


Coming Up ACEs: Decoding the AI Technology That’s Enhancing Games With Realistic Digital Humans


Editor’s note: This post is part of the AI Decoded series, which demystifies AI by making the technology more accessible, and which showcases new hardware, software, tools and accelerations for RTX PC users.

Digital characters are leveling up.

Non-playable characters often play a crucial role in video game storytelling, but since they’re usually designed with a fixed purpose, they can get repetitive and boring — especially in vast worlds where there are thousands.

Thanks in part to incredible advances in visual computing like ray tracing and DLSS, video games are more immersive and realistic than ever, making dry encounters with NPCs especially jarring.

Earlier this year, production microservices for the NVIDIA Avatar Cloud Engine launched, giving game developers and digital creators an ace up their sleeve when it comes to making lifelike NPCs. ACE microservices allow developers to integrate state-of-the-art generative AI models into digital avatars in games and applications. With ACE microservices, NPCs can dynamically interact and converse with players in-game and in real time.

Leading game developers, studios and startups are already incorporating ACE into their titles, bringing new levels of personality and engagement to NPCs and digital humans.

Bring Avatars to Life With NVIDIA ACE

The process of creating NPCs starts with providing them a backstory and purpose, which helps guide the narrative and ensures contextually relevant dialogue. Then, ACE subcomponents work together to build avatar interactivity and enhance responsiveness.

NPCs tap up to four AI models to hear, process, generate dialogue and respond.

The player’s voice first goes into NVIDIA Riva, a technology that builds fully customizable, real-time conversational AI pipelines and turns chatbots into engaging and expressive assistants using GPU-accelerated multilingual speech and translation microservices.

With ACE, Riva’s automatic speech recognition (ASR) feature processes what was said and uses AI to deliver a highly accurate transcription in real time. Explore a Riva-powered demo of speech-to-text in a dozen languages.

The transcription then goes into an LLM — such as Google’s Gemma, Meta’s Llama 2 or Mistral — and taps Riva’s neural machine translation to generate a natural language text response. Next, Riva’s Text-to-Speech functionality generates an audio response.

Finally, NVIDIA Audio2Face (A2F) generates facial expressions that can be synced to dialogue in many languages. With the microservice, digital avatars can display dynamic, realistic emotions streamed live or baked in during post-processing.

The AI network automatically animates face, eyes, mouth, tongue and head motions to match the selected emotional range and level of intensity. And A2F can automatically infer emotion directly from an audio clip.

Each step happens in real time to ensure fluid dialogue between the player and the character. And the tools are customizable, giving developers the flexibility to build the types of characters they need for immersive storytelling or worldbuilding.
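The flow above can be sketched in code. The following is a minimal illustration of the hear-think-speak-animate loop, not the actual ACE or Riva API — every function here (transcribe_speech, generate_reply, synthesize_audio, animate_face) is a hypothetical stand-in for the corresponding microservice.

```python
# Hypothetical sketch of one ACE-style dialogue turn. Function names and
# return values are placeholders, not real Riva/A2F calls.

def transcribe_speech(audio: bytes) -> str:
    """Stand-in for Riva ASR: player audio in, text transcription out."""
    return "Where can I find the blacksmith?"

def generate_reply(transcript: str, backstory: str) -> str:
    """Stand-in for an LLM call conditioned on the NPC's backstory."""
    return f"({backstory}) You asked: '{transcript}' Follow the main road east."

def synthesize_audio(reply_text: str) -> bytes:
    """Stand-in for Riva text-to-speech."""
    return reply_text.encode("utf-8")

def animate_face(reply_audio: bytes) -> list[str]:
    """Stand-in for Audio2Face: infer lip-sync/emotion keyframes from audio."""
    return ["mouth_open", "smile"]

def npc_turn(player_audio: bytes, backstory: str) -> tuple[bytes, list[str]]:
    """One real-time dialogue turn: hear, process, respond, animate."""
    transcript = transcribe_speech(player_audio)   # speech recognition
    reply = generate_reply(transcript, backstory)  # dialogue generation
    audio = synthesize_audio(reply)                # speech synthesis
    frames = animate_face(audio)                   # facial animation
    return audio, frames
```

In a real deployment, each stand-in would be a network call to the corresponding GPU-accelerated microservice, with the four stages pipelined to keep the turn within conversational latency.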

Born to Roll

At GDC and GTC, developers and platform partners showcased demos leveraging NVIDIA ACE microservices — from interactive NPCs in gaming to powerful digital human nurses.

Ubisoft is exploring new types of interactive gameplay with dynamic NPCs. NEO NPCs, the product of its latest research and development project, are designed to interact in real time with players, their environment and other characters, opening up new possibilities for dynamic and emergent storytelling.

The capabilities of these NEO NPCs were showcased through demos, each focused on different aspects of NPC behaviors, including environmental and contextual awareness; real-time reactions and animations; and conversation memory, collaboration and strategic decision-making. Combined, the demos spotlighted the technology’s potential to push the boundaries of game design and immersion.

Using Inworld AI technology, Ubisoft’s narrative team created two NEO NPCs, Bloom and Iron, each with their own background story, knowledge base and unique conversational style. Inworld technology also provided the NEO NPCs with intrinsic knowledge of their surroundings, as well as interactive responses powered by Inworld’s LLM. NVIDIA A2F provided facial animations and lip syncing for the two NPCs in real time.

Inworld and NVIDIA set GDC abuzz with a new technology demo called Covert Protocol, which showcased NVIDIA ACE technologies and the Inworld Engine. In the demo, players controlled a private detective who completed objectives based on the outcome of conversations with NPCs on the scene. Covert Protocol unlocked social simulation game mechanics with AI-powered digital characters that acted as bearers of crucial information, presented challenges and catalyzed key narrative developments. This enhanced level of AI-driven interactivity and player agency is set to open up new possibilities for emergent, player-specific gameplay.

Built on Unreal Engine 5, Covert Protocol uses the Inworld Engine and NVIDIA ACE, including NVIDIA Riva ASR and A2F, to augment Inworld’s speech and animation pipelines.

In the latest version of the NVIDIA Kairos tech demo built in collaboration with Convai, which was shown at CES, Riva ASR and A2F were used to significantly improve NPC interactivity. Convai’s new framework allowed the NPCs to converse among themselves and gave them awareness of objects, enabling them to pick up and deliver items to desired areas. Furthermore, NPCs gained the ability to lead players to objectives and traverse worlds.

Digital Characters in the Real World

The technology used to create NPCs is also being used to animate avatars and digital humans. Going beyond gaming, task-specific generative AI is moving into healthcare, customer service and more.

NVIDIA collaborated with Hippocratic AI at GTC to extend its healthcare agent solution, showcasing the potential of a generative AI healthcare agent avatar. More work is underway to develop a super-low-latency inference platform to power real-time use cases.

“Our digital assistants provide helpful, timely and accurate information to patients worldwide,” said Munjal Shah, cofounder and CEO of Hippocratic AI. “NVIDIA ACE technologies bring them to life with cutting-edge visuals and realistic animations that help better connect to patients.”

Internal testing of Hippocratic’s initial AI healthcare agents is focused on chronic care management, wellness coaching, health risk assessments, social determinants of health surveys, pre-operative outreach and post-discharge follow-up.

UneeQ is an autonomous digital human platform focused on AI-powered avatars for customer service and interactive applications. UneeQ integrated the NVIDIA A2F microservice into its platform and combined it with its Synanim ML synthetic animation technology to create highly realistic avatars for enhanced customer experiences and engagement.

“UneeQ combines NVIDIA animation AI with our own Synanim ML synthetic animation technology to deliver real-time digital human interactions that are emotionally responsive and deliver dynamic experiences powered by conversational AI,” said Danny Tomsett, founder and CEO at UneeQ.

AI in Gaming

ACE is one of the many NVIDIA AI technologies that bring games to the next level.

  • NVIDIA DLSS is a breakthrough graphics technology that uses AI to increase frame rates and improve image quality on GeForce RTX GPUs.
  • NVIDIA RTX Remix enables modders to easily capture game assets, automatically enhance materials with generative AI tools and quickly create stunning RTX remasters with full ray tracing and DLSS.
  • NVIDIA Freestyle, accessed through the new NVIDIA app beta, lets users personalize the visual aesthetics of more than 1,200 games through real-time post-processing filters, with features like RTX HDR, RTX Dynamic Vibrance and more.
  • The NVIDIA Broadcast app transforms any room into a home studio, giving livestreamers AI-enhanced voice and video tools, including noise and echo removal, virtual background and AI green screen, auto-frame, video noise removal and eye contact.

Experience the latest and greatest AI-powered features with NVIDIA RTX PCs and workstations, and make sense of what’s new, and what’s next, with AI Decoded.

Get weekly updates directly in your inbox by subscribing to the AI Decoded newsletter.
