‘Create a Data Flywheel With AI,’ NVIDIA CEO Jensen Huang Tells Attendees at Snowflake Summit

AI gives every company an opportunity to turn its processes into a data flywheel, NVIDIA founder and CEO Jensen Huang told thousands of attendees Monday at the Snowflake Data Cloud Summit.

Companies need to “take all the most important processes they do, capture them in a data flywheel and turn that into the company’s AI to drive that flywheel even further,” said Huang, who joined from Taipei for a virtual fireside chat with Snowflake CEO Sridhar Ramaswamy in San Francisco.

The two executives described how the combination of the Snowflake AI Data Cloud and NVIDIA AI will simplify and accelerate enterprise AI.

“You want to jump on this train as fast as you can, don’t let it fly by because you can use it to transform your business or go into new businesses,” said Huang, the day after he gave a keynote kicking off COMPUTEX in Taiwan.

Snowflake Users Can Tap Into NVIDIA AI Enterprise

For example, businesses will be able to deploy Snowflake Arctic, an enterprise-focused large language model (LLM), in seconds using NVIDIA NIM inference microservices, part of the NVIDIA AI Enterprise software platform. 

Arctic was trained on NVIDIA H100 Tensor Core GPUs and is available on the NVIDIA API catalog, fully supported by NVIDIA TensorRT-LLM, software that accelerates generative AI inference.
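
To make that concrete, below is a minimal sketch of querying a hosted Arctic endpoint through an OpenAI-compatible chat API, the style of interface that NIM microservices and the NVIDIA API catalog typically expose. The base URL, model identifier and environment variable are assumptions for illustration, not details from this announcement.

```python
# Minimal sketch (assumptions noted in comments): query Snowflake Arctic
# through an OpenAI-compatible endpoint such as the NVIDIA API catalog
# or a self-hosted NIM microservice.
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://integrate.api.nvidia.com/v1",  # assumed catalog endpoint
    api_key=os.environ["NVIDIA_API_KEY"],            # hypothetical env variable
)

response = client.chat.completions.create(
    model="snowflake/arctic",                        # assumed model identifier
    messages=[{"role": "user",
               "content": "Summarize our data-governance policy in three bullets."}],
    max_tokens=256,
    temperature=0.2,
)
print(response.choices[0].message.content)
```

Swapping the base URL for a locally deployed NIM container leaves the client code unchanged, which is what makes this kind of deployment portable between cloud and on-premises environments.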

The two companies also will integrate Snowflake Cortex AI and NVIDIA NeMo Retriever, so businesses can link their AI-powered applications to information sources, ensuring highly accurate results with retrieval-augmented generation (RAG).

Ramaswamy gave examples of generative AI applications developed with the NVIDIA NeMo framework and Snowpark Container Services that will be available on Snowflake Marketplace for use by thousands of Snowflake’s customers.

“NVIDIA’s industry-leading accelerated computing is game-changing for our customers and our own research team that used it to create the state-of-the-art Arctic model for our customers,” said Ramaswamy.

To learn more, watch NVIDIA GTC on-demand sessions presented by Snowflake on how to build chatbots with a RAG architecture and how to leverage LLMs for life sciences.

NVIDIA Grace Hopper Superchip Accelerates Murex MX.3 Analytics Performance, Reduces Power Consumption

After the 2008 financial crisis and increased risk-management regulations that followed, Pierre Spatz anticipated banks would focus on reducing computing expenses.

As head of quantitative research at Murex, a trading and risk management software company based in Paris, Spatz adopted NVIDIA’s CUDA and GPU-accelerated computing, aiming for top performance and energy efficiency.

Always seeking the latest technologies, the company’s quants team has begun trials of the NVIDIA Grace Hopper Superchip. The effort is focused on helping customers better price and manage credit and market risk exposures of derivatives contracts.

More than 60,000 daily users in 65 countries rely on the Murex MX.3 platform. MX.3 assists banks, asset managers, pension funds and other financial institutions with their trading, risk and operations across asset classes.

Managing Risk With MX.3 Driven by Grace Hopper

Financial institutions need high-performance computing infrastructure to run risk models on vast amounts of data for pricing and risk calculations, and to deliver real-time decision-making capabilities.

MX.3 coverage includes both credit and market risk, fundamental review of the trading book and x-valuation adjustment (XVA). XVA is used for different types of valuation adjustments related to derivative contracts, such as the credit value adjustment (CVA), the margin value adjustment and the funding valuation adjustment.
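
For readers less familiar with these adjustments, CVA is the market price of counterparty default risk on a derivatives portfolio. A standard textbook discretization, shown here for context only and not as a description of Murex’s implementation, weights discounted expected exposure by the probability of default in each time interval:

```latex
% Generic discretized CVA, for context only (not Murex's model).
% R: recovery rate, D(t_i): discount factor, EE(t_i): expected exposure,
% PD(t_i): cumulative probability of counterparty default by time t_i.
\mathrm{CVA} \approx (1 - R) \sum_{i=1}^{n} D(t_i)\,\mathrm{EE}(t_i)\,
\bigl[\mathrm{PD}(t_i) - \mathrm{PD}(t_{i-1})\bigr]
```

Estimating the expected exposure profile typically requires Monte Carlo simulation across many market scenarios and netting sets, which is part of why these workloads map well to a mix of CPU and GPU computation.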

Murex is testing Grace Hopper on the MX.3 platform for XVA calculations, as well as for market risk calibration, pricing evaluation, sensitivity, and profit and loss calculations on various asset classes.

Grace Hopper brings both faster calculations and power savings to the Murex platform.

“On counterparty credit risk workloads such as CVA, Grace Hopper is the perfect fit, leveraging a heterogeneous architecture with a unique mix of CPU and GPU computations,” Spatz said. “On risk calculations, Grace is not only the fastest processor, but also far more power-efficient, making green IT a reality in the trading world.”

When running XVA workloads in MX.3, the Murex research and development lab has observed that Grace Hopper can deliver a 4x reduction in energy consumption and a 7x performance improvement compared with CPU-based systems.

Pricing FX Barrier Options in MX.3 With Grace Hopper 

To price foreign exchange (FX) barrier options, Murex used its latest flagship stochastic local volatility model and again saw impressive performance improvements when running on Grace Hopper. A barrier option is a derivative whose payoff depends on whether the underlying asset price reaches or crosses a specified threshold during the life of the option contract.
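
To make the payoff concrete, here is a toy sketch that prices an up-and-out call with a flat-volatility Black-Scholes Monte Carlo. It is illustrative only; it is not Murex’s stochastic local volatility model or the 2D PDE solver described next, and all parameters are made up.

```python
# Illustrative only: flat-volatility Black-Scholes Monte Carlo for an
# up-and-out call. Not Murex's stochastic local volatility / PDE approach.
import numpy as np

def up_and_out_call_mc(s0, strike, barrier, rate, vol, maturity,
                       n_paths=100_000, n_steps=252, seed=0):
    rng = np.random.default_rng(seed)
    dt = maturity / n_steps
    log_s = np.full(n_paths, np.log(s0))
    alive = np.ones(n_paths, dtype=bool)       # paths that have not hit the barrier
    drift = (rate - 0.5 * vol**2) * dt
    for _ in range(n_steps):
        log_s += drift + vol * np.sqrt(dt) * rng.standard_normal(n_paths)
        alive &= np.exp(log_s) < barrier       # knock out paths that touch the barrier
    payoff = np.where(alive, np.maximum(np.exp(log_s) - strike, 0.0), 0.0)
    return np.exp(-rate * maturity) * payoff.mean()

# Example: spot 100, strike 100, barrier 130, 2% rate, 20% vol, 1-year option.
print(up_and_out_call_mc(100.0, 100.0, 130.0, 0.02, 0.20, 1.0))
```

A production FX pricer would also handle domestic and foreign interest rates and a calibrated volatility surface; the point here is only the knock-out logic in the payoff.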

The pricing evaluation is done by solving a 2D partial differential equation, which runs more cost-effectively on the Arm-based NVIDIA Grace CPU in the GH200. Pricing this derivative with MX.3 on Grace Hopper is 2.3x faster than on an Intel Xeon Gold 6148.

On a watts-per-server basis, the NVIDIA Grace CPU is also 5x more power-efficient for FX barrier calculations.

Pointing to these results, Murex says NVIDIA’s next-generation accelerated computing platform is driving energy efficiency and cost savings for high-performance quantitative analytics in capital markets.

Learn about NVIDIA AI solutions for financial services.

Elevate Your Expertise: NVIDIA Introduces AI Infrastructure and Operations Training and Certification

NVIDIA has introduced a self-paced course, called AI Infrastructure and Operations Fundamentals, to provide enterprise professionals with essential training on the infrastructure and operational aspects of AI and accelerated computing. 

From enhancing speech recognition systems to powering self-driving cars, AI is transforming everyday life. The new course explains how to deploy and manage scalable infrastructure to support AI-based solutions, helping IT pros realize AI’s potential and stay competitive in the rapidly changing technological landscape. 

Course Overview  

The course is ideal for anyone seeking to expand their knowledge of AI and its applications. It was created and taught by NVIDIA experts with real-world experience and deep technical domain expertise.

The course is divided into three modules. The first, Introduction to AI, covers foundational AI concepts and principles. Learners will:   

  • Discover how AI is being applied in various sectors to drive innovation and efficiency
  • Trace the progression of AI from basic machine learning to advanced deep learning to generative AI — and learn how each phase unlocked new capabilities  
  • Explore how GPUs revolutionized AI, providing the computational power necessary for complex AI tasks  
  • Understand the importance of a robust software stack in ensuring optimal performance and efficiency  
  • Delve into the environments where AI workloads operate, whether on premises or in the cloud  

AI Infrastructure, the second module, dives into the critical infrastructure components that support AI operations. Learners will:  

  • Gain knowledge about the hardware that powers AI, including the latest advancements in compute platforms, networking and storage   
  • Explore practices that reduce data center carbon footprints and energy usage 
  • Discover how reference architectures can serve as a foundation for building the most effective AI designs  
  • Evaluate the benefits of transitioning from on-premises data centers to cloud-based solutions  

AI Operations, the final module, focuses on the practical aspect of managing AI infrastructure. Learners will:  

  • Gain insights into the tools and techniques that enable effective infrastructure management and monitoring  
  • Learn about orchestrating AI clusters and scheduling tasks to maximize performance and resource efficiency  

Certification: AI Infrastructure and Operations Associate  

Alongside the course, NVIDIA offers a new AI Infrastructure and Operations associate certification. This entry-level credential validates knowledge of the foundational concepts of adopting AI computing with NVIDIA solutions. Topics covered in this exam include: 

  • Accelerated computing use cases 
  • AI, machine learning and deep learning 
  • GPU architecture 
  • NVIDIA’s software suite 
  • Infrastructure and operation considerations for adopting NVIDIA solutions 

Whether attendees want to enhance existing skills, support current projects or embark on a new professional trajectory, this course and certification will build the knowledge and skills needed to excel in working with AI.

Learn more about this training and certification.  

GeForce NOW Brings the Heat With ‘World of Warcraft’

World of Warcraft comes to the cloud, part of the 17 games joining the GeForce NOW library, with seven available to stream this week.

Plus, it’s time to get rewarded. Get a free in-game mount in The Elder Scrolls Online starting today by opting into GeForce NOW’s Rewards program.

Heroes Rise to the Cloud

Dive into the immersive realms of World of Warcraft, including the latest expansion Dragonflight, the nostalgic journey of World of Warcraft Classic and the recently launched World of Warcraft Cataclysm Classic. These popular massively multiplayer online role-playing experiences from Blizzard Entertainment immerse players in legendary battles.

World of Warcraft: Dragonflight on GeForce NOW
Dragonriders fly best in the cloud.

Embark on a journey of endless adventure in the rich and dynamic universe of Azeroth in the latest modern expansion, World of Warcraft: Dragonflight. The expansive landscapes of the Dragon Isles are available to explore — even on the back of a fearsome dragon. Also available are the newly awakened Dracthyr Evokers, World of Warcraft’s first-ever playable race-and-class combo. GeForce NOW Priority and Ultimate members can get immersed in the cinematic gameplay with support for RTX ON.

World of Warcraft Cataclysm Classic on GeForce NOW
Witness the return of Deathwing.

Face the return of Deathwing the Destroyer, whose violent emergence shatters and reshapes the continent of Azeroth. Journey into an era of fire and destruction in World of Warcraft Cataclysm Classic and usher in a new era for Azeroth. The updated game brings new dungeons and raids, fresh race and class combinations, and more.

World of Warcraft Classic on GeForce NOW
Azeroth awaits.

Whether a seasoned adventurer or a newcomer to the game, head to the Azeroth of yesteryear in World of Warcraft Classic and relive the experience of the game as it was upon its initial launch, with a few new upgrades. Explore the Eastern Kingdoms and Kalimdor, venture into iconic dungeons or engage in legendary player-vs-player battles.

Experience it all with a GeForce NOW membership, which means no waiting for downloads or games to update, even for the upcoming World of Warcraft expansion The War Within.

Mount Up

GeForce NOW members get access to rewards that enhance the gaming experience. This week The Elder Scrolls Online 10-year celebration continues with an in-game reward for GeForce NOW members.

New member reward on GeForce NOW
Manes flow freely in the cloud.

Mounts offer a great way to travel the world and provide a completely different experience from traveling on foot. This new free reward provides members with a trusty companion beyond the starter option. The mount has a sunny disposition, matching its vibrant, multihued coat. It’s an excellent horse for a new rider or one who regularly ventures into treacherous situations.

Members can claim the free mount by opting into rewards and checking their email for instructions on how to redeem. Ultimate and Priority members can redeem starting today, while free members can claim it starting May 31. It’s available until June 30, first come, first served.

New Games, Assemble!

Capes on GeForce NOW
Turn-based strategy with a superhero twist.

Build a team of heroes and fight to take back the city in Capes, a turn-based strategy game from Daedalic Entertainment. Recruit, train and deploy a team to take back the city from the villains that hold it hostage. Level up heroes to gain access to new abilities and powerful upgrades — plus, each hero gains a unique team-up ability from each of their allies.

Check out the full list of new games this week:

  • The Rogue Prince of Persia (New release on Steam, May 27)
  • Capes (New release on Steam, May 29)
  • Lords of the Fallen (New release on Xbox, available on PC Game Pass, May 30)
  • Soulmask (New release on Steam, May 31)
  • Path of Exile (Steam)
  • World of Warcraft: Dragonflight (Battle.net)
  • World of Warcraft Classic (Battle.net)
  • World of Warcraft Cataclysm Classic (Battle.net)

And members can look for the following later this month:

  • Autopsy Simulator (New release on Steam, June 6)
  • Chornobyl Liquidators (New release on Steam, June 6)
  • SunnySide (New release on Steam, June 14)
  • Still Wakes the Deep (New release on Steam and Xbox, available on PC Game Pass, June 18)
  • Disney Speedstorm (Steam and Xbox, available on PC Game Pass)
  • Farm Together 2 (Steam)
  • Resident Evil Village (Steam)
  • Star Traders: Frontiers (Steam)
  • Street Fighter 6 (Steam)
  • Torque Drift 2 (Epic Games Store)

More to May

In addition to the 24 games announced last month, four more joined the GeForce NOW library:

  • Senua’s Saga: Hellblade II (New release on Steam and Xbox, available on PC Game Pass, May 21)
  • Serum (New release on Steam, May 23)
  • Palworld (Steam and Xbox, available on PC Game Pass)
  • Tomb Raider: Definitive Edition (Xbox, available on PC Game Pass)

Gestalt, Norland and SunnySide have delayed their launch dates to later this year. Stay tuned to GFN Thursday for updates.

From Tamriel to Teyvat, Night City to Sanctuary, GeForce NOW brings the world of PC gaming to nearly any device. Share your favorite gaming destinations all month long using #GreetingsFromGFN for a chance to be featured on the @NVIDIAGFN channels.

What are you planning to play this weekend? Let us know on X or in the comments below.

https://x.com/NVIDIAGFN/status/1795847572793274591

Riding the Wayve of AV 2.0, Driven by Generative AI

Generative AI is propelling AV 2.0, a new era in autonomous vehicle technology characterized by large, unified, end-to-end AI models capable of managing various aspects of the vehicle stack, including perception, planning and control.

London-based startup Wayve is pioneering this new era, developing autonomous driving technologies that can be built on NVIDIA DRIVE Orin and its successor NVIDIA DRIVE Thor, which uses the NVIDIA Blackwell GPU architecture designed for transformer, large language model (LLM) and generative AI workloads.

In contrast to AV 1.0’s focus on refining a vehicle’s perception capabilities using multiple deep neural networks, AV 2.0 calls for comprehensive in-vehicle intelligence to drive decision-making in dynamic, real-world environments.

Wayve, a member of the NVIDIA Inception program for cutting-edge startups, specializes in developing AI foundation models for autonomous driving, equipping vehicles with a “robot brain” that can learn from and interact with their surroundings.

“NVIDIA has been the oxygen of everything that allows us to train AI,” said Alex Kendall, cofounder and CEO of Wayve. “We train on NVIDIA GPUs, and the software ecosystem NVIDIA provides allows us to iterate quickly — this is what enables us to build billion-parameter models trained on petabytes of data.”

Generative AI also plays a key role in Wayve’s development process, enabling synthetic data generation so AV makers can use a model’s previous experiences to create and simulate novel driving scenarios.

The company is building embodied AI, a set of technologies that integrate advanced AI into vehicles and robots to transform how they respond to and learn from human behavior, enhancing safety.

Wayve recently announced its Series C investment round — with participation from NVIDIA — that will support the development and launch of the first embodied AI products for production vehicles. As Wayve’s core AI model advances, these products will enable manufacturers to efficiently upgrade cars to higher levels of driving automation, from L2+ assisted driving to L4 automated driving.

As part of its embodied AI development, Wayve launched GAIA-1, a generative AI model for autonomy that creates realistic driving videos using video, text and action inputs. It also launched LINGO-2, a driving model that links vision, language and action inputs to explain and determine driving behavior.

“One of the neat things about generative AI is that it allows you to combine different modes of data seamlessly,” Kendall said. “You can bring in the knowledge of all the texts, the general purpose reasoning and capabilities that we get from LLMs and apply that reasoning to driving — this is one of the more promising approaches that we know of to be able to get to true generalized autonomy and eventually L5 capabilities on the road.”

Decoding How NVIDIA RTX AI PCs and Workstations Tap the Cloud to Supercharge Generative AI

Editor’s note: This post is part of the AI Decoded series, which demystifies AI by making the technology more accessible, and showcases new hardware, software, tools and accelerations for GeForce RTX PC and RTX workstation users.

Generative AI is enabling new capabilities for Windows applications and games. It’s powering unscripted, dynamic NPCs, it’s enabling creators to generate novel works of art, and it’s helping gamers boost frame rates by up to 4x. But this is just the beginning.

As the capabilities and use cases for generative AI continue to grow, so does the demand for compute to support it.

Hybrid AI combines the onboard AI acceleration of NVIDIA RTX with scalable, cloud-based GPUs to effectively and efficiently meet the demands of AI workloads.

Hybrid AI, a Love Story

With growing AI adoption, app developers are looking for deployment options: AI running locally on RTX GPUs delivers high performance and low latency, and is always available — even when not connected to the internet. On the other hand, AI running in the cloud can run larger models and scale across many GPUs, serving multiple clients simultaneously. In many cases, a single application will use both.

Hybrid AI is a kind of matchmaker that harmonizes local PC and workstation compute with cloud scalability. It provides the flexibility to optimize AI workloads based on specific use cases, cost and performance. It helps developers ensure that AI tasks run where it makes the most sense for their specific applications.

Whether the AI is running locally or in the cloud, it gets accelerated by NVIDIA GPUs and NVIDIA’s AI stack, including TensorRT and TensorRT-LLM. That means less time staring at pinwheels of death and more opportunity to deliver cutting-edge, AI-powered features to users.
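
One common way to express that flexibility in application code is a single inference function that prefers a local OpenAI-compatible server when it is reachable and otherwise falls back to a cloud endpoint. The sketch below illustrates the pattern only; it is not an NVIDIA SDK, and the URLs, model name and environment variable are assumptions.

```python
# Hypothetical hybrid-inference routing: prefer a local OpenAI-compatible
# server (for example, one backed by an RTX GPU), fall back to the cloud.
# URLs, model name and the environment variable are illustrative.
import os

import httpx
from openai import OpenAI

LOCAL_URL = "http://localhost:8000/v1"                 # assumed local server
CLOUD_URL = "https://integrate.api.nvidia.com/v1"      # assumed cloud endpoint

def pick_client() -> OpenAI:
    try:
        httpx.get(f"{LOCAL_URL}/models", timeout=0.5)  # quick reachability probe
        return OpenAI(base_url=LOCAL_URL, api_key="not-needed-locally")
    except httpx.HTTPError:
        return OpenAI(base_url=CLOUD_URL, api_key=os.environ["NVIDIA_API_KEY"])

def generate(prompt: str) -> str:
    reply = pick_client().chat.completions.create(
        model="meta/llama3-8b-instruct",               # assumed model identifier
        messages=[{"role": "user", "content": prompt}],
    )
    return reply.choices[0].message.content

print(generate("Give me three tips for reducing render times."))
```

The routing policy could just as easily key off prompt length, latency targets or cost rather than simple reachability.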

A range of NVIDIA tools and technologies support hybrid AI workflows for creators, gamers, and developers.

Dream in the Cloud, Bring to Life on RTX

Generative AI has demonstrated its ability to help artists ideate, prototype and brainstorm new creations. One such solution, the cloud-based Generative AI by iStock — powered by NVIDIA Edify — is a generative photography service that was built for and with artists, training only on licensed content and with compensation for artist contributors.

Generative AI by iStock goes beyond image generation, providing artists with extensive tools to explore styles and variations, modify parts of an image or expand the canvas. With all these tools, artists can ideate numerous times and still bring ideas to life quickly.

Once the creative concept is ready, artists can bring it back to their local systems. RTX-powered PCs and workstations offer artists AI acceleration in more than 125 of the top creative apps to realize the full vision — whether it’s creating an amazing piece of artwork in Photoshop with local AI tools, animating the image with a parallax effect in DaVinci Resolve, or building a 3D scene from the reference image in Blender with ray-tracing acceleration and AI denoising in OptiX.

Hybrid ACE Brings NPCs to Life

Hybrid AI is also enabling a new realm of interactive PC gaming with NVIDIA ACE, allowing game developers and digital creators to integrate state-of-the-art generative AI models into digital avatars on RTX AI PCs.

Powered by AI neural networks, NVIDIA ACE lets developers and designers create non-playable characters (NPCs) that can understand and respond to human player text and speech. It leverages AI models, including speech-to-text models to handle natural language spoken aloud, to generate NPCs’ responses in real time.

A Hybrid Developer Tool That Runs Anywhere

Hybrid also helps developers build and tune new AI models. NVIDIA AI Workbench helps developers quickly create, test and customize pretrained generative AI models and LLMs on RTX GPUs. It offers streamlined access to popular repositories like Hugging Face, GitHub and NVIDIA NGC, along with a simplified user interface that enables data scientists and developers to easily reproduce, collaborate on and migrate projects.

Projects can be easily scaled up when additional performance is needed — whether to the data center, a public cloud or NVIDIA DGX Cloud — and then brought back to local RTX systems on a PC or workstation for inference and light customization. Data scientists and developers can leverage pre-built Workbench projects to chat with documents using retrieval-augmented generation (RAG), customize LLMs using fine-tuning, accelerate data science workloads with seamless CPU-to-GPU transitions and more.

The Hybrid RAG Workbench project provides a customizable RAG application that developers can run and adapt themselves. They can embed their documents locally and run inference either on a local RTX system, a cloud endpoint hosted on NVIDIA’s API catalog or using NVIDIA NIM microservices. The project can be adapted to use various models, endpoints and containers, and provides the ability for developers to quantize models to run on their GPU of choice.
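
As a minimal sketch of the RAG pattern such a project implements (illustrative only, not the Workbench project’s actual code; the embedding model name is an assumption), documents are embedded locally, the closest chunks are retrieved by cosine similarity and the result is prepended to the prompt sent to whichever endpoint is selected:

```python
# Minimal local RAG sketch: embed documents, retrieve the most similar ones
# and prepend them to the prompt. Not the Hybrid RAG Workbench project's code;
# the embedding model name is an assumption.
import numpy as np
from sentence_transformers import SentenceTransformer

documents = [
    "GeForce RTX GPUs accelerate local AI inference on PCs and workstations.",
    "NVIDIA AI Workbench streamlines moving projects between laptop and cloud.",
    "Retrieval-augmented generation grounds model answers in your own documents.",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")     # assumed embedding model
doc_vectors = embedder.encode(documents, normalize_embeddings=True)

def retrieve(question: str, k: int = 2) -> list[str]:
    q = embedder.encode([question], normalize_embeddings=True)[0]
    scores = doc_vectors @ q                           # cosine similarity of unit vectors
    return [documents[i] for i in np.argsort(scores)[::-1][:k]]

question = "How do I keep model answers grounded in my own files?"
context = "\n".join(retrieve(question))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
print(prompt)  # send to a local RTX system or a cloud endpoint, as described above
```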

NVIDIA GPUs power remarkable AI solutions locally on NVIDIA GeForce RTX PCs and RTX workstations and in the cloud. Creators, gamers and developers can get the best of both worlds with growing hybrid AI workflows.

Generative AI is transforming gaming, videoconferencing and interactive experiences of all kinds. Make sense of what’s new and what’s next by subscribing to the AI Decoded newsletter.

Tidy Tech: How Two Stanford Students Are Building Robots for Handling Household Chores

Imagine having a robot that could help you clean up after a party — or fold heaps of laundry. Chengshu Eric Li and Josiah David Wong, two Stanford University Ph.D. students advised by renowned American computer scientist Professor Fei-Fei Li, are making that dream come true. In this episode of the AI Podcast, host Noah Kravitz spoke with the two about their project, BEHAVIOR-1K, which aims to enable robots to perform 1,000 household chores, including picking up fallen objects or cooking. To train the robots, they’re using the NVIDIA Omniverse platform, as well as reinforcement and imitation learning techniques. Listen to hear more about the breakthroughs and challenges Li and Wong experienced along the way.

Stay tuned for more AI Podcast episodes recorded live from GTC.

Time Stamps

3:33: Background on the BEHAVIOR-1K project

5:00: Why use a simulated environment to train robots? 

6:48: Why build a new simulation engine instead of using an existing one? 

10:48: The process of training the robots to perform household chores

14:04: Some of the most complex tasks taught to the robots

19:07: How are large language models and large vision models affecting the progress of robotics?

24:09: What’s next for the project?  

You Might Also Like…

NVIDIA’s Annamalai Chockalingam on the Rise of LLMs – Ep. 206

Generative AI and large language models (LLMs) are stirring change across industries — but according to NVIDIA Senior Product Manager of Developer Marketing Annamalai Chockalingam, “we’re still in the early innings.” In the latest episode of NVIDIA’s AI Podcast, host Noah Kravitz spoke with Chockalingam about LLMs: what they are, their current state and their future potential.

How GluxKind Created Ella, the AI-Powered Smart Stroller – Ep. 193

Imagine a stroller that can drive itself, help users up hills, brake on slopes and provide alerts of potential hazards. That’s what GlüxKind has done with Ella, an award-winning smart stroller that uses the NVIDIA Jetson edge AI and robotics platform to power its AI features.

GANTheftAuto: Harrison Kinsley on AI-Generated Gaming Environments – Ep. 151

Machines have long played games – think of Deep Blue or AlphaGo. Now they’re building them. GANTheftAuto creator Harrison Kinsley talks about his creation on the latest episode of the AI Podcast.

NVIDIA’s Liila Torabi Talks the New Era of Robotics Through Isaac Sim – Ep. 147

Robots are not just limited to the assembly line. At NVIDIA, Liila Torabi works on making the next generation of robotics possible. Torabi is the senior product manager for Isaac Sim, a robotics and AI simulation platform powered by NVIDIA Omniverse. Torabi spoke with NVIDIA AI Podcast host Noah Kravitz about the new era of robotics, one driven by making robots smarter through AI.

Subscribe to the AI Podcast

Get the AI Podcast through iTunes, Google Play, Amazon Music, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn.

Make the AI Podcast better: Have a few minutes to spare? Fill out this listener survey.

NVIDIA Scoops Up Wins at COMPUTEX Best Choice Awards

Building on more than a dozen years of stacking wins at the COMPUTEX trade show’s annual Best Choice Awards, NVIDIA was today honored with BCAs for its latest technologies.

The NVIDIA GH200 Grace Hopper Superchip won the Computer and System Category Award; the NVIDIA Spectrum-X AI Ethernet networking platform won the Networking and Communication Category Award; and the NVIDIA AI Enterprise software platform won a Golden Award.

The awards — judged on the functionality, innovation and market potential of products exhibited at the leading computer and technology expo — were announced ahead of the show, which runs June 4-7 in Taipei.

NVIDIA founder and CEO Jensen Huang will deliver a COMPUTEX keynote address on Sunday, June 2, at 7 p.m. Taiwan time, at the NTU Sports Center and online.

NVIDIA AI Enterprise Takes Gold

NVIDIA AI Enterprise — a cloud-native software platform that streamlines the development and deployment of copilots and other generative AI applications — won a Golden Award.

The platform lifts the burden of maintaining and securing complex AI software, so businesses can focus on building and harnessing the technology’s game-changing insights.

Microservices that come with NVIDIA AI Enterprise — including NVIDIA NIM and NVIDIA CUDA-X — optimize model performance and run anywhere with enterprise-grade security, support and stability, offering users a smooth transition from prototype to production.

Plus, the platform’s ability to improve AI performance results in better overall utilization of computing resources. This means companies using NVIDIA AI Enterprise need fewer servers to support the same workloads, greatly reducing their energy costs and data center footprint.

More BCA Wins for NVIDIA Technologies

NVIDIA GH200 and Spectrum-X were named best in their respective categories.

The NVIDIA GH200 Grace Hopper Superchip is the world’s first truly heterogeneous accelerated platform for AI and high-performance computing workloads. It combines the power-efficient NVIDIA Grace CPU with an NVIDIA Hopper architecture-based GPU over a high-bandwidth 900GB/s coherent NVIDIA NVLink chip-to-chip interconnect.

The superchip — shipping worldwide and powering more than 40 AI supercomputers across global research centers, system makers and cloud providers — supercharges scientific innovation with accelerated computing and scale-out solutions for AI inference, large language models, recommenders, vector databases, HPC applications and more.

The Spectrum-X platform, featuring NVIDIA Spectrum SN5600 switches and NVIDIA BlueField-3 SuperNICs, is the world’s first Ethernet fabric built for AI, accelerating generative AI network performance 1.6x over traditional Ethernet fabrics.

It can serve as the backend AI fabric for any AI cloud or large enterprise deployment, and is available from major server manufacturers as part of the full NVIDIA AI stack.

NVIDIA Partners Recognized

Other BCA winners include NVIDIA partners Acer, ASUS, MSI and YUAN, which were given Golden Awards for their respective laptops, gaming motherboards and smart-city applications — all powered by NVIDIA technologies, such as NVIDIA GeForce RTX 4090 GPUs, the NVIDIA Studio platform for creative workflows and the NVIDIA Jetson platform for edge AI and robotics.

ASUS also won a Computer and System Category Award, while MSI won a Gaming and Entertainment Category Award.

Learn more about the latest generative AI, HPC and networking technologies by joining NVIDIA at COMPUTEX.

Into the Omniverse: SoftServe and Continental Drive Digitalization With OpenUSD and Generative AI

Editor’s note: This post is part of Into the Omniverse, a series focused on how artists, developers and enterprises can transform their workflows using the latest advances in OpenUSD and NVIDIA Omniverse.

Industrial digitalization is driving automotive innovation.

In response to the industry’s growing demand for seamless, connected driving experiences, SoftServe, a leading IT consulting and digital services provider, worked with Continental, a leading German automotive technology company, to develop Industrial Co-Pilot, a virtual agent powered by generative AI that enables engineers to streamline maintenance workflows.

SoftServe helps manufacturers like Continental further optimize their operations by integrating the Universal Scene Description, or OpenUSD, framework into virtual factory solutions — such as Industrial Co-Pilot — developed on the NVIDIA Omniverse platform.

OpenUSD offers the flexibility and extensibility organizations need to harness the full potential of digital transformation, streamlining operations and driving efficiency. Omniverse is a platform of application programming interfaces, software development kits and services that enable developers to easily integrate OpenUSD and NVIDIA RTX rendering technologies into existing software tools and simulation workflows.

Realizing the Benefits of OpenUSD

SoftServe and Continental’s Industrial Co-Pilot brings together generative AI and immersive 3D visualization to help factory teams increase productivity during equipment and production line maintenance. With the copilot, engineers can oversee production lines and monitor the performance of individual stations or the shop floor.

They can also interact with the copilot to conduct root cause analysis and receive step-by-step work instructions and recommendations, leading to reduced documentation processes and improved maintenance procedures. It’s expected that these advancements will contribute to increased productivity and a 10% reduction in maintenance effort and downtime.

In a recent Omniverse community livestream, Benjamin Huber, who leads advanced automation and digitalization in the user experience business area at Continental, highlighted the significance of the company’s collaboration with SoftServe and its adoption of Omniverse.

The Omniverse platform equips Continental and SoftServe developers with the tools needed to build a new era of AI-enabled industrial applications and services. And by breaking down data silos and fostering multi-platform cooperation with OpenUSD, SoftServe and Continental developers enable engineers to work seamlessly across disciplines and systems, driving efficiency and innovation throughout their processes.

“Any engineer, no matter what tool they’re working with, can transform their data into OpenUSD and then interchange data from one discipline to another, and from one tool to another,” said Huber.

This sentiment was echoed by Vasyl Boliuk, senior lead and test automation engineer at SoftServe, who shared how OpenUSD and Omniverse — along with other NVIDIA technologies like NVIDIA Riva, NVIDIA NeMo and NVIDIA NIM microservices — enabled SoftServe and Continental teams to develop custom large language models and connect them to new 3D workflows.

“OpenUSD allows us to add any attribute or any piece of metadata we want to our applications,” he said.
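
A small usd-core sketch shows what that flexibility can look like in practice. The prim path, attribute names and metadata key below are made up for illustration and are not taken from SoftServe’s or Continental’s pipelines.

```python
# Generic OpenUSD sketch (usd-core / pxr): author a prim with custom
# maintenance attributes and metadata. Names are illustrative only.
from pxr import Sdf, Usd, UsdGeom

stage = Usd.Stage.CreateNew("factory_station.usda")
xform = UsdGeom.Xform.Define(stage, "/Factory/Line01/Station07")
prim = xform.GetPrim()

# Custom, namespaced attributes live alongside the 3D description.
prim.CreateAttribute("maintenance:lastServiceDate",
                     Sdf.ValueTypeNames.String).Set("2024-05-02")
prim.CreateAttribute("maintenance:meanTimeBetweenFailuresHours",
                     Sdf.ValueTypeNames.Double).Set(1430.0)

# Free-form metadata on the prim itself, e.g. a link to an ERP asset record.
prim.SetCustomDataByKey("erpAssetId", "STATION-0007")

stage.GetRootLayer().Save()
```

Because these attributes live in the same USD layer stack as the geometry, any tool that reads OpenUSD can consume or override them without a bespoke exchange format.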

Boliuk, Huber and other SoftServe and Continental representatives joined the livestream to share more about the potential unlocked by these OpenUSD-powered solutions. Watch the replay to learn more.

By embracing cutting-edge technologies and fostering collaboration, SoftServe and Continental are helping reshape automotive manufacturing.

Get Plugged Into the World of OpenUSD

Watch SoftServe and Continental’s on-demand NVIDIA GTC talks to learn more about their virtual factory solutions and experience developing on NVIDIA Omniverse with OpenUSD.

Learn about the latest technologies driving the next industrial revolution by watching NVIDIA founder and CEO Jensen Huang’s COMPUTEX keynote on Sunday, June 2, at 7 p.m. Taiwan time.

Check out a new video series about how OpenUSD can improve 3D workflows. For more resources on OpenUSD, explore the Alliance for OpenUSD forum and visit the AOUSD website.

Get started with NVIDIA Omniverse by downloading the standard license free, access OpenUSD resources and learn how Omniverse Enterprise can connect teams. Follow Omniverse on Instagram, Medium and X. For more, join the Omniverse community on the forums, Discord server, Twitch and YouTube channels. 

Featured image courtesy of SoftServe.

Senua’s Story Continues: GeForce NOW Brings ‘Senua’s Saga: Hellblade II’ to the Cloud

Every week, GFN Thursday brings new games to the cloud, featuring some of the latest and greatest titles for members to play.

Leading the seven games joining GeForce NOW this week is the newest game in Ninja Theory’s Hellblade franchise, Senua’s Saga: Hellblade II. This day-and-date release expands the cloud gaming platform’s extensive library of over 1,900 games.

Members can also look forward to a new reward — a free in-game mount — for The Elder Scrolls Online starting Thursday, May 30. Get ready by opting into GeForce NOW’s Rewards program.

Senua Returns

Senua's Saga: Hellblade II screen
Head to the cloud to overcome the darkness.

In Senua’s Saga: Hellblade II, the sequel to the award-winning Hellblade: Senua’s Sacrifice, Senua returns in a brutal journey of survival through the myth and torment of Viking Iceland.

Intent on saving those who’ve fallen victim to the horrors of tyranny, Senua battles the forces of darkness within and without. Sink deep into the next chapter of Senua’s story, a crafted experience told through cinematic immersion, beautifully realized visuals and encapsulating sound.

Priority and Ultimate members can fully immerse themselves in Senua’s story with epic cinematic gameplay at higher resolutions and frame rates than free members. Ultimate members can stream at up to 4K and 120 frames per second with exclusive access to GeForce RTX 4080 SuperPODs in the cloud, even on underpowered devices.

Level Up With New Games

Check out the full list of new games this week:

  • Synergy (New release on Steam, May 21)
  • Senua’s Saga: Hellblade II (New release on Steam and Xbox, available on PC Game Pass, May 21)
  • Crown Wars: The Black Prince (New release on Steam, May 23)
  • Serum (New release on Steam, May 23)
  • Ships at Sea (New release on Steam, May 23)
  • Exo One (Steam)
  • Phantom Brigade (Steam)

What are you planning to play this weekend? Let us know on X or in the comments below.
