Pittsburgh Steels Itself for Innovation With Launch of NVIDIA AI Tech Community

Serving as a bridge for academia, industry and public-sector groups to partner on artificial intelligence innovation, NVIDIA is launching its inaugural AI Tech Community in Pittsburgh, Pennsylvania.

Collaborations with Carnegie Mellon University and the University of Pittsburgh, as well as startups, enterprises and organizations based in the “city of bridges,” are part of the new NVIDIA AI Tech Community initiative, announced today during the NVIDIA AI Summit in Washington, D.C.

The initiative aims to supercharge public-private partnerships across communities rich with potential for AI-driven technological transformation.

Two NVIDIA joint technology centers will be established in Pittsburgh to tap into expertise in the region.

NVIDIA’s Joint Center with Carnegie Mellon University (CMU) for Robotics, Autonomy and AI will equip higher-education faculty, students and researchers with the latest technologies and boost innovation in the fields of AI and robotics.

NVIDIA’s Joint Center with the University of Pittsburgh for AI and Intelligent Systems will focus on computational opportunities across the health sciences, including applications of AI in clinical medicine and biomanufacturing.

CMU — the nation’s No. 1 AI university according to U.S. News & World Report — has pioneered work in autonomous vehicles and natural language processing.

CMU’s Robotics Institute, the world’s largest university-affiliated robotics research group, brings a diverse group of more than a thousand faculty, staff, students, post-doctoral fellows and visitors together to solve humanity’s toughest challenges through robotics.

The University of Pittsburgh — designated as an R1 research university at the forefront of innovation — is ranked No. 6 among U.S. universities in research funding from the National Institutes of Health, topping more than $1 billion in research expenditures in fiscal year 2022 and ranking No. 14 among U.S. universities granted utility patents.

The university has a long history of learning-technology innovations that are interdisciplinary and conducted within research-practice partnerships. By prioritizing inclusivity and practical experience without technical barriers, Pitt is leading the way in democratizing AI education in healthcare and medicine.

By working with these universities, NVIDIA aims to accelerate the innovation, commercialization and operationalization of a technical community for physical AI, robotics, autonomous systems and AI across the nation — and the globe.

These centers will tap into NVIDIA’s full-stack AI platform and accelerated computing expertise to gear up tomorrow’s technology leaders for next-generation innovation.

Establishing the Centers for AI Development 

Generative AI and accelerated computing are transforming workflows across use cases. Three key AI platforms form the engine behind this transformation: NVIDIA DGX for AI training, NVIDIA Omniverse for simulation and NVIDIA Jetson for edge computing.

Through the new centers and public-sector-sponsored research opportunities, NVIDIA will provide CMU and Pitt with access to these and more of the company’s latest AI software and frameworks — such as NVIDIA Isaac Lab for robot learning, NVIDIA Isaac Sim for designing and testing robots, NVIDIA NeMo for custom generative AI and NVIDIA NIM microservices, available through the NVIDIA AI Enterprise software platform.

Advanced NVIDIA technological support can help accelerate the research groups’ workflows and enhance the scalability and resiliency of their AI applications.

In addition, the universities will have access to certain generative AI, data science and accelerated computing resources through the NVIDIA Deep Learning Institute, which provides training to meet diverse learning needs and upskill students and developers in AI.

“Pairing Carnegie Mellon University’s existing deep expertise and resources in AI and robotics with NVIDIA’s cutting-edge platform, software and tools has tremendous potential to power Pittsburgh’s already vibrant innovation ecosystem,” said Theresa Mayer, vice president for research at CMU. “This unique collaboration will accelerate innovation, commercialization and operationalization of robotics and autonomy, advancing the best impacts of AI on society.”

“Pitt has a long history and extraordinary research strengths in life sciences and learning sciences,” said Rob A. Rutenbar, senior vice chancellor for research at the University of Pittsburgh. “By focusing on computational and AI opportunities across these ‘meds and eds’ areas, we plan to leverage our collaboration with NVIDIA to explore new ways to connect these breakthroughs to improved health and education outcomes for everybody.”

Fostering Cross-Industry Collaboration

As part of the AI Tech Community initiative, NVIDIA is also increasing its engagement with Pittsburgh-based members of the NVIDIA Inception program for cutting-edge AI startups and the NVIDIA Connect program for software development companies and service providers.

For example, Inception member Lovelace AI is developing AI solutions using NVIDIA accelerated computing and CUDA to enhance the analysis of kinetic data, providing predictive analytics and actionable insights for national security customers.

Skild AI, a startup founded by two Carnegie Mellon professors, is developing a scalable robotics foundation model, called Skild Brain, that can easily adapt across hardware and tasks.

Skild AI is exploring NVIDIA Isaac Lab, a unified, modular framework for robot learning built on the NVIDIA Isaac Sim reference application for designing, simulating and training AI-based robots.

NVIDIA is also engaging with Pittsburgh’s broader robotics ecosystem through its collaborations with the Pittsburgh Robotics Network — which speeds the commercialization of robotics, AI and other advanced technologies — and technology accelerators like AlphaLab and the Robotics Factory at Innovation Works, which supports startups based in the city that are focused on AI, robotics and autonomy.

And through its Deep Learning Institute, which has trained more than 650,000 people, NVIDIA is committed to furthering AI workforce development worldwide.

Learn more about how NVIDIA is propelling the next era of computing in higher education and research, including at the NVIDIA AI Summit, running through Oct. 9. NVIDIA Vice President of Developer Programs Greg Estes will discuss scaling AI skills and economic growth through public-private collaboration.

Featured image courtesy of Wikimedia Commons.

Foxconn to Build Taiwan’s Fastest AI Supercomputer With NVIDIA Blackwell

NVIDIA and Foxconn are building Taiwan’s largest supercomputer, marking a milestone in the island’s AI advancement.

The project, Hon Hai Kaohsiung Super Computing Center, revealed Tuesday at Hon Hai Tech Day, will be built around NVIDIA’s groundbreaking Blackwell architecture and feature the GB200 NVL72 platform, which includes a total of 64 racks and 4,608 Tensor Core GPUs.

Expected to deliver more than 90 exaflops of AI performance, the machine would easily rank as the fastest in Taiwan.

Foxconn plans to use the supercomputer, once operational, to power breakthroughs in cancer research, large language model development and smart city innovations, positioning Taiwan as a global leader in AI-driven industries.

Foxconn’s “three-platform strategy” focuses on smart manufacturing, smart cities and electric vehicles. The new supercomputer will play a pivotal role in supporting Foxconn’s ongoing efforts in digital twins, robotic automation and smart urban infrastructure, bringing AI-assisted services to urban areas like Kaohsiung.

Construction has started on the new supercomputer, which will be housed in Kaohsiung, Taiwan. The first phase is expected to be operational by mid-2025, with full deployment targeted for 2026.

The project will integrate NVIDIA technologies such as the NVIDIA Omniverse and Isaac robotics platforms for AI and digital twin applications to help transform manufacturing processes.

“Powered by NVIDIA’s Blackwell platform, Foxconn’s new AI supercomputer is one of the most powerful in the world, representing a significant leap forward in AI computing and efficiency,” said Foxconn Vice President and Spokesperson James Wu.

The GB200 NVL72 is a state-of-the-art data center platform optimized for AI and accelerated computing.

Each rack features 36 NVIDIA Grace CPUs and 72 NVIDIA Blackwell GPUs connected via NVIDIA NVLink technology, delivering 130 TB/s of bandwidth.

NVIDIA NVLink Switch allows the 72-GPU system to function as a single, unified GPU. This makes it ideal for training large AI models and executing complex inference tasks in real time on trillion-parameter models.
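
The figures quoted above are internally consistent, which a quick back-of-envelope check confirms. A minimal Python sketch, using only the per-rack and rack-count numbers from the article (the CPU total is derived, not stated in the post):

```python
# Sanity check of the GB200 NVL72 figures cited above.
RACKS = 64
GPUS_PER_RACK = 72   # NVIDIA Blackwell GPUs per NVL72 rack
CPUS_PER_RACK = 36   # NVIDIA Grace CPUs per NVL72 rack

total_gpus = RACKS * GPUS_PER_RACK
total_cpus = RACKS * CPUS_PER_RACK

print(total_gpus)  # 4608, matching the 4,608 Tensor Core GPUs cited
print(total_cpus)  # 2304 (derived; not stated in the article)
```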

Taiwan-based Foxconn, officially known as Hon Hai Precision Industry Co., is the world’s largest electronics manufacturer, known for producing a wide range of products, from smartphones to servers, for the world’s top technology brands.

With a vast workforce and manufacturing facilities across the globe, Foxconn plays a key role in supplying the world’s technology infrastructure. A leader in smart manufacturing, it is among the pioneers of industrial AI, digitalizing its factories in NVIDIA Omniverse.

Foxconn was also one of the first companies to use NVIDIA NIM microservices in the development of domain-specific large language models, or LLMs, embedded into a variety of internal systems and processes in its AI factories for smart manufacturing, smart electric vehicles and smart cities.

The Hon Hai Kaohsiung Super Computing Center is part of a growing global network of advanced supercomputing facilities powered by NVIDIA. This network includes several notable installations across Europe and Asia.

These supercomputers represent a significant leap forward in computational power, putting NVIDIA’s cutting-edge technology to work to advance research and innovation across various scientific disciplines.

Learn more about Hon Hai Tech Day.

No Tricks, Just Games: GeForce NOW Thrills With 22 Games in October

The air is crisp, the pumpkins are waiting to be carved, and GFN Thursday is ready to deliver some gaming thrills.

GeForce NOW is unleashing a monster mash of gaming goodness this October with 22 titles joining the cloud, with five available for members to stream this week. From pulse-pounding action to immersive role-playing games, members’ cloud gaming cauldrons are about to bubble over with excitement. Plus, a new account portal update lets members take a look at their playtime details and history on GeForce NOW.

October Treats in Store

GeForce NOW is offering plenty of treats for members this month, starting with the launch of THRONE AND LIBERTY this week.

THRONE AND LIBERTY on GeForce NOW
Unite the realms across devices.

THRONE AND LIBERTY is a free-to-play massively multiplayer online role-playing game that takes place in the vast open world of Solisium. Scale expansive mountain ranges for new vantage points, scan open skies, traverse sprawling plains and explore a land full of depth and opportunity.

Adapt to survive and thrive through strategic decisions in player vs. player or player vs. environment combat modes while navigating evolving battlefields impacted by weather, time of day and other players. There’s no single path to victory: defeat Kazar and claim the throne while keeping rival guilds at bay.

Look for the following games available to stream in the cloud this week:

  • THRONE AND LIBERTY (New release on Steam, Oct. 1)
  • Sifu (Available on PC Game Pass, Oct. 2)
  • Bear and Breakfast (Free on Epic Games Store, Oct. 3)
  • Monster Jam Showdown (Steam)
  • TerraTech Worlds (Steam)

Here’s what members can expect for the rest of October:

  • Europa (New release on Steam, Oct. 11)
  • Neva (New release on Steam, Oct. 15)
  • MechWarrior 5: Clans (New release on Steam and Xbox, Oct. 16)
  • A Quiet Place: The Road Ahead (New release on Steam, Oct. 17)
  • Worshippers of Cthulhu (New release on Steam, Oct. 21)
  • No More Room in Hell 2 (New release on Steam, Oct. 22)
  • Romancing SaGa 2: Revenge of the Seven (New release on Steam, Oct. 24)
  • Call of Duty: Black Ops 6 (New release on Steam and Battle.net, Oct. 25)
  • Life Is Strange: Double Exposure (New release on Steam and Xbox, available in the Microsoft store, Oct. 29)
  • Artisan TD (Steam) 
  • ASKA (Steam)
  • DUCKSIDE (Steam)
  • Dwarven Realms (Steam)
  • Selaco (Steam)
  • Spirit City: Lofi Sessions (Steam)
  • Starcom: Unknown Space (Steam)
  • Star Trek Timelines (Steam)

Surprises in September

In addition to the 18 games announced last month, 12 more joined the GeForce NOW library:

  • Warhammer 40,000: Space Marine 2 (New release on Steam, Sept. 9)
  • Dead Rising Deluxe Remaster (New release on Steam, Sept. 18)
  • Witchfire (New release on Steam, Sept. 23)
  • Monopoly (New release on Ubisoft Connect, Sept. 26)
  • Dawn of Defiance (Steam)
  • Flintlock: The Siege of Dawn (Xbox, available on PC Game Pass)
  • Fort Solis (Epic Games Store)
  • King Arthur: Legion IX (Steam)
  • The Legend of Heroes: Trails Through Daybreak (Steam)
  • Squirrel With a Gun (Steam)
  • Tyranny – Gold Edition (Xbox, available on Microsoft Store)
  • XIII (Xbox, available on Microsoft Store)

Blacksmith Simulator didn’t make it in September as the game’s launch was moved to next year.

What are you planning to play this weekend? Let us know on X or in the comments below.

How AI and Accelerated Computing Drive Energy Efficiency

AI isn’t just about building smarter machines. It’s about building a greener world.

From optimizing energy use to reducing emissions, artificial intelligence and accelerated computing are helping industries tackle some of the world’s toughest environmental challenges.

As Joshua Parker, senior director of corporate sustainability at NVIDIA, explains on the latest episode of NVIDIA’s AI Podcast, these technologies are powering a new era of energy efficiency.

Can AI Help Reduce Energy Consumption?

Yes. And it’s doing it in ways that might surprise you.

AI systems themselves use energy—sure—but the big story is how AI and accelerated computing are helping other systems save energy.

Take data centers, for instance.

They’re the backbone of AI, housing the powerful systems that crunch the data needed for AI to work.

Globally, data centers account for about 2% of total energy consumption, and AI-specific data centers represent only a tiny fraction of that, Parker explains.

Despite this, AI’s real superpower lies in its ability to optimize.

How? By using accelerated computing platforms that combine GPUs and CPUs.

GPUs (Graphics Processing Units) are designed to handle complex computations quickly and efficiently.

In fact, these systems can be up to 20 times more energy-efficient than traditional CPU-only systems, Parker notes.

That’s not just good for tech companies—it’s good for the environment, too.

What is Accelerated Computing?

At its core, accelerated computing is about doing more with less.

It involves using specialized hardware—like GPUs—to perform tasks faster and with less energy.

This isn’t just theoretical. Over the last eight years, AI systems running on accelerated computing platforms have become 45,000 times more energy-efficient, Parker said.

That’s a staggering leap in performance, driven by improvements in both hardware and software.
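
To put the 45,000x figure in perspective, it can be expressed as a compound annual improvement rate. This is illustrative arithmetic only, assuming steady year-over-year gains, which the podcast does not claim:

```python
import math

# Implied compound annual improvement behind "45,000x in eight years",
# assuming (purely for illustration) a constant yearly factor.
TOTAL_GAIN = 45_000
YEARS = 8

annual_factor = TOTAL_GAIN ** (1 / YEARS)
print(f"{annual_factor:.1f}x per year")  # roughly 3.8x each year
```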

So why does this matter? It matters because, as AI becomes more widespread, the demand for computing power grows.

Accelerated computing helps companies scale their AI operations without consuming massive amounts of energy. This energy efficiency is key to AI’s ability to tackle some of today’s biggest sustainability challenges.

AI in Action: Tackling Climate Change

AI isn’t just saving energy—it’s helping to fight climate change.

For instance, AI-enhanced weather forecasting is becoming more accurate, allowing industries and governments to prepare for climate-related events like hurricanes or floods, Parker explains.

The better we can predict these events, the better we can prepare for them, which means fewer resources wasted and less damage done.

Another key area is the rise of digital twins—virtual models of physical environments.

These AI-powered simulations allow companies to optimize energy consumption in real time, without having to make costly changes in the physical world.

In one case, using a digital twin helped a company achieve a 10% reduction in energy use, Parker said. That may sound small, but scale it across industries and the impact is huge.

AI is also playing a crucial role in developing new materials for renewable energy technologies like solar panels and electric vehicles, accelerating the transition to clean energy.

Can AI Make Data Centers More Sustainable?

Here’s the thing: AI needs data centers to operate, and as AI grows, so does the demand for computing power. But data centers don’t have to be energy hogs.

In fact, they can be part of the sustainability solution.

One major innovation is direct-to-chip liquid cooling. This technology allows data centers to cool their systems much more efficiently than traditional air conditioning methods, which are often energy-intensive.

By cooling directly at the chip level, this method saves energy, helping data centers stay cool without guzzling power, Parker explains.

As AI scales up, the future of data centers will depend on designing for energy efficiency from the ground up. That means integrating renewable energy, using energy storage solutions, and continuing to innovate with cooling technologies.

The goal is to create green data centers that can meet the world’s growing demand for compute power without increasing their carbon footprint, Parker says.

The Role of AI in Building a Sustainable Future

AI is not just a tool for optimizing systems—it’s a driver of sustainable innovation. From improving the efficiency of energy grids to enhancing supply chain logistics, AI is leading the charge in reducing waste and emissions.

Let’s look at energy grids. AI can monitor and adjust energy distribution in real time, ensuring that resources are allocated where they’re needed most, reducing waste.

This is particularly important as the world moves toward renewable energy, which can be less predictable than traditional sources like coal or natural gas, Parker said.

AI is also helping industries reduce their carbon footprints. By optimizing routes and predicting demand more accurately, AI can cut down on fuel use and emissions in logistics and transportation sectors.

Looking to the future, AI’s role in promoting sustainability is only going to grow.

As technologies become more energy-efficient and AI applications expand, we can expect AI to play a crucial role in helping industries meet their sustainability goals, Parker said.

It’s not just about making AI greener—it’s about using AI to make the world greener.

AI and accelerated computing are reshaping how we think about energy and sustainability.

With their ability to optimize processes, reduce energy waste, and drive innovations in clean technology, these technologies are essential tools for creating a sustainable future.

As Parker explains on NVIDIA’s AI Podcast, AI’s potential to save energy and combat climate change is vast—and we’re only just beginning to tap into it.

As AI continues to revolutionize industries and drive sustainability, there’s no better time to dive deeper into its transformative potential. If you’re eager to explore how AI and accelerated computing are shaping the future of energy efficiency and climate solutions, join us at the NVIDIA AI Summit.

📅 Event Date: October 9, 2024
🔗 Register here and gain exclusive insights into the innovations that are powering a sustainable world.

Don’t miss your chance to learn from the leading minds in AI and sustainability. Let’s create a greener future together.

Brave New World: Leo AI and Ollama Bring RTX-Accelerated Local LLMs to Brave Browser Users

Editor’s note: This post is part of the AI Decoded series, which demystifies AI by making the technology more accessible, and showcases new hardware, software, tools and accelerations for GeForce RTX PC and NVIDIA RTX workstation users.

From games and content creation apps to software development and productivity tools, AI is increasingly being integrated into applications to enhance user experiences and boost efficiency.

Those efficiency boosts extend to everyday tasks, like web browsing. Brave, a privacy-focused web browser, recently launched a smart AI assistant called Leo AI that, in addition to providing search results, helps users summarize articles and videos, surface insights from documents, answer questions and more.

The technology behind Brave and other AI-powered tools is a combination of hardware, libraries and ecosystem software that’s optimized for the unique needs of AI.

Why Software Matters

NVIDIA GPUs power the world’s AI, whether running in the data center or on a local PC. They contain Tensor Cores, which are specifically designed to accelerate AI applications like Leo AI through massively parallel number crunching — rapidly processing the huge number of calculations needed for AI simultaneously, rather than doing them one at a time.

But great hardware only matters if applications can make efficient use of it. The software running on top of GPUs is just as critical for delivering the fastest, most responsive AI experience.

The first layer is the AI inference library, which acts like a translator that takes requests for common AI tasks and converts them to specific instructions for the hardware to run. Popular inference libraries include NVIDIA TensorRT, Microsoft’s DirectML and the one used by Brave and Leo AI via Ollama, called llama.cpp.

Llama.cpp is an open-source library and framework. Through CUDA — the NVIDIA software application programming interface that enables developers to optimize for GeForce RTX and NVIDIA RTX GPUs — it provides Tensor Core acceleration for hundreds of models, including popular large language models (LLMs) like Gemma, Llama 3, Mistral and Phi.

On top of the inference library, applications often use a local inference server to simplify integration. The inference server handles tasks like downloading and configuring specific AI models so that the application doesn’t have to.

Ollama is an open-source project that sits on top of llama.cpp and provides access to the library’s features, supporting an ecosystem of applications that deliver local AI capabilities.

NVIDIA’s focus on optimization spans the entire technology stack — from hardware to system software to the inference libraries and tools that enable applications to deliver faster, more responsive AI experiences on RTX.

Local vs. Cloud

Brave’s Leo AI can run in the cloud or locally on a PC through Ollama.

There are many benefits to processing inference using a local model. By not sending prompts to an outside server for processing, the experience is private and always available. For instance, Brave users can get help with their finances or medical questions without sending anything to the cloud. Running locally also eliminates the need to pay for unrestricted cloud access. With Ollama, users can take advantage of a wider variety of open-source models than most hosted services, which often support only one or two varieties of the same AI model.

Users can also interact with models that have different specializations, such as bilingual models, compact-sized models, code generation models and more.

RTX enables a fast, responsive experience when running AI locally. Using the Llama 3 8B model with llama.cpp, users can expect responses up to 149 tokens per second — or approximately 110 words per second. When using Brave with Leo AI and Ollama, this means snappier responses to questions, requests for content summaries and more.

NVIDIA internal throughput performance measurements on NVIDIA GeForce RTX GPUs, featuring a Llama 3 8B model with an input sequence length of 100 tokens, generating 100 tokens.
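
The ~110 words-per-second figure follows from the token throughput under a common rule of thumb of roughly 0.75 words per English token — an assumption for illustration, not a ratio stated in the post:

```python
# Back-of-envelope conversion from token throughput to word throughput.
# The 0.75 words-per-token ratio is a common rule of thumb for English
# text, not a figure from the measurements cited above.
tokens_per_second = 149
words_per_token = 0.75

words_per_second = tokens_per_second * words_per_token
print(round(words_per_second))  # ~112, in line with the ~110 words/s cited
```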

Get Started With Brave With Leo AI and Ollama

Installing Ollama is easy: download the installer from the project’s website, and Ollama will run in the background. From a command prompt, users can download and install a wide variety of supported models, then interact with the local model from the command line.
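
The command-line flow described above looks roughly like the following sketch. The model name `llama3` is an illustrative choice, not one prescribed here; any model from the Ollama library works:

```shell
# Pull a model and chat with it from the command line.
# "llama3" is an example model name; substitute any Ollama-supported model.
MODEL=llama3

if command -v ollama >/dev/null 2>&1; then
    ollama pull "$MODEL"    # download the model weights once
    ollama run "$MODEL" "Summarize the benefits of local inference."
else
    echo "ollama not found; install it from the project's website first"
fi
```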

For simple instructions on how to add local LLM support via Ollama, read Brave’s blog. Once configured to point to Ollama, Leo AI will use the locally hosted LLM for prompts and queries. Users can also switch between cloud and local models at any time.

Brave with Leo AI running on Ollama and accelerated by RTX is a great way to get more out of your browsing experience. You can even summarize and ask questions about AI Decoded blogs!

Developers can learn more about how to use Ollama and llama.cpp in the NVIDIA Technical Blog.

Generative AI is transforming gaming, videoconferencing and interactive experiences of all kinds. Make sense of what’s new and what’s next by subscribing to the AI Decoded newsletter.

NVIDIA AI Summit DC: Industry Leaders Gather to Showcase AI’s Real-World Impact

Washington, D.C., is where possibility has always met policy, and AI presents unparalleled opportunities for tackling global challenges.

NVIDIA’s AI Summit in Washington, set for October 7-9, will gather industry leaders to explore how AI addresses some of society’s most significant challenges.

Held at the Ronald Reagan Building and JW Marriott in the heart of the nation’s capital, the event will focus on the potential of AI to drive breakthroughs in healthcare, cybersecurity, manufacturing and more.

Attendees will hear from industry leaders in more than 60 sessions, live demos and hands-on workshops covering such topics as generative AI, remote sensing, cybersecurity, robotics and industrial digitalization.

Key Speakers and Sessions

Throughout the conference, speakers will touch on sustainability, economic development and AI for good.

A highlight of the event is the special address by Bob Pette, vice president of enterprise platforms at NVIDIA, on October 8.

Pette will explain how NVIDIA’s accelerated computing platform enables advancements in sensor processing, autonomous systems, and digital twins. These AI applications offer wide-reaching benefits across industries.

Following Pette’s keynote, Greg Estes, vice president of corporate marketing and developer programs at NVIDIA, will discuss how the company’s AI platform is empowering millions of developers worldwide.

Estes will provide insights into NVIDIA’s workforce development programs, which are designed to prepare the next generation of AI talent through hands-on training and certifications.

He’ll spotlight NVIDIA’s extensive training initiatives, including those offered at the AI Summit and throughout the year via the NVIDIA Deep Learning Institute, emphasizing how these programs are equipping individuals with the critical skills needed in the AI-driven economy.

Estes will also share examples of successful collaborations with federal and state governments, as well as educational institutions, that are helping to expand AI education and workforce development efforts.

In addition, Estes will highlight opportunities for organizations to partner with NVIDIA in broadening AI training and reskilling initiatives, ensuring that more professionals can contribute to and benefit from the rapid advancements in AI technology.

Other notable speakers include Lisa Einstein, chief AI scientist at the Cybersecurity and Infrastructure Security Agency (CISA), who will offer an executive perspective in her session, “Navigating the Future of Cyber Operations with AI.”

This session will provide critical insights into how AI is transforming the landscape of cyber operations and securing national infrastructure.

Additionally, Sheri Bachstein, CEO of The Weather Company, will focus on how AI-driven tools are addressing environmental challenges like climate monitoring, while Helena Fu, director at the U.S. Department of Energy, will speak to the role of AI in bolstering national security and advancing sustainable technologies.

Breakthroughs and Demos

With more than 60 sessions planned, the summit will explore critical topics such as generative AI, sustainable computing and AI policy.

Key sessions include Kari Briski, vice president of generative AI software product management at NVIDIA, discussing the impact of NVIDIA’s generative AI platform across industries, and Rev Lebaredian, vice president of Omniverse and simulation technology, covering the future of physical AI, robotics and autonomy.

Renee Wegrzyn, director of ARPA-H, and Rory Kelleher, who leads global business development for healthcare and life sciences at NVIDIA, will delve into AI-enabled healthcare, while Tanya Das, from the Bipartisan Policy Center, will examine how AI can drive scientific discovery, economic growth and national security.

Live demos will showcase groundbreaking innovations such as NVIDIA’s Earth-2, a climate forecasting tool, alongside advancements in quantum computing and robotics. A panel of NVIDIA experts, including Nikki Pope and Leon Derczynski, will address the tools ensuring safe and responsible AI deployment.

Hands-on technical workshops will offer attendees opportunities to earn certifications in data science, generative AI and other essential skills for the future workforce.

These sessions will provide participants with the tools needed to help Americans thrive in an AI-driven economy, enhancing productivity and creating new career opportunities.

Networking and Industry Partnerships

The summit will feature over 95 sponsors, including Microsoft, Dell and Lockheed Martin, showcasing how AI is transforming industries.

Attendees will be able to engage with these partners in the expo hall and explore how AI solutions are being implemented to drive positive change in the public and private sectors.

Whether attending in person or virtually, the NVIDIA AI Summit will provide insights into how AI is contributing to solutions for today’s most significant challenges.

Bon Voyage: NIO Unveils ONVO L60 Smart Electric SUV, Built on NVIDIA DRIVE Orin

NIO’s smart EV brand, ONVO, has unveiled the L60 flagship mid-size family SUV, built on the NVIDIA DRIVE Orin system-on-a-chip.

Earlier this year, the automaker introduced the ONVO brand — which stands for On Voyage — to reinforce its commitment to bringing safer, smarter, more enjoyable and more affordable mobility solutions to the mainstream family market.

NVIDIA DRIVE Orin serves as the AI brain of ONVO’s smart-driving system — known as OSD — and delivers up to 254 trillion operations per second of high-performance compute. This allows for diverse and redundant processing of sensor data from the L60’s vision-based sensor suite of high-definition cameras (with maximum forward detection of 687 meters) and 4D radar (with maximum detection range of 370 meters).

The automotive-grade NVIDIA DRIVE Orin runs NVIDIA DriveOS, an operating system for safe, AI-defined vehicles, and is widely used by leading global automakers, including in NIO’s ET7 sedan and its ET5 and ES7 models.

NVIDIA DRIVE Orin enables highly automated driver assistance and autonomous driving systems, along with other features that can be updated via over-the-air software updates.

Bold Design Built to Go the Distance

The ONVO L60 embodies NIO’s commitment to innovative design and user-centric features, blending sleek style with cutting-edge technology.

The L60 was developed to elevate driving experiences by incorporating six critical features: comprehensive safety, spacious comfort, smart cabins with immersive digital experiences, impressive mileage range with convenient recharging, superior ride and handling, and advanced assisted driving capabilities.

The L60 base model, priced at 149,900 yuan (approximately $21,000), comes without a battery, allowing users to opt for NIO’s battery-as-a-service option, which is used by 70% of NIO customers. ONVO vehicles are compatible with NIO’s battery swap network, which includes more than 2,500 Power Swap Stations and is expected to continue expanding throughout China.

The launch of the ONVO L60 marks the latest in NIO and NVIDIA’s decade of collaboration. With the integration of NVIDIA DRIVE, the ONVO L60 is poised to deliver an advanced driving experience at an affordable price.

Read More

A Whole New World: ‘GreedFall II: The Dying World’ Joins GeForce NOW

Whether looking for a time-traveling adventure, strategic roleplay or epic action, anyone can find something to play on GeForce NOW, with over 2,000 games in the cloud.

The GeForce NOW library continues to grow with seven titles arriving this week, including the role-playing game GreedFall II: The Dying World from developer Spiders and publisher Nacon.

Plus, be sure to claim the new in-game reward for Guild Wars 2 for extra style points.

GeForce NOW is improving the experience for members using Windows on Arm laptops. Support for these devices is currently in beta, and improvements arriving in the GeForce NOW 2.0.67 app update, rolling out this week, will bring streaming at up to 4K resolution, 120 frames per second and high dynamic range to Arm-based laptops.

Greed Is Good

Greedfall II on GeForce NOW
Greed falls, frame rates rise in the cloud.

GreedFall II: The Dying World, the sequel to the acclaimed GreedFall, transports players to a captivating world set three years before the events of the original game. It features a revamped combat system, offering players enhanced control over Companions, and introduces a tactical pause feature during live battles for strategic planning. In this adventure, players step into the shoes of a native of the magical archipelago, uprooted from their homeland and thrust into the complex political landscape of the Old Continent. GreedFall II delivers an immersive experience filled with alliances, schemes and intense battles as players navigate the treacherous waters of colonial conflict and supernatural forces.

Members can shape the destiny of the Old Continent all from the cloud. Ultimate and Priority members enjoy longer gaming sessions and higher-resolution gameplay than free members. Upgrade today to get immersed in the fight for freedom.

Adventure in Style

The Guild Wars 2: Janthir Wilds expansion is here, bringing new adventures and challenges to explore in the world of Tyria. To celebrate this release, GeForce NOW is offering a special member reward: a unique style bundle to enliven members’ in-game experiences.

Guild Wars 2 reward on GeForce NOW
So fancy.

Transform characters’ hairstyle, horns and facial hair, customize armor and tailor a wardrobe for epic quests. The reward allows players to stand out as a true champion of Tyria while exploring the new lands of Janthir.

Members enrolled in the GeForce NOW rewards program can check their email for instructions on how to claim the reward. Ultimate and Priority members can redeem their style packages today, and free members can access the reward beginning on Friday, Sept. 27. Don’t miss out — the offer is available through Saturday, Oct. 26, on a first-come, first-served basis.

Something for Everyone

Remnant II DLC on GeForce NOW
The apocalypse never looked so good.

This week, Arc Games released The Dark Horizon, the newest and final downloadable content (DLC) for the hit survival action shooter Remnant II, along with a free update that brings a brand-new game mode called Boss Rush. In the DLC, players return to N’Erud and uncover a mysterious place preserved in time, where alien farmlands are tended by robots for inhabitants who have long since perished. But time corrupts all, and robotic creations threaten at every turn. Stream the game instantly on GeForce NOW without waiting for downloads or updates.

Members can look for the following games available to stream in the cloud this week:

  • Witchfire (New release on Steam, Sept. 23)
  • Tiny Glade (New release on Steam, Sept. 23)
  • Disney Epic Mickey: Rebrushed (New release on Steam, Sept. 24)
  • GreedFall II: The Dying World (New release on Steam, Sept. 24)
  • Breachway (New release on Steam, Sept. 26)
  • Mechabellum (New release on Steam, Sept. 26)
  • Monopoly (New release on Ubisoft Connect, Sept. 26)

What are you planning to play this weekend? Let us know on X or in the comments below.

Read More

Decoding How AI Can Accelerate Data Science Workflows

Editor’s note: This post is part of the AI Decoded series, which demystifies AI by making the technology more accessible, and showcases new hardware, software, tools and accelerations for RTX workstation and PC users.

Across industries, AI is driving innovation and enabling efficiencies — but to unlock its full potential, the technology must be trained on vast amounts of high-quality data.

Data scientists play a key role in preparing this data, especially in domain-specific fields where specialized, often proprietary data is essential to enhancing AI capabilities.

To help data scientists meet increasing workload demands, NVIDIA announced that RAPIDS cuDF, a library that allows users to more easily work with data, accelerates the pandas software library with zero code changes. Pandas is a flexible, powerful and popular data analysis and manipulation library for the Python programming language. With cuDF, data scientists can now use their preferred code base without compromising on data processing speed.

NVIDIA RTX AI hardware and technologies can also deliver data processing speedups. They include powerful GPUs that deliver the computational performance necessary to quickly and efficiently accelerate AI at every level — from data science workflows to model training and customization on PCs and workstations.

The Data Science Bottleneck

The most common data format is tabular data, which is organized in rows and columns. Smaller datasets can be managed with spreadsheet tools like Excel; however, datasets and modeling pipelines with tens of millions of rows typically rely on dataframe libraries in programming languages like Python.

Python is a popular choice for data analysis, primarily because of the pandas library, which features an easy-to-use application programming interface (API). However, as dataset sizes grow, pandas struggles with processing speed and efficiency on CPU-only systems. The library also notoriously struggles with text-heavy datasets, an important data type for large language models.

When data requirements outgrow pandas’ capabilities, data scientists are faced with a dilemma: endure slow processing timelines or take the complex and costly step of switching to more efficient but less user-friendly tools.

Accelerating Preprocessing Pipelines With RAPIDS cuDF 

RAPIDS cuDF speeds the popular pandas library up to 100x on RTX-powered AI PCs and workstations.

RAPIDS is an open-source suite of GPU-accelerated Python libraries designed to improve data science and analytics pipelines. cuDF is a GPU DataFrame library that provides a pandas-like API for loading, filtering and manipulating data.

Using cuDF’s “pandas accelerator mode,” data scientists can run their existing pandas code on GPUs to take advantage of powerful parallel processing, with the assurance that the code will switch to CPUs when necessary. This interoperability delivers advanced, reliable performance.

The latest release of cuDF supports larger datasets and billions of rows of tabular text data. This allows data scientists to use pandas code to preprocess data for generative AI use cases.
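As a minimal sketch of what "zero code changes" means in practice: the script below is ordinary pandas code exercising two common dataframe operations, groupby and join. Per the RAPIDS documentation, the accelerator is enabled without editing this code, by loading `%load_ext cudf.pandas` in a Jupyter notebook or launching the script with `python -m cudf.pandas script.py` on a supported GPU; the dataset and column names here are illustrative.

```python
import pandas as pd

# Ordinary pandas code; with RAPIDS cuDF's pandas accelerator mode enabled
# (e.g. "%load_ext cudf.pandas" in Jupyter, or "python -m cudf.pandas script.py"),
# these same lines run on the GPU and fall back to the CPU for any
# unsupported operation.
df = pd.DataFrame({
    "station": ["A", "A", "B", "B"],
    "riders": [120, 80, 200, 100],
})

# groupby: total riders per station.
totals = df.groupby("station", as_index=False)["riders"].sum()

# join: attach a name to each station code.
names = pd.DataFrame({"station": ["A", "B"], "name": ["Oakland", "Downtown"]})
joined = totals.merge(names, on="station")
print(joined)
```

Because the API surface is unchanged, the decision to accelerate becomes an environment choice rather than a code-migration project.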

Accelerating Data Science on NVIDIA RTX-Powered AI Workstations and PCs

According to a recent study, 57% of data scientists use local resources such as PCs, desktops or workstations for data science.

Data scientists can achieve significant speedups starting with the NVIDIA GeForce RTX 4090 GPU. As datasets grow and processing becomes more memory-intensive, they can use cuDF to deliver up to 100x better performance with NVIDIA RTX 6000 Ada Generation GPUs in workstations, compared with traditional CPU-based solutions.

A chart showing that cuDF.pandas takes single-digit seconds, compared with multiple minutes for traditional pandas, to run the same operation.
Two common data science operations — “join” and “groupby” — are on the y-axis, while the x-axis shows the time it took to run each operation.

Data scientists can easily get started with RAPIDS cuDF on NVIDIA AI Workbench. This free developer environment manager powered by containers enables data scientists and developers to create, collaborate and migrate AI and data science workloads across GPU systems. Users can get started with several example projects available on the NVIDIA GitHub repository, such as the cuDF AI Workbench project.

cuDF is also available by default on HP AI Studio, a centralized data science platform designed to help AI developers seamlessly replicate their development environment from workstations to the cloud. This allows them to set up, develop and collaborate on projects without managing multiple environments.

The benefits of cuDF on RTX-powered AI PCs and workstations extend beyond raw performance speedups. The library also:

  • Saves time and money with fixed-cost local development on powerful GPUs that replicates seamlessly to on-premises servers or cloud instances.
  • Enables faster data processing for quicker iterations, allowing data scientists to experiment, refine and derive insights from datasets at interactive speeds.
  • Delivers more impactful data processing for better model outcomes further down the pipeline.

Learn more about RAPIDS cuDF.

A New Era of Data Science

As AI and data science continue to evolve, the ability to rapidly process and analyze massive datasets will become a key differentiator to enable breakthroughs across industries. Whether for developing sophisticated machine learning models, conducting complex statistical analyses or exploring generative AI, RAPIDS cuDF provides the foundation for next-generation data processing.

NVIDIA is expanding that foundation by adding support for the most popular dataframe tools, including Polars, one of the fastest-growing Python libraries, which significantly accelerates data processing compared with other CPU-only tools out of the box.

Polars announced this month the open beta of the Polars GPU Engine, powered by RAPIDS cuDF. Polars users can now boost the performance of the already lightning-fast dataframe library by up to 13x.

Endless Possibilities for Tomorrow’s Engineers With RTX AI

NVIDIA GPUs — whether running in university data centers, GeForce RTX laptops or NVIDIA RTX workstations — are accelerating studies. Students in data science fields and beyond are enhancing their learning experience and gaining hands-on experience with hardware used widely in real-world applications.

Learn more about how NVIDIA RTX PCs and workstations help students level up their studies with AI-powered tools.

Generative AI is transforming gaming, videoconferencing and interactive experiences of all kinds. Make sense of what’s new and what’s next by subscribing to the AI Decoded newsletter.

Read More

High-Speed AI: Hitachi Rail Advances Real-Time Railway Analysis Using NVIDIA Technology

Hitachi Rail, a global transportation company powering railway systems in over 50 countries, is integrating NVIDIA AI technology to lower maintenance costs for rail operators, reduce train idling time and improve transit reliability for passengers.

The company is incorporating NVIDIA IGX — an industrial-grade, enterprise-level platform that delivers high-bandwidth sensor processing, powerful AI compute, functional safety capabilities and enterprise security — into its new HMAX platform to process sensor and camera data in real time.

By removing the lag time between data collection and analysis, the HMAX platform will enable Hitachi Rail clients to more quickly detect train tracks that need repair, monitor the degradation of overhead power lines and assess the health of trains and signaling equipment.

Hitachi Rail estimates that proactive maintenance costs around one-seventh as much as emergency repairs performed after infrastructure fails unexpectedly. Its existing AI monitoring systems are already reducing service delays by up to 20% and train maintenance costs by up to 15% — and are cutting energy consumption at train depots, decreasing fuel costs by up to 40%.

With real-time analysis using NVIDIA IGX and the NVIDIA Holoscan platform for sensor processing, the company aims to further increase these savings.

“Using previous digital monitoring systems, it would take a few days to process the data and discover issues that need attention,” said Koji Agatsuma, executive director and chief technology officer of rail vehicles at Hitachi Rail. “If we can instead conduct real-time prediction using NVIDIA technology, that enables us to avoid service disruptions and significantly improve safety, reliability and operating costs.”

NVIDIA IGX Powers Real-Time AI Engine

Building on its existing collection of HMAX applications — which are currently running on data from 8,000 train cars on 2,000 trains — Hitachi Rail has used NVIDIA IGX and the NVIDIA AI Enterprise software platform to create new accelerated AI applications to help operators monitor train fleets and infrastructure. NVIDIA AI Enterprise offers tools, pretrained models and application frameworks to streamline the development and deployment of production-grade AI applications.

These applications, available soon through the HMAX platform, can be used by the company’s international customer base to process huge quantities of data streaming from sensors onboard trains, taken from existing systems or imported from third-party software already in use by the customer.

In the U.K., for example, each Hitachi Rail train has sensors that report nearly 50,000 data points as frequently as every fifth of a second. AI infrastructure that keeps pace with this data flow can send train operators timely alerts when a component of a train or rail line needs maintenance. The AI insights can also be accessed through a chatbot interface, helping operators easily identify trends and opportunities to optimize maintenance schedules and more.
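The scale of that data flow is worth a back-of-envelope calculation. Taking the figures from the post (nearly 50,000 data points reported as often as every fifth of a second per train) and assuming, purely for illustration, a uniform reporting rate:

```python
# Throughput estimate for one U.K. train, using the figures cited in the
# post; a uniform reporting rate is an assumption made here for illustration.
points_per_report = 50_000
report_interval_s = 0.2  # "every fifth of a second"

points_per_second = points_per_report / report_interval_s
points_per_day = points_per_second * 86_400  # seconds in a day

print(f"{points_per_second:,.0f} points per second per train")
print(f"{points_per_day:,.0f} points per day per train")
```

At roughly 250,000 points per second per train, even a modest fleet generates tens of billions of data points daily, which is why edge processing that filters data before it reaches control centers matters.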

“If a potential issue isn’t identified and fixed promptly, it can result in a service disruption that causes significant economic loss for our customers and impacts the passengers who rely on these transit lines,” Agatsuma said. “NVIDIA AI infrastructure has enabled us to get immediate alerts on thousands of miles of railway for the first time, which we anticipate will reduce delays and disruptions to passenger travel.”

Driving Benefits Down the Track

The opportunities go beyond monitoring trains and tracks.

By mounting cameras atop trains, Hitachi Rail can monitor power lines overhead to identify degrading electric cables and help prevent disruptive failures. Traditionally, it takes up to 10 days to process one day’s worth of video data collected by the train. With NVIDIA-accelerated sensor processing, data can be processed in real time at the edge, sending only relevant information back to operational control centers for analysis and action.

Learn more about the NVIDIA IGX platform. 

Main image courtesy of Hitachi Rail.

Read More