Researchers Poised for Advances With NVIDIA CUDA Quantum

Michael Kuehn and Davide Vodola are taking to new heights work that’s pioneering quantum computing for the world’s largest chemical company.

The BASF researchers are demonstrating how a quantum algorithm can see what no traditional simulation can — key attributes of NTA, a compound with applications that include removing toxic metals like iron from a city’s wastewater.

The quantum computing team at BASF simulated on GPUs how the equivalent of 24 qubits — the processing engines of a quantum computer — can tackle the challenge.

Many corporate R&D centers would consider that a major achievement, but they pressed on, recently running their first 60-qubit simulations on NVIDIA’s Eos H100 Supercomputer.
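
For a sense of why a 60-qubit simulation is such a leap: a brute-force statevector simulator stores 2^n complex amplitudes, so memory demand doubles with every added qubit. The following is a back-of-the-envelope sketch, assuming 16-byte complex128 amplitudes; large-scale simulators typically rely on distributed memory or approximate methods rather than one dense vector.

```python
def statevector_bytes(num_qubits: int, bytes_per_amplitude: int = 16) -> int:
    """Memory to hold a dense statevector of 2**n complex amplitudes."""
    return (2 ** num_qubits) * bytes_per_amplitude

# Compare the 24-qubit and 60-qubit simulations mentioned above.
for n in (24, 40, 60):
    print(f"{n} qubits -> {statevector_bytes(n) / 2**30:,.2f} GiB")
```

At 24 qubits the dense vector fits in a fraction of a gigabyte; at 60 qubits it would exceed the memory of any single machine, which is why such runs lean on supercomputer-scale resources and smarter algorithms.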

“It’s the largest simulation of a molecule using a quantum algorithm we’ve ever run,” said Kuehn.

Flexible, Friendly Software

BASF is running the simulation on NVIDIA CUDA Quantum, a platform for programming CPUs, GPUs and quantum computers, also known as QPUs.

Vodola described it as “very flexible and user friendly, letting us build up a complex quantum circuit simulation from relatively simple building blocks. Without CUDA Quantum, it would be impossible to run this simulation,” he said.

The work requires a lot of heavy lifting, too, so BASF turned to an NVIDIA DGX Cloud service that uses NVIDIA H100 Tensor Core GPUs.

“We need a lot of computing power, and the NVIDIA platform is significantly faster than CPU-based hardware for this kind of simulation,” said Kuehn.

BASF’s quantum computing initiative, which Kuehn helped launch, started in 2017. In addition to its work in chemistry, the team is developing use cases for quantum computing in machine learning as well as optimizations for logistics and scheduling.

An Expanding CUDA Quantum Community

Other research groups are also advancing science with CUDA Quantum.

At SUNY Stony Brook, researchers are pushing the boundaries of high-energy physics to simulate complex interactions of subatomic particles. Their work promises new discoveries in fundamental physics.

“CUDA Quantum enables us to do quantum simulations that would otherwise be impossible,” said Dmitri Kharzeev, a SUNY professor and scientist at Brookhaven National Lab.

In addition, a research team at Hewlett Packard Labs is using the Perlmutter supercomputer to explore magnetic phase transition in quantum chemistry in one of the largest simulations of its kind. The effort could reveal important and unknown details of physical processes too difficult to model with conventional techniques.

“As quantum computers progress toward useful applications, high-performance classical simulations will be key for prototyping novel quantum algorithms,” said Kirk Bresniker, a chief architect at Hewlett Packard Labs. “Simulating and learning from quantum data are promising avenues toward tapping quantum computing’s potential.”

A Quantum Center for Healthcare

These efforts come as support for CUDA Quantum expands worldwide.

Classiq — an Israeli startup that already has more than 400 universities using its novel approach to writing quantum programs — announced today a new research center at the Tel Aviv Sourasky Medical Center, Israel’s largest teaching hospital.

Created in collaboration with NVIDIA, it will train experts in life science to write quantum applications that could someday help doctors diagnose diseases or accelerate the discovery of new drugs.

Classiq created quantum design software that automates low-level tasks, so developers don’t need to know all the complex details of how a quantum computer works. It’s now being integrated with CUDA Quantum.

Terra Quantum, a quantum services company with headquarters in Germany and Switzerland, is developing hybrid quantum applications for life sciences, energy, chemistry and finance that will run on CUDA Quantum. And IQM in Finland is enabling its superconducting QPU to use CUDA Quantum.

Quantum Loves Grace Hopper

Several companies, including Oxford Quantum Circuits, will use NVIDIA Grace Hopper Superchips to power their hybrid quantum efforts. Based in Reading, England, Oxford Quantum is using Grace Hopper in a hybrid QPU/GPU system programmed by CUDA Quantum.

Quantum Machines announced that the Israeli National Quantum Center will be the first deployment of NVIDIA DGX Quantum, a system using Grace Hopper Superchips. Based in Tel Aviv, the center will tap DGX Quantum to power quantum computers from Quantware, ORCA Computing and more.

In addition, Grace Hopper is being put to work by qBraid, in Chicago, to build a quantum cloud service, and Fermioniq, in Amsterdam, to develop tensor-network algorithms.

The large quantity of shared memory and the memory bandwidth of Grace Hopper make these superchips an excellent fit for memory-hungry quantum simulations.

Get started programming hybrid quantum systems today with the latest release of CUDA Quantum from NGC, NVIDIA’s catalog of accelerated software, or GitHub.

Read More

NVIDIA Grace Hopper Superchip Powers 40+ AI Supercomputers Across Global Research Centers, System Makers, Cloud Providers

Dozens of new supercomputers for scientific computing will soon hop online, powered by NVIDIA’s breakthrough GH200 Grace Hopper Superchip for giant-scale AI and high performance computing.

The NVIDIA GH200 enables scientists and researchers to tackle the world’s most challenging problems by accelerating complex AI and HPC applications running terabytes of data.

At the SC23 supercomputing show, NVIDIA today announced that the superchip is coming to more systems worldwide, including from Dell Technologies, Eviden, Hewlett Packard Enterprise (HPE), Lenovo, QCT and Supermicro.

Bringing together the Arm-based NVIDIA Grace CPU and Hopper GPU architectures using NVIDIA NVLink-C2C interconnect technology, GH200 also serves as the engine behind scientific supercomputing centers across the globe.

Combined, these GH200-powered centers represent some 200 exaflops of AI performance to drive scientific innovation.

HPE Cray Supercomputers Integrate NVIDIA Grace Hopper

At the show in Denver, HPE announced it will offer HPE Cray EX2500 supercomputers with the NVIDIA Grace Hopper Superchip. The integrated solution will feature quad GH200 processors, scaling up to tens of thousands of Grace Hopper Superchip nodes to provide organizations with unmatched supercomputing agility and quicker AI training. This configuration will also be part of a supercomputing solution for generative AI that HPE introduced today.

“Organizations are rapidly adopting generative AI to accelerate business transformations and technological breakthroughs,” said Justin Hotard, executive vice president and general manager of HPC, AI and Labs at HPE. “Working with NVIDIA, we’re excited to deliver a full supercomputing solution for generative AI, powered by technologies like Grace Hopper, which will make it easy for customers to accelerate large-scale AI model training and tuning at new levels of efficiency.”

Next-Generation AI Supercomputing Centers

A vast array of the world’s supercomputing centers are powered by NVIDIA Grace Hopper systems. Several top centers announced at SC23 that they’re now integrating GH200 systems for their supercomputers.

Germany’s Jülich Supercomputing Centre will use GH200 superchips in JUPITER, set to become the first exascale supercomputer in Europe. The supercomputer will help tackle urgent scientific challenges, such as mitigating climate change, combating pandemics and bolstering sustainable energy production.

Japan’s Joint Center for Advanced High Performance Computing — established between the Center for Computational Sciences at the University of Tsukuba and the Information Technology Center at the University of Tokyo — promotes advanced computational sciences integrated with data analytics, AI and machine learning across academia and industry. Its next-generation supercomputer will be powered by NVIDIA Grace Hopper.

The Texas Advanced Computing Center, based in Austin, Texas, designs and operates some of the world’s most powerful computing resources. The center will power its Vista supercomputer with NVIDIA GH200 for low power and high-bandwidth memory to deliver more computation while enabling bigger models to run with greater efficiency.

The National Center for Supercomputing Applications at the University of Illinois Urbana-Champaign will tap NVIDIA Grace Hopper superchips to power DeltaAI, an advanced computing and data resource set to triple NCSA’s AI-focused computing capacity.

And the University of Bristol recently received funding from the UK government to build Isambard-AI, set to be the country’s most powerful supercomputer, which will enable AI-driven breakthroughs in robotics, big data, climate research and drug discovery. The new system, being built by HPE, will be equipped with over 5,000 NVIDIA GH200 Grace Hopper Superchips, providing 21 exaflops of AI supercomputing power, or 21 quintillion AI calculations per second.
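
As a quick sanity check on those headline figures (rough arithmetic from the numbers above, not an official per-chip spec): 21 exaflops spread across roughly 5,000 superchips implies on the order of 4 AI petaflops each.

```python
# 21 exaflops = 21 quintillion (21e18) AI operations per second.
total_ai_flops = 21e18
num_superchips = 5_000  # Isambard-AI's "over 5,000" GH200s

per_chip_pflops = total_ai_flops / num_superchips / 1e15
print(f"~{per_chip_pflops:.1f} AI petaflops per GH200")
```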

These systems join previously announced next-generation Grace Hopper systems from the Swiss National Supercomputing Centre, Los Alamos National Laboratory and SoftBank Corp.

GH200 Shipping Globally and Available in Early Access from CSPs

GH200 is available in early access from select cloud service providers such as Lambda and Vultr. Oracle Cloud Infrastructure today announced plans to offer GH200 instances, while CoreWeave detailed plans for early availability of its GH200 instances starting in Q1 2024.

Other system manufacturers such as ASRock Rack, ASUS, GIGABYTE and Ingrasys will begin shipping servers with the superchips by the end of the year.

NVIDIA Grace Hopper has been adopted in early access for supercomputing initiatives by more than 100 enterprises, organizations and government agencies across the globe, including the NASA Ames Research Center for aeronautics research and global energy company TotalEnergies.

In addition, the GH200 will soon become available through NVIDIA LaunchPad, which provides free access to enterprise NVIDIA hardware and software through an internet browser.

Learn more about Grace Hopper and other supercomputing breakthroughs by joining NVIDIA at SC23.

Read More

Scroll Back in Time: AI Deciphers Ancient Roman Riddles

Thanks to a viral trend sweeping social media, we now know some men think about the Roman Empire every day.

And thanks to Luke Farritor, a 21-year-old computer science undergrad at the University of Nebraska-Lincoln, and like-minded AI enthusiasts, there might soon be a lot more to think about.

Blending a passion for history with machine learning skills, Farritor has triumphed in the Vesuvius Challenge, wielding the power of the NVIDIA GeForce GTX 1070 GPU to bring a snippet of ancient text back from the ashes after almost 2,000 years.

Text Big Thing: Deciphering Rome’s Hidden History

The Herculaneum scrolls are a library of ancient texts that were carbonized and preserved by the eruption of Mount Vesuvius in 79 AD, which buried the cities of Pompeii and Herculaneum under a thick layer of ash and pumice.

The competition, which has piqued the interest of historians and technologists across the globe, seeks to extract readable content from the carbonized remains of the scrolls.

In a significant breakthrough, the word “πορφυρας,” which means “purple dye” or “cloths of purple,” emerged from the ancient texts thanks to the efforts of Farritor.

The Herculaneum scrolls, wound about 100 times around, were sealed by the heat of the eruption of Vesuvius.

His achievement in identifying 10 letters within a small patch of scroll earned him a $40,000 prize.

Close on his heels was Youssef Nader, a biorobotics graduate student, who independently discerned the same word a few months later, meriting a $10,000 prize.

Adding to these notable successes, Casey Handmer, an entrepreneur with a keen eye, secured another $10,000 for his demonstration that significant amounts of ink were waiting to be discovered within the unopened scrolls.

All these discoveries are advancing the work pioneered by W. Brent Seales, chair of the University of Kentucky Computer Science Department, who has dedicated over a decade to developing methods to digitally unfurl and read the delicate Herculaneum scrolls.

Turbocharging these efforts is Nat Friedman, the former CEO of GitHub and the organizer of the Vesuvius Challenge, whose commitment to open-source innovation has fostered a community where such historical breakthroughs are possible.

To become the first to decipher text from the scrolls, Farritor, who served as an intern at SpaceX, harnessed the GeForce GTX 1070 to accelerate his work.

When Rome Meets RAM: Older GPU Helps Uncover Even Older Text

Introduced in 2016, the GTX 1070 is celebrated among gamers, who have long praised the GPU for its balance of performance and affordability.

Instead of gaming, however, Farritor harnessed the parallel processing capabilities of the GPU to accelerate a ResNet deep learning model, processing data at speeds unattainable by traditional computing methods.

Farritor is not the only competitor harnessing NVIDIA GPUs, which have proven indispensable to Vesuvius Challenge participants.

Latin Lingo and Lost Text

Discovered in the 18th century in the Villa of the Papyri, the Herculaneum scrolls have presented a challenge to researchers. Their fragile state has made them nearly impossible to read without causing damage. The advent of advanced imaging and AI technology changed that.

The project has become a passion for Farritor, who finds himself struggling to recall more of the Latin he studied in high school. “And man, like what’s in the scrolls … it’s just the anticipation, you know?” Farritor said.

The next challenge is to unearth passages from the Herculaneum scrolls that are 140 characters long, echoing the brevity of an original Twitter post.

Engaging over 1,500 experts in a collaborative effort, the endeavor is now more heated than ever.

Private donors have upped the ante, offering a $700,000 prize for those who can retrieve four distinct passages of at least 140 characters this year — a testament to the value placed on these ancient texts and the lengths required to reclaim them.

And Farritor’s eager to keep digging, reeling off the names of lost works of Roman and Greek history that he’d love to help uncover.

He reports he’s now thinking about Rome — and what his efforts might help discover — not just every day, but “every hour.” “I think anything that sheds light on that time in human history is gonna be significant,” Farritor said.

Read More

Enter a World of Samurai and Demons: GFN Thursday Brings Capcom’s ‘Onimusha: Warlords’ to the Cloud

Wield the blade and embrace the way of the samurai for some thrilling action — Onimusha: Warlords comes to GeForce NOW this week. Members can experience feudal Japan in this hack-and-slash adventure game in the cloud.

It’s part of an action-packed GFN Thursday, with 16 more games joining the cloud gaming platform’s library.

Forging Destinies

Vengeance is mine.

Capcom’s popular Onimusha: Warlords is newly supported in the cloud this week, just in time for those tuning into the recently released Netflix anime adaptation.

Fight against the evil warlord Nobunaga Oda and his army of demons as samurai Samanosuke Akechi. Explore feudal Japan, wield swords, use ninja techniques and solve puzzles to defeat enemies. The action-adventure hack-and-slash game has been enhanced with improved controls for smoother swordplay mechanics, an updated soundtrack and more.

Ultimate members can stream the game in ultrawide resolution, with gaming sessions of up to eight hours, for riveting samurai action.

Endless Games

Endless Dungeon on GeForce NOW
Monsters, dangers, secrets and treasures, oh my!

Roguelite fans and GeForce NOW members have been enjoying Sega’s Endless Dungeon in the cloud. Recruit a team of shipwrecked heroes, plunge into a long-abandoned space station and protect the crystal against never-ending waves of monsters. Never accept defeat — get reloaded and try, try again.

On top of that, check out the 16 newly supported games joining the GeForce NOW library this week:

  • The Invincible (New release on Steam, Nov. 6)
  • Roboquest (New release on Steam, Nov. 7)
  • Stronghold: Definitive Edition (New release on Steam, Nov. 7)
  • Dungeons 4 (New release on Steam, Xbox and available on PC Game Pass, Nov. 9)
  • Space Trash Scavenger (New release on Steam, Nov. 9)
  • Airport CEO (Steam)
  • Car Mechanic Simulator 2021 (Xbox, available on PC Game Pass)
  • Farming Simulator 19 (Xbox, available on Microsoft Store)
  • GoNNER (Xbox, available on Microsoft Store)
  • GoNNER2 (Xbox, available on Microsoft Store)
  • Jurassic World Evolution 2 (Xbox, available on PC Game Pass)
  • Onimusha: Warlords (Steam)
  • Planet of Lana (Xbox, available on PC Game Pass)
  • Q.U.B.E. 10th Anniversary (Epic Games Store)
  • Trailmakers (Xbox, available on PC Game Pass)
  • Turnip Boy Commits Tax Evasion (Epic Games Store)

What are you planning to play this weekend? Let us know on Twitter or in the comments below.

Read More

Acing the Test: NVIDIA Turbocharges Generative AI Training in MLPerf Benchmarks

NVIDIA’s AI platform raised the bar for AI training and high performance computing in the latest MLPerf industry benchmarks.

Among many new records and milestones, one in generative AI stands out: NVIDIA Eos — an AI supercomputer powered by a whopping 10,752 NVIDIA H100 Tensor Core GPUs and NVIDIA Quantum-2 InfiniBand networking — completed a training benchmark based on a GPT-3 model with 175 billion parameters trained on one billion tokens in just 3.9 minutes.

That’s a nearly 3x gain from 10.9 minutes, the record NVIDIA set when the test was introduced less than six months ago.

NVIDIA H100 training results over time on MLPerf benchmarks

The benchmark uses a portion of the full GPT-3 data set behind the popular ChatGPT service. By extrapolation, Eos could now train the full workload in just eight days, 73x faster than a prior state-of-the-art system using 512 A100 GPUs.

The acceleration in training time reduces costs, saves energy and speeds time-to-market. It’s heavy lifting that makes large language models widely available so every business can adopt them with tools like NVIDIA NeMo, a framework for customizing LLMs.

In a new generative AI test ‌this round, 1,024 NVIDIA Hopper architecture GPUs completed a training benchmark based on the Stable Diffusion text-to-image model in 2.5 minutes, setting a high bar on this new workload.

Because generative AI is the most transformative technology of our time, adopting these two tests reinforces MLPerf’s leadership as the industry standard for measuring AI performance.

System Scaling Soars

The latest results were due in part to the use of the most accelerators ever applied to an MLPerf benchmark. The 10,752 H100 GPUs far surpassed the scaling in AI training in June, when NVIDIA used 3,584 Hopper GPUs.

The 3x scaling in GPU numbers delivered a 2.8x scaling in performance, a 93% efficiency rate thanks in part to software optimizations.
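
That efficiency figure follows directly from the two ratios above; a minimal sketch:

```python
def scaling_efficiency(gpu_ratio: float, perf_ratio: float) -> float:
    """Fraction of ideal (linear) scaling actually achieved."""
    return perf_ratio / gpu_ratio

# Numbers from this round: 3,584 -> 10,752 H100 GPUs, 2.8x the performance.
eff = scaling_efficiency(10_752 / 3_584, 2.8)
print(f"{eff:.0%}")
```

Perfect linear scaling would yield 100%.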

Efficient scaling is a key requirement in generative AI because LLMs are growing by an order of magnitude every year. The latest results show NVIDIA’s ability to meet this unprecedented challenge for even the world’s largest data centers.

Chart of near linear scaling of H100 GPUs on MLPerf training

The achievement is thanks to a full-stack platform of innovations in accelerators, systems and software that both Eos and Microsoft Azure used in the latest round.

Eos and Azure both employed 10,752 H100 GPUs in separate submissions. They achieved within 2% of the same performance, demonstrating the efficiency of NVIDIA AI in data center and public-cloud deployments.

Chart of record Azure scaling in MLPerf training

NVIDIA relies on Eos for a wide array of critical jobs. It helps advance initiatives like NVIDIA DLSS, AI-powered software for state-of-the-art computer graphics, and NVIDIA Research projects like ChipNeMo, generative AI tools that help design next-generation GPUs.

Advances Across Workloads

NVIDIA set several new records in this round in addition to making advances in generative AI.

For example, H100 GPUs were 1.6x faster than in the prior round at training recommender models, which are widely employed to help users find what they’re looking for online. Performance was up 1.8x on RetinaNet, a computer vision model.

These increases came from a combination of advances in software and scaled-up hardware.

NVIDIA was once again the only company to run all MLPerf tests. H100 GPUs demonstrated the fastest performance and the greatest scaling in each of the nine benchmarks.

List of six new NVIDIA records in MLPerf training

Speedups translate to faster time to market, lower costs and energy savings for users training massive LLMs or customizing them with frameworks like NeMo for the specific needs of their business.

Eleven systems makers used the NVIDIA AI platform in their submissions this round, including ASUS, Dell Technologies, Fujitsu, GIGABYTE, Lenovo, QCT and Supermicro.

NVIDIA partners participate in MLPerf because they know it’s a valuable tool for customers evaluating AI platforms and vendors.

HPC Benchmarks Expand

In MLPerf HPC, a separate benchmark for AI-assisted simulations on supercomputers, H100 GPUs delivered up to twice the performance of NVIDIA A100 Tensor Core GPUs in the last HPC round. The results showed up to 16x gains since the first MLPerf HPC round in 2019.

The benchmark included a new test that trains OpenFold, a model that predicts the 3D structure of a protein from its sequence of amino acids. OpenFold can do in minutes vital work for healthcare that used to take researchers weeks or months.

Understanding a protein’s structure is key to finding effective drugs fast because most drugs act on proteins, the cellular machinery that helps control many biological processes.

In the MLPerf HPC test, H100 GPUs trained OpenFold in 7.5 minutes. The OpenFold test is a representative part of the entire AlphaFold training process that two years ago took 11 days using 128 accelerators.

A version of the OpenFold model and the software NVIDIA used to train it will be available soon in NVIDIA BioNeMo, a generative AI platform for drug discovery.

Several partners made submissions on the NVIDIA AI platform in this round. They included Dell Technologies and supercomputing centers at Clemson University, the Texas Advanced Computing Center and — with assistance from Hewlett Packard Enterprise (HPE) — Lawrence Berkeley National Laboratory.

Benchmarks With Broad Backing

Since their inception in May 2018, the MLPerf benchmarks have enjoyed broad backing from both industry and academia. Organizations that support them include Amazon, Arm, Baidu, Google, Harvard, HPE, Intel, Lenovo, Meta, Microsoft, NVIDIA, Stanford University and the University of Toronto.

MLPerf tests are transparent and objective, so users can rely on the results to make informed buying decisions.

All the software NVIDIA used is available from the MLPerf repository, so all developers can get the same world-class results. These software optimizations get continuously folded into containers available on NGC, NVIDIA’s software hub for GPU applications.

Learn more about MLPerf and the details of this round.

Read More

NVIDIA Partners With APEC Economies to Change Lives, Increase Opportunity, Improve Outcomes

When patients in Vietnam enter a medical facility in distress, doctors use NVIDIA technology to get more accurate scans to diagnose their ailments. In Hong Kong, another set of doctors leverages generative AI to discover new cures for patients.

Improving the health and well-being of citizens and strengthening economies and communities are key themes as world leaders soon gather in San Francisco for the 2023 Asia-Pacific Economic Cooperation (APEC) Summit.

When they meet to discuss bold solutions to improve the lives of their citizens and societies, NVIDIA’s AI and accelerated computing initiatives are a crucial enabler.

NVIDIA’s work to improve outcomes for everyday people while tackling future challenges builds on years of deep investment with APEC partners. With a strong presence in countries across the region, including a workforce of thousands and numerous collaborative projects in areas from farming to healthcare to education, NVIDIA is delivering new technologies and workforce training programs to enhance industrial development and advance generative AI research.

Beyond technological advancements, these efforts spur economic growth, create good-paying jobs and improve the health and well-being of people globally.

Research and National Compute Partnerships

NVIDIA has advanced AI research partnerships with several APEC economies. These accelerate scientific breakthroughs in AI and HPC to address national challenges such as healthcare and skills development, and help create more robust local AI ecosystems to protect and advance well-being, prosperity and security. For example:

  • Australia’s national science and research organization, CSIRO, has teamed with NVIDIA to advance Australia’s AI program across climate action, space exploration, quantum computing and AI education.
  • Singapore’s National Supercomputing Centre and Ministry of Education have partnered with NVIDIA to drive sovereign AI capabilities with a priority focus on sectors such as healthcare, climate science and digital twins.
  • Thailand was Southeast Asia’s first country to participate in NVIDIA’s AI Nations initiative, bringing together the Ministry of Education with a consortium of top universities to advance public-private collaborations in urban planning, public health and autonomous vehicles.
  • In Vietnam, NVIDIA is partnering with Viettel, the nation’s largest employer, and Vietnam’s Academy for Science & Technology to upskill workforces, accelerate the introduction of AI services to industry and deploy next-generation 5G services.

Innovation Ecosystems

Startups are at the leading edge of AI innovation, and a robust startup ecosystem is vital to advancing technology within APEC economies.

NVIDIA Inception is a free program to help startups innovate faster. Through it, NVIDIA supports over 5,000 startups across APEC economies, and more than 15,000 globally, by providing cutting-edge technology, connections with venture capitalists and access to the latest technical resources.

In 2023, NVIDIA added nearly 1,000 APEC-area startups to the program. In addition to creating economic opportunities, Inception supports small- and medium-sized enterprises in developing novel solutions to some of society’s biggest challenges. Here’s what some of its members are doing:

  • In Malaysia, Tapway uses AI to reduce congestion and streamline traffic for more than 1 million daily travelers.
  • In New Zealand, Lynker uses geospatial analysis, deep learning and remote sensing for earth observation. Lynker’s technology measures carbon sequestration on farms; detects, monitors and helps restore wetlands; and enables more effective disaster relief.
  • In Thailand, AltoTech Global, an Inception partner, integrates AI software with Internet of Things devices to optimize energy consumption for hotels, buildings, factories and smart cities. AltoTech’s ultimate goal is contributing to the net-zero economy and helping customers achieve their net-zero targets.

Digital Upskilling and Tools for Growth

The NVIDIA Deep Learning Institute (DLI) provides AI training and digital upskilling programs that cultivate innovation and create economic opportunities.

DLI’s training and certification program helps individuals and organizations accelerate skills development and workforce transformation in AI, high performance computing and industrial digitalization.

Hands-on, self-paced and instructor-led courses are created and taught by NVIDIA experts, bringing real-world experience and deep technical know-how to developers and IT professionals.

Through this program, NVIDIA has trained more than 115,000 individuals in APEC economies, including more than 16,000 new trainees this year.

Separately, the NVIDIA Developer Program offers more than 2 million developers in APEC economies access to software development kits, application programming interfaces, pretrained AI models and performance analysis tools to help developers create and innovate. Members receive free hands-on training, access to developer forums and early access to new products and services.

Creating a Better Future for All

As nations work together to address common challenges and improve the lives of their citizens, NVIDIA will continue to leverage its world-class technologies to help create a better world for all.

Read More

Dr Aengus Tran, co-founder of Annalise.ai and Harrison.ai on Using AI as a Spell Check for Health Checks

Clinician-led healthcare AI company Harrison.ai has built an AI system that effectively serves as a “spell checker” for radiologists — flagging critical findings to improve the speed and accuracy of radiology image analysis, reducing misdiagnoses.

In the latest episode of NVIDIA’s AI Podcast, host Noah Kravitz spoke with Harrison.ai CEO and cofounder Aengus Tran about the company’s mission to scale global healthcare capacity with autonomous AI systems.

Harrison.ai’s initial product, annalise.ai, is an AI tool that automates radiology image analysis to enable faster, more accurate diagnoses. It can produce 124-130 different possible diagnoses and flag key findings to aid radiologists in their final diagnosis. Currently, annalise.ai works for chest X-rays and brain CT scans, with more on the way.

While an AI designed for categorizing traffic lights, for example, doesn’t need perfection, medical tools must be highly accurate — any oversight could be fatal. To overcome this challenge, annalise.ai was trained on millions of meticulously annotated images — some were annotated three to five times over before being used for training.

Harrison.ai is also developing Franklin.ai, a sibling AI tool aimed at accelerating and improving the accuracy of histopathology diagnosis — in which a clinician performs a biopsy and inspects the tissue for the presence of cancerous cells. Like annalise.ai, Franklin.ai flags critical findings to help pathologists deliver faster, more accurate diagnoses.

Ethical concerns about AI use are ever-rising, but for Tran the question is less whether it’s ethical to use AI for medical diagnosis and more “actually the converse: Is it ethical to not use AI for medical diagnosis,” especially if “humans using those AI systems simply pick up more misdiagnosis, pick up more cancer and conditions?”

Tran also talked about the future of AI systems, suggesting a dual focus: first improve pre-existing systems, then think of new cutting-edge solutions.

And for those looking to break into careers in AI and healthcare, Tran says that the “first step is to decide upfront what problems you’re willing to spend a huge part of your time solving first, before the AI part,” emphasizing that the “first thing is actually to fall in love with some problem.”

You Might Also Like

Jules Anh Tuan Nguyen Explains How AI Lets Amputee Control Prosthetic Hand, Video Games
A postdoctoral researcher at the University of Minnesota discusses his efforts to allow amputees to control their prosthetic limb — right down to the finger motions — with their minds.

Overjet’s Wardah Inam on Bringing AI to Dentistry
Overjet, a member of NVIDIA Inception, is moving fast to bring AI to dentists’ offices. Dr. Wardah Inam, CEO of the company, discusses using AI to improve patient care.

Immunai CTO and Co-Founder Luis Voloch on Using Deep Learning to Develop New Drugs
Luis Voloch talks about tackling the challenges of the immune system with a machine learning and data science mindset.

Subscribe to the AI Podcast: Now Available on Amazon Music

The AI Podcast is now available through Amazon Music.

In addition, get the AI Podcast through iTunes, Google Podcasts, Google Play, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn.

Make the AI Podcast better. Have a few minutes to spare? Fill out this listener survey.


Digital Artist Steven Tung Shows Off So-fish-ticated Style This Week ‘In the NVIDIA Studio’

Editor’s note: This post is part of our weekly In the NVIDIA Studio series, which celebrates featured artists, offers creative tips and tricks, and demonstrates how NVIDIA Studio technology improves creative workflows. We’re also deep-diving on new GeForce RTX 40 Series GPU features, technologies and resources, and how they dramatically accelerate content creation.

Taiwanese artist Steven Tung creates captivating 2D and 3D digital art that explores sci-fi, minimalism and realism and pushes artistic boundaries.

This week In the NVIDIA Studio, Tung shares the inspiration and creative workflow behind his whimsical animation, The Given Fish.

Professional-grade technology, which was once available only at select special effects studios, is becoming increasingly accessible.

“Visual production capabilities continue to skyrocket, generating a growing demand for better computer hardware among the general public,” Tung said. “The evolving synergy between art and technology can spark endless possibilities for creators.”

Tung uses an MSI MEG Trident X2 desktop, powered by GeForce RTX 4090 graphics, to accelerate his creative workflow.

The MSI MEG Trident X2 desktop, powered by GeForce RTX 4090 graphics.

“The enhanced speed and performance expedites various processes, such as updating material textures in Adobe Substance 3D Painter and rendering in Blender,” said Tung. “The necessary specifications and requirements align, enabling maximum creativity without limitations.”

Exquisite Visuals Made E-fish-ciently

Tung’s 3D animation, The Given Fish, may look simple at first glance — but it’s surprisingly complex.

“GeForce RTX GPUs are indispensable hardware for 3D rendering tasks. Faster speeds bring significant benefits in production efficiency and time saved.” — Steven Tung

In the imagined world behind the animation, the stone fish can be consumed by people: once taken out of the aquarium, the stone fish transforms into a real, living one.

“I have a strong desire to have an aquarium at home, but it’s not practical,” said Tung. “The next best thing is to turn that emotion into art.”

Tung began by creating concept sketches in Adobe Photoshop, where he had access to over 30 GPU-accelerated features that could help modify and adjust his canvas and maximize his efficiency.

Concept art for “The Given Fish.”

Next, Tung jumped from 2D to 3D with ZBrush. He first built a basic model and then refined critical details with custom brushes — adding greater depth and dimension with authentic, hand-sculpted textures.

Advanced sculpting in ZBrush.

He then used the UV unwrapping feature in RizomUV to ensure that his models were properly unwrapped and ready for texture application.

UV unwrapping feature in RizomUV.

Tung imported the models into Adobe Substance 3D Painter, where he meticulously painted textures, blended materials and used the built-in library to achieve lifelike stone textures. RTX-accelerated light and ambient occlusion baking optimized his assets in seconds.

Applying textures in Adobe Substance 3D Painter.

To bring all the elements together, Tung imported the models and materials into Blender. He set up texture channels, assigned texture files and assembled the models so that they would be true to the compositions outlined in the initial sketch.

Achieving realistic stone textures in Adobe Substance 3D Painter.

Next, Tung used Blender Cycles to light and render the scene.

Composition edits in Blender.

Blender Cycles’ RTX-accelerated, AI-powered OptiX ray tracing enabled interactive, photorealistic movement in the viewport and sped up animation work — all powered by his GeForce RTX 4090 GPU-equipped system.

Animation work in Blender.

RTX-accelerated OptiX ray tracing in Blender Cycles enabled the fastest final frame render.

Digital artist Steven Tung.

Check out Tung’s portfolio on Instagram.

Follow NVIDIA Studio on Instagram, Twitter and Facebook. Access tutorials on the Studio YouTube channel and get updates directly in your inbox by subscribing to the Studio newsletter. 


‘Starship for the Mind’: University of Florida Opens Malachowsky Hall, an Epicenter for AI and Data Science

Embodying the convergence of AI and academia, the University of Florida on Friday inaugurated Malachowsky Hall for Data Science & Information Technology.

The sleek, seven-story building is poised to play a pivotal role in UF’s ongoing efforts to harness the transformative power of AI, reaffirming its stature as one of the nation’s leading public universities.

Evoking Apple co-founder Steve Jobs’ iconic description of a personal computer, NVIDIA’s founder and CEO Jensen Huang described Malachowsky Hall — named for NVIDIA co-founder Chris Malachowsky — and the HiPerGator AI supercomputer it hosts as a “starship for knowledge discovery.”

“Steve Jobs called (the PC) ‘the bicycle of the mind,’ a device that propels our thoughts further and faster,” Huang said.

“What Chris Malachowsky has gifted this institution is nothing short of the ‘starship of the mind’ — a vehicle that promises to take our intellect to uncharted territories,” Huang said.

The inauguration of the 260,000-square-foot structure marks a milestone in the partnership between UF alum Malachowsky, NVIDIA and the state of Florida — a collaboration that has propelled UF to the forefront of AI innovation.

Malachowsky and NVIDIA both made major contributions toward its construction, bolstered by a $110 million investment from the state of Florida.

University of Florida President Ben Sasse and NVIDIA CEO Jensen Huang speak at the opening of Malachowsky Hall.

Following the opening, Huang and UF’s new president, Ben Sasse, met to discuss the impact of AI and data science across UF and beyond for students just starting their careers.

Sasse underscored the importance of adaptability in a rapidly changing world, telling the audience: “work in lots and lots of different organizations … because lifelong work in any one, not just firm, but any one industry is going to end in our lives. You’re ultimately going to have to figure out how to reinvent yourselves at 30, 35, 40 and 45.”

Huang offered students very different advice, recalling how he met his wife, Lori, who was in the audience, as an undergraduate. “Have a good pickup line … do you want to know the pickup line?” Huang asked, pausing a beat. “You want to see my homework?”

The spirit of Sasse and Huang’s adaptable approach to career and personal development is embodied in Malachowsky Hall, designed to bring together people from academia and industry, research and government.

Packed with innovative collaboration spaces and labs, the hall features a spacious 400-seat auditorium, dedicated high-performance computing study spaces and a rooftop terrace that unveils panoramic campus vistas.

Sustainability is woven into its design, highlighted by energy-efficient systems and rainwater harvesting facilities.

Malachowsky Hall will serve as a conduit to bring the on-campus advances in AI to Florida’s thriving economy, which continues to outpace the nation in jobs and GDP growth.

Together, NVIDIA founder and UF alumnus Chris Malachowsky and NVIDIA donated $50 million toward the University of Florida’s HiPerGator AI supercomputer.

UF’s efforts to bring AI and academia together, catalyzed by support from Malachowsky and NVIDIA, go far beyond Malachowsky Hall.

In 2020, UF announced that Malachowsky and NVIDIA together donated $50 million toward HiPerGator, one of the most powerful AI supercomputers in the country.

With additional state support, UF recently added more than 110 AI faculty members to the 300 already engaged in AI teaching and research.

As a result, UF extended AI-focused courses, workshops and projects across the university, enabling its 55,000 students to delve into AI and its interdisciplinary applications.

Friday’s ribbon-cutting will open exciting new opportunities for the university, its students and the state of Florida to realize the potential of AI innovations across sectors.

Huang likened pursuing knowledge through AI to embarking on a “starship.” “You’ve got to go as far as you can,” he urged students.

For a deeper exploration of Malachowsky Hall and UF’s groundbreaking AI initiatives, visit UF’s website.


How AI-Based Cybersecurity Strengthens Business Resilience

The world’s 5 billion internet users and nearly 54 billion devices generate 3.4 petabytes of data per second, according to IDC. As digitalization accelerates, enterprise IT teams are under greater pressure to identify and block incoming cyber threats to ensure business operations and services are not interrupted — and AI-based cybersecurity provides a reliable way to do so.

Few industries appear immune to cyber threats. This year alone, international hotel chains, financial institutions, Fortune 100 retailers, air traffic-control systems and the U.S. government have all reported threats and intrusions.

Whether from insider error, cybercriminals, hacktivists or other threats, risks in the cyber landscape can damage an enterprise’s reputation and bottom line. A breach can paralyze operations, jeopardize proprietary and customer data, result in regulatory fines and destroy customer trust.

Using AI and accelerated computing, businesses can reduce the time and operational expenses required to detect and block cyber threats while freeing up resources to focus on core business value operations and revenue-generating activities.

Here’s a look at how industries are applying AI techniques to safeguard data, enable faster threat detection and mitigate attacks to ensure the consistent delivery of service to customers and partners.

Public Sector: Protecting Physical Security, Energy Security and Citizen Services

AI-powered analytics and automation tools are helping government agencies provide citizens with instant access to information and services, make data-driven decisions, model climate change, manage natural disasters, and more. But public entities managing digital tools and infrastructure face a complex cyber risk environment that includes regulatory compliance requirements, public scrutiny, large interconnected networks and the need to protect sensitive data and high-value targets.

Adversary nation-states may initiate cyberattacks to disrupt networks, steal intellectual property or swipe classified government documents. Internal misuse of digital tools and infrastructure combined with sophisticated external espionage places public organizations at high risk of data breach. Espionage actors have also been known to recruit inside help, with 16% of public administration breaches showing evidence of collusion. To protect critical infrastructure, citizen data, public records and other sensitive information, federal organizations are turning to AI.

The U.S. Department of Energy’s (DOE) Office of Cybersecurity, Energy Security and Emergency Response (CESER) is tasked with strengthening the resilience of the country’s energy sector by addressing emerging threats and improving energy infrastructure security. The DOE-CESER has invested more than $240 million in cybersecurity research, development and demonstration projects since 2010.

In one project, the department developed a tool that uses AI to automate and optimize security vulnerability and patch management in energy delivery systems. Another project for artificial diversity and defense security uses software-defined networks to enhance the situational awareness of energy delivery systems, helping ensure uninterrupted flows of energy.

The Defense Advanced Research Projects Agency (DARPA), which is charged with researching and investing in breakthrough technologies for national security, is using machine learning and AI in several areas. The DARPA CASTLE program trains AI to defend against advanced, persistent cyber threats. As part of the effort, researchers intend to accelerate cybersecurity assessments with approaches that are automated, repeatable and measurable. The DARPA GARD program builds platforms, libraries, datasets and training materials to help developers build AI models that are resistant to deception and adversarial attacks.

To keep up with an evolving threat landscape and ensure physical security, energy security and data security, public organizations must continue integrating AI to achieve a dynamic, proactive and far-reaching cyber defense posture.

Financial Services: Securing Digital Transactions, Payments and Portfolios 

Banks, asset managers, insurers and other financial service organizations are using AI and machine learning to deliver superior performance in fraud detection, portfolio management, algorithmic trading and self-service banking.

With constant digital transactions, payments, loans and investment trades, financial service institutions manage some of the largest, most complex and most sensitive datasets of any industry. Behind only the healthcare industry, these organizations suffer the second-highest cost of a data breach, at nearly $6 million per incident. This cost grows if regulators issue fines or if recovery includes legal fees and lawsuit settlements. Worse still, lost business may never be recovered if trust can’t be repaired.

Banks and financial institutions use AI to improve insider threat detection, detect phishing and ransomware, and keep sensitive information safe.

FinSec Innovation Lab, a joint venture by Mastercard and Enel X, is using AI to help its customers defend against ransomware. Prior to working with FinSec, one card-processing customer suffered a LockBit ransomware attack in which 200 company servers were infected in just 1.5 hours. The company was forced to shut down servers and suspend operations, resulting in an estimated $7 million in lost business.

FinSec replicated this attack in its lab but deployed the NVIDIA Morpheus cybersecurity framework, NVIDIA DOCA software framework for intrusion detection and NVIDIA BlueField DPU computing clusters. With this mix of AI and accelerated computing, FinSec was able to detect the ransomware attack in less than 12 seconds, quickly isolate virtual machines and recover 80% of the data on infected servers. This type of real-time response helps businesses avoid service downtime and lost business while maintaining customer trust.

With AI to help defend against cyberattacks, financial institutions can identify intrusions and anticipate future threats to keep financial records, accounts and transactions secure.

Retail: Keeping Sales Channels and Payment Credentials Safe

Retailers are using AI to power personalized product recommendations, dynamic pricing and customized marketing campaigns. Multichannel digital platforms have made in-store and online shopping more convenient: up to 48% of consumers save a card on file with a merchant, significantly boosting card-not-present transactions. While digitization has brought convenience, it has also made sensitive data more accessible to attackers.

Sitting on troves of digital payment credentials for millions of customers, retailers are a prime target for cybercriminals looking to take advantage of security gaps. According to a recent Data Breach Investigations Report from Verizon, 37% of confirmed data disclosures in the retail industry resulted in stolen payment card data.

Malware attacks, ransomware and distributed denial of service attacks are all on the rise, but phishing remains the favored vector for an initial attack. With a successful phishing intrusion, criminals can steal credentials, access systems and launch ransomware.

Best Buy manages a network of more than 1,000 stores across the U.S. and Canada. With multichannel digital sales across both countries, protecting consumer information and transactions is critical. To defend against phishing and other cyber threats, Best Buy began using customized machine learning and NVIDIA Morpheus to better secure its infrastructure and inform its security analysts.

After deploying this AI-based cyber defense, the retail giant improved the accuracy of phishing detection to 96% while reducing false-positive rates. With a proactive approach to cybersecurity, Best Buy is protecting its reputation as a tech expert focused on customer needs.

From complex supply chains to third-party vendors and multichannel point-of-sale networks, expect retailers to continue integrating AI to protect operations as well as critical proprietary and customer data.

Smart Cities and Spaces: Protecting Critical Infrastructure and Transit Networks

IoT devices and AI that analyze movement patterns, traffic and hazardous situations have great potential to improve the safety and efficiency of spaces and infrastructure. But as airports, shipping ports, transit networks and other smart spaces integrate IoT and use data, they also become more vulnerable to attack.

In the last couple of years, there have been distributed denial of service (DDoS) attacks on airports and air traffic control centers and ransomware attacks on seaports, city municipalities, police departments and more. Attacks can paralyze information systems, ground flights, disrupt the flow of cargo and traffic, and delay the delivery of goods to markets. Hostile attacks could have far more serious consequences, including physical harm or loss of life.

In connected spaces, AI-driven security can analyze vast amounts of data to predict threats, isolate attacks and provide rapid self-healing after an intrusion. AI algorithms trained on emails can halt threats in the inbox and block phishing attempts like those that delivered ransomware to seaports earlier this year. Machine learning can be trained to recognize DDoS attack patterns to prevent the type of incoming malicious traffic that brought down U.S. airport websites last year.
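The idea of training a model to recognize DDoS traffic patterns can be sketched in miniature. This is an illustrative stand-in, not any production system: instead of a trained classifier, it uses a rolling statistical baseline of request rates and flags samples that deviate sharply from it.

```python
from collections import deque
from statistics import mean, stdev

def make_ddos_detector(window=20, threshold=3.0):
    """Flag request-rate samples that deviate sharply from the recent
    baseline -- a toy stand-in for a model trained on real attack traces."""
    history = deque(maxlen=window)

    def check(requests_per_sec):
        anomalous = False
        if len(history) >= window:  # need a full baseline before judging
            mu, sigma = mean(history), stdev(history)
            anomalous = (requests_per_sec - mu) / (sigma or 1.0) > threshold
        history.append(requests_per_sec)
        return anomalous

    return check

detector = make_ddos_detector()
for i in range(20):
    detector(100 + i % 5)   # ordinary traffic builds the baseline
print(detector(5000))       # sudden flood is flagged -> True
```

A real deployment would learn from labeled attack traffic and far richer features (source distribution, packet sizes, protocol mix), but the core pattern — baseline normal behavior, flag large deviations — is the same.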

Organizations adopting smart space technology, such as the Port of Los Angeles, are making efforts to get ahead of the threat landscape. In 2014, the Port of LA established a cybersecurity operations center and hired a dedicated cybersecurity team. In 2021, the port followed up with a cyber resilience center to enhance early-warning detection for cyberattacks that have the potential to impact the flow of cargo.

The U.S. Federal Aviation Administration has developed an AI certification framework that assesses the trustworthiness of AI and ML applications. The FAA also implements a zero-trust cyber approach, enforces strict access control and runs continuous verification across its digital environment.

By bolstering cybersecurity and integrating AI, smart space and transport infrastructure administrators can offer secure access to physical spaces and digital networks to protect the uninterrupted movement of people and goods.

Telecommunications: Ensuring Network Resilience and Blocking Incoming Threats

Telecommunications companies are leaning into AI to power predictive maintenance, maximize network uptime, and run network optimization, equipment troubleshooting, call-routing and self-service systems.

The industry is responsible for critical national infrastructure in every country, supports over 5 billion customer endpoints and is expected to constantly deliver above 99% reliability. As reliance on cloud, IoT and edge computing expands and 5G becomes the norm, immense digital surface areas must be protected from misuse and malicious attack.

Telcos can deploy AI to ensure the security and resilience of networks. AI can monitor IoT devices and edge networks to detect anomalies and intrusions, identify fake users, mitigate attacks and quarantine infected devices. AI can continuously assess the trustworthiness of devices, users and applications, thereby shortening the time needed to identify fraudsters.

Pretrained AI models can be deployed to protect 5G networks from threats such as malware, data exfiltration and denial-of-service attacks.

Using deep learning and NVIDIA BlueField DPUs, Palo Alto Networks has built a next-generation firewall that addresses 5G needs, maximizing cybersecurity performance while maintaining a small infrastructure footprint. The DPU powers accelerated intelligent network filtering to parse, classify and steer traffic, improving performance and isolating threats. With more efficient computing that deploys fewer servers, telcos can maximize the return on their compute investments and minimize digital attack surface areas.

By putting AI to work, telcos can build secure, encrypted networks to ensure network availability and data security for both individual and enterprise customers.

Automotive: Insulating Vehicle Software From Outside Influence and Attack 

Modern cars rely on complex AI and ML software stacks running on in-vehicle computers to process data from cameras and other sensors. These vehicles are essentially giant, moving IoT devices — they perceive the environment, make decisions, advise drivers and even control the vehicle with autonomous driving features.

Like other connected devices, autonomous vehicles are susceptible to various types of cyberattacks. Bad actors can infiltrate and compromise AV software both on board and from third-party providers. Denial of service attacks can disrupt over-the-air software updates that vehicles rely on to operate safely. Unauthorized access to communications systems like onboard WiFi, Bluetooth or RFID can expose vehicle systems to the risk of remote manipulation and data theft. This can jeopardize geolocation and sensor data, operational data, and driver and passenger data — all of which are crucial to functional safety and the driving experience.

AI-based cybersecurity can help monitor in-car and network activities in real time, allowing for rapid response to threats. AI can be deployed to secure and authenticate over-the-air updates to prevent tampering and ensure the authenticity of software updates. AI-driven encryption can protect data transmitted over WiFi, Bluetooth and RFID connections. AI can also probe vehicle systems for vulnerabilities and take remedial steps.
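Authenticating an over-the-air update is, at its core, a cryptographic check that the payload was produced by a trusted party and not tampered with in transit. A minimal sketch using a shared-key HMAC is shown below; this is illustrative only — real OTA pipelines typically use asymmetric signatures (e.g., ECDSA) anchored in a hardware root of trust, and the key and firmware names here are hypothetical.

```python
import hashlib
import hmac

# Illustrative shared key; production systems use asymmetric signing keys.
SHARED_KEY = b"demo-ota-signing-key"

def sign_update(payload: bytes, key: bytes = SHARED_KEY) -> str:
    """Produce a MAC the vehicle can check before applying an update."""
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

def verify_update(payload: bytes, mac: str, key: bytes = SHARED_KEY) -> bool:
    """Constant-time comparison guards against timing attacks."""
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, mac)

firmware = b"ecu-firmware-v2.4.1"           # hypothetical update payload
mac = sign_update(firmware)
print(verify_update(firmware, mac))          # True: payload untampered
print(verify_update(firmware + b"\x00", mac))  # False: modified in transit
```

The AI layer described above would sit alongside such checks — monitoring update traffic for anomalies — rather than replace the cryptographic verification itself.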

Ranging from AI-powered access control to unlock and start vehicles to detecting deviations in sensor performance and patching security vulnerabilities, AI will play a crucial role in the safe development and deployment of autonomous vehicles on our roads.

Keeping Operations Secure and Customers Happy With AI Cybersecurity 

By deploying AI to protect valuable data and digital operations, industries can focus their resources on innovating better products, improving customer experiences and creating new business value.

NVIDIA offers a number of tools and frameworks to help enterprises swiftly adjust to the evolving cyber risk environment. The NVIDIA Morpheus cybersecurity framework provides developers and software vendors with optimized, easy-to-use tools to build solutions that can proactively detect and mitigate threats while drastically reducing the cost of cyber defense operations. To help defend against phishing attempts, the NVIDIA spear phishing detection AI workflow uses NVIDIA Morpheus and synthetic training data created with the NVIDIA NeMo generative AI framework to flag and halt inbox threats.

The Morpheus SDK also enables digital fingerprinting to collect and analyze behavior characteristics for every user, service, account and machine across a network to identify atypical behavior and alert network operators. With the NVIDIA DOCA software framework, developers can create software-defined, DPU-accelerated services, while leveraging zero trust to build more secure applications.
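The intuition behind digital fingerprinting can be sketched with a toy per-account baseline. This is an illustrative sketch, not the Morpheus API: it learns each user's typical mix of actions, then scores new activity by how much of it falls outside that history.

```python
from collections import Counter, defaultdict

class BehaviorFingerprint:
    """Toy per-user baseline: learn each account's typical action mix,
    then score new sessions by the fraction of never-before-seen actions."""

    def __init__(self):
        self.baselines = defaultdict(Counter)

    def observe(self, user, actions):
        """Record normal activity for a user."""
        self.baselines[user].update(actions)

    def anomaly_score(self, user, actions):
        """0.0 = fully typical; 1.0 = entirely outside the baseline."""
        seen = self.baselines[user]
        if not seen:
            return 1.0  # unknown account: maximally suspicious
        unseen = sum(1 for a in actions if a not in seen)
        return unseen / len(actions)

fp = BehaviorFingerprint()
for _ in range(50):
    fp.observe("alice", ["login", "read_report", "logout"])

print(fp.anomaly_score("alice", ["login", "read_report", "logout"]))   # 0.0
print(fp.anomaly_score("alice", ["login", "dump_database", "exfiltrate"]))  # ~0.67
```

A production system would model timing, volume and sequence statistics per user, service and machine, but the alerting principle — compare live behavior against a learned per-entity baseline — is the same.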

AI-based cybersecurity empowers developers across industries to build solutions that can identify, capture and act on threats and anomalies to ensure business continuity and uninterrupted service, keeping operations safe and customers happy.

Learn how AI can help your organization achieve a proactive cybersecurity posture to protect customer and proprietary data to the highest standards.
