Leonardo da Vinci’s portrait of Jesus, known as Salvator Mundi, sold at auction for nearly half a billion dollars in 2017, making it the most expensive painting ever to change hands.
However, even art history experts were skeptical about whether the work was an original by the master or the product of one of his many protégés.
Steven Frank is a partner at the law firm Morgan Lewis, specializing in intellectual property and commercial technology law. He’s also half of the husband-wife team that used convolutional neural networks to determine that this painting was likely an authentic da Vinci.
Authenticating art is a great challenge, as the characteristics that distinguish one artist’s work from another’s are extremely subtle. Determining whether a piece is authentic requires an exceptionally fine-grained analysis of the painting’s details.
Using large datasets, the Franks trained convolutional neural networks to examine small, manageable segments of masterpieces to analyze and classify their artists’ patterns, down to their brush strokes. The model determined that the Salvator Mundi painting sold five years ago is likely the real work of da Vinci.
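The Franks’ code isn’t published here, but the patch-based idea behind their analysis is straightforward to sketch. The PyTorch snippet below is a minimal, hypothetical illustration: it slices a painting into fixed-size tiles, scores each tile with a small CNN classifier, and averages the per-tile probabilities into a single attribution score. The architecture, tile size and “painting.jpg” path are assumptions for illustration, not the Franks’ actual pipeline.

```python
# Minimal sketch of patch-based attribution: slice a painting into tiles,
# classify each tile with a small CNN, and average the per-tile scores.
# The architecture, tile size and file path are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import transforms
from PIL import Image

TILE = 224  # pixels per square tile

class TileClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 2)  # classes: [another hand, the artist]

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

def tiles(image, size=TILE):
    """Yield non-overlapping square tiles of the image as tensors."""
    to_tensor = transforms.ToTensor()
    width, height = image.size
    for top in range(0, height - size + 1, size):
        for left in range(0, width - size + 1, size):
            yield to_tensor(image.crop((left, top, left + size, top + size)))

@torch.no_grad()
def attribution_score(model, image):
    batch = torch.stack(list(tiles(image)))
    probs = torch.softmax(model(batch), dim=1)[:, 1]  # P(artist) per tile
    return probs.mean().item()

# In practice the classifier would first be trained on tiles of known works.
model = TileClassifier()
score = attribution_score(model, Image.open("painting.jpg").convert("RGB"))
print(f"mean per-tile probability of attribution: {score:.2f}")
```

Working at the tile level keeps each input small enough to train on modest hardware while still exposing the brushwork-scale detail that the classification depends on.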
Tweetables:
AI might sometimes “be wrong, but it will always be objective, if you train it properly.” — Steven Frank [10:48]
“The most fascinating thing about AI research these days is that you can do cutting-edge AI research on an inexpensive PC … as long as it has an NVIDIA GPU.” — Steven Frank [22:43]
Researchers in the Department of Anthropology at Northern Arizona University are using GPU-based deep learning algorithms to categorize sherds — tiny fragments of ancient pottery.
Endangered species can be challenging to study, as they are elusive and the very act of observing them can disrupt their lives. Now, scientists can take a closer look at endangered species by studying AI-generated 3D representations of them.
Carol Song is opening a door for researchers to advance science on Anvil, Purdue University’s new AI-ready supercomputer, an opportunity she couldn’t have imagined as a teenager in China.
“I grew up in a tumultuous time when, unless you had unusual circumstances, the only option for high school grads was to work alongside farmers or factory workers, then suddenly I was told I could go to college,” said Song, now the project director of Anvil.
And not just any college. Her scores on a national entrance exam opened the door to Tsinghua University, home to China’s most prestigious engineering school.
Along the way, someone told her computers would be big, so she signed up for computer science before she had ever seen a computer. She learned soon enough.
“We were building hardware from the ground up, designing microinstructions and logic circuits, so I got to understand computers from the inside out,” she said.
Easing Access to Supercomputers
Skip forward a few years to grad school at the University of Illinois when another big door opened.
While working in distributed systems, she was hired as one of the first programmers at the National Center for Supercomputing Applications, one of the first sites in a U.S. program that funded supercomputers shared among researchers.
To make the systems more accessible, she helped develop alternatives to the crude editing tools of the day that displayed one line of a program at a time. And she helped pioneering researchers like Michael Norman create visualizations of their work.
GPUs Add AI to HPC
In 2005, she joined Purdue, where she has helped manage nearly three dozen research projects representing more than $60 million in grants as a senior research scientist in the university’s supercomputing center.
“All that helped when we started defining Anvil. I see researchers’ pain points when they are getting on a new system,” said Song.
The system, built by Dell Technologies, will deliver 5.3 petaflops and half a million GPU hours per year to tens of thousands of researchers across the U.S. working on the National Science Foundation’s XSEDE network.
Anvil Forges Desktop, Cloud Links
To harness that power, Anvil supports interactive user interfaces as well as the batch jobs that are traditional in high performance computing.
“Researchers can use their favorite tools like Jupyter notebooks and remote desktop interfaces, so the cluster can look just like their daily work environment,” she said.
Anvil will also support links to Microsoft Azure, so researchers can access its large datasets and commercial cloud-computing muscle. “It’s an innovative part of this system that will let researchers experiment with creating workflows that span research and commercial environments,” Song said.
Fighting COVID, Exploring AI
More than 30 research teams have already signed up to be early users of Anvil.
One team will apply deep learning to medical images to improve diagnosis of respiratory diseases including COVID-19. Another will build causal and logical checkpoints into neural networks to explore why deep learning delivers excellent results.
“We’ll support a lot of GPU-specific tools like NGC containers for accelerated applications, and as with every new system, users can ask for additional toolkits and libraries they want,” she said.
The Anvil team aims to invite industry collaborations to test new ideas using up to 10 percent of the system’s capacity. “It’s a discretionary use we want to apply strategically to enable projects that wouldn’t happen without such resources,” she said.
Opening Doors for Science and Inclusion
Early users are working on Anvil today and the system will be available for all users in about a month.
Anvil’s opening day has a special significance for Song, one of the few women to act as a lead manager for a national supercomputer site.
Carol Song and Purdue’s Anvil supercomputer
“I’ve been fortunate to be in environments where I’ve always been encouraged to do my best and given opportunities,” she said.
“Around the industry and the research computing community there still aren’t a lot of women in leadership roles, so it’s an ongoing effort and there’s a lot of room to do better, but I’m also very enthusiastic about mentoring women to help them get into this field,” she added.
Purdue’s research computing group shares Song’s enthusiasm about getting women into supercomputing. It’s home to one of the first chapters of the international Women in High-Performance Computing organization.
Purdue’s Women in HPC chapter sent an all-female team to a student cluster competition at SC18. It also hosts outside speakers, provides travel support to attend conferences and connects students and early career professionals to experienced mentors like Song.
Pictured at top: Carol Song, Anvil’s principal investigator (PI) and project director, along with Anvil co-PIs (from left) Rajesh Kalyanam, Xiao Zhu and Preston Smith.
Whether facilitating cancer screenings, cutting down on false positives, or improving tumor identification and treatment planning, AI is a powerful agent for healthcare innovation and acceleration.
Yet, despite its promise, integrating AI into actual solutions can challenge many IT organizations.
The Netherlands Cancer Institute (NKI), one of the world’s top-rated cancer research and treatment centers, is using the NVIDIA AI Enterprise software suite to test AI workloads on higher-precision 3D cancer scans than are commonly used today.
NKI’s AI model was previously trained on lower-resolution images. But with the higher memory capacity offered by NVIDIA AI Enterprise, its researchers could instead use high-resolution images for training. This improvement helps clinicians better target the size and location of a tumor every time a patient receives treatment.
The NVIDIA AI Enterprise suite that NKI deployed is designed to optimize the development and deployment of AI. It’s certified and supported by NVIDIA to enable hospitals, researchers and IT professionals to run AI workloads on mainstream servers with VMware vSphere in their on-prem data centers and private clouds.
Delivering treatments on virtualized infrastructure means hospitals and research institutions can use the same tools they already work with on existing applications. This helps maximize their investments while making innovations in care more affordable and accessible.
NKI used an AI model to better reconstruct a Cone Beam Computed Tomography (CBCT) thoracic image, resulting in clearer image quality compared to conventional methods.
Speeding Breakthroughs in Healthcare Research
NKI had gotten off to a quick start with its project on NVIDIA AI Enterprise by using NVIDIA LaunchPad.
The LaunchPad program provides immediate access to optimized software running on accelerated infrastructure to help customers prototype and test data science and AI workloads. This month, the program was extended to nine Equinix locations worldwide.
The NVIDIA AI Enterprise software suite, available in LaunchPad, makes it possible to run advanced AI workloads on mainstream accelerated servers with VMware vSphere, including systems from Dell Technologies, Hewlett Packard Enterprise, Lenovo and many others.
Rhino Health, a federated learning platform powered by NVIDIA FLARE, is available today through NVIDIA AI Enterprise, making it easy for any hospital to leverage federated learning for AI development and validation. Other organizations, like the American College of Radiology’s AI LAB, are also planning to use the NVIDIA AI Enterprise software.
Researchers at NKI used NVIDIA AI Enterprise, running on HPE Synergy, a composable infrastructure system from Hewlett Packard Enterprise, to build deep learning models that combine massive 2D and 3D data sources to pinpoint the location of tumors before each radiotherapy treatment session.
“Doctors could use this solution as an alternative to CT scans on day of treatment to optimize the treatment plan to validate the radiotherapy plan,” said Jonas Teuwen, group leader at the Netherlands Cancer Institute.
Using NVIDIA AI Enterprise, Teuwen’s team in Amsterdam ran their workloads on NVIDIA A100 80GB GPUs in a server hosted in Silicon Valley. Their convolutional neural network was built in less than three months and was trained on less than 300 clinical lung CT scans that were then reconstructed and generalized to head and neck data.
In the future, NKI researchers also hope to translate this work to potential use cases in interventional radiology to repair arteries in cardiac surgeries and dental surgery implants.
Accelerating Hospital AI Deployment With NVIDIA AI Enterprise
NVIDIA AI Enterprise simplifies the AI rollout experience for organizations that host a variety of healthcare and operations applications on virtualized infrastructure. It enables IT administrators to run AI applications like Vyasa and iCAD alongside core hospital applications, streamlining the workflow in an environment they’re already familiar with.
Compute resources can be adjusted with just a few clicks, giving hospitals the ability to transform care for both patients and healthcare providers.
Vyasa, a provider of deep learning analysis tools for healthcare and life sciences, uses NVIDIA AI Enterprise to build applications that can search unstructured content such as patient care records. With the software, Vyasa can develop its deep learning applications faster and comb through unstructured data and PDFs to assess which patients are at higher risk, identifying those who haven’t been in for a checkup in more than a year and refining the results by additional risk factors like age and race.
“NVIDIA AI Enterprise has reduced our deployment times by half thanks to rapid provisioning of platform requirements that eliminate the need to manually download and integrate software packages,” said Frans Lawaetz, CIO at Vyasa.
Radiologists use iCAD’s innovative ProFound AI software to assist with reading mammograms. These AI solutions help identify cancer earlier, categorize breast density, and accurately assess short-term personalized breast cancer risk based on each woman’s screening mammogram. Running advanced workloads with VMware vSphere is important for iCAD’s healthcare customers as they can easily integrate their data intensive applications into any hospital infrastructure.
A host of other software makers, like the American College of Radiology’s AI LAB and Rhino Health, with its federated learning platform, have begun validating their software on NVIDIA AI Enterprise to ease deployment by integrating into a common healthcare IT infrastructure.
The ability for NVIDIA AI Enterprise to unify the data center for healthcare organizations has sparked the creation of an ecosystem with NVIDIA technology at its heart. The common NVIDIA and VMware infrastructure benefits software vendors and healthcare organizations alike by making the deployment and management of these solutions much easier.
For many healthcare IT and software companies, integrating AI into hospital environments is a top priority. Many NVIDIA Inception partners will be testing the ease of deploying their offerings on NVIDIA AI Enterprise in these types of environments. They include Aidence, Arterys, contextflow, ImageBiopsy Lab, InformAI, MD.ai, methinks.ai, RADLogics, Sciberia, Subtle Medical and VUNO.
NVIDIA Inception is a program that offers go-to-market support, expertise and technology for AI, data science and HPC startups.
Main image shows how NVIDIA AI Enterprise allows hospital IT administrators to run AI applications alongside core hospital applications, like iCAD Profound AI Software for mammograms.
NVIDIA is making it easier than ever for researchers to harness federated learning by open-sourcing NVIDIA FLARE, a software development kit that helps distributed parties collaborate to develop more generalizable AI models.
Federated learning is a privacy-preserving technique that’s particularly beneficial in cases where data is sparse, confidential or lacks diversity. But it’s also useful for large datasets, which can be biased by an organization’s data collection methods, or by patient or customer demographics.
NVIDIA FLARE — short for Federated Learning Application Runtime Environment — is the engine underlying NVIDIA Clara Train’s federated learning software, which has been used for AI applications in medical imaging, genetic analysis, oncology and COVID-19 research. The SDK allows researchers and data scientists to adapt their existing machine learning and deep learning workflows to a distributed paradigm.
Making NVIDIA FLARE open source will better empower cutting-edge AI in almost any industry by giving researchers and platform developers more tools to customize their federated learning solutions.
With the SDK, researchers can choose among different federated learning architectures, tailoring their approach for domain-specific applications. And platform developers can use NVIDIA FLARE to provide customers with the distributed infrastructure required to build a multi-party collaboration application.
Flexible Federated Learning Workflows for Multiple Industries
Federated learning participants work together to train or evaluate AI models without having to pool or exchange each group’s proprietary datasets. NVIDIA FLARE provides different distributed architectures that accomplish this, including peer-to-peer, cyclic and server-client approaches, among others.
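NVIDIA FLARE’s own APIs handle the orchestration, but the core server-client pattern, federated averaging, is easy to sketch. The Python snippet below is a conceptual illustration only and is not the NVIDIA FLARE API: each simulated client runs a few training steps on its own private data, and the server aggregates only the resulting model weights, never the data itself.

```python
# Conceptual sketch of server-client federated averaging (FedAvg).
# This illustrates the idea only; it is not the NVIDIA FLARE API.
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client: a few gradient steps of linear regression on private data."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

# Three clients, each holding private data that never leaves its "site".
true_w = np.array([3.0, -2.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    clients.append((X, y))

global_w = np.zeros(2)
for _ in range(20):
    # Server broadcasts the global model; each client trains locally.
    local_ws = [local_update(global_w, X, y) for X, y in clients]
    # Server aggregates only the weights, not the underlying data.
    global_w = np.mean(local_ws, axis=0)

print("federated estimate:", np.round(global_w, 2))  # close to [ 3. -2.]
```

Real deployments add secure communication, weighting by dataset size and privacy-preserving aggregation on top of this basic loop.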
The server-client architecture was also used for two federated learning collaborations using NVIDIA FLARE: NVIDIA worked with Roche Digital Pathology researchers to run a successful internal simulation using whole slide images for classification, and with Netherlands-based Erasmus Medical Center for an AI application that identifies genetic variants associated with schizophrenia cases.
But not every federated learning application is suited to the server-client approach. By supporting additional architectures, NVIDIA FLARE will make federated learning accessible to a wider range of applications. Potential use cases include helping energy companies analyze seismic and wellbore data, manufacturers optimize factory operations and financial firms improve fraud detection models.
NVIDIA FLARE Integrates With Healthcare AI Platforms
“Open-sourcing NVIDIA FLARE to accelerate federated learning research is especially important in the healthcare sector, where access to multi-institutional datasets is crucial, yet concerns around patient privacy can limit the ability to share data,” said Dr. Jayashree Kalpathy-Cramer, associate professor of radiology at Harvard Medical School and leader of the MONAI community’s federated learning working group. “We are excited to contribute to NVIDIA FLARE and continue the integration with MONAI to push the frontiers of medical imaging research.”
NVIDIA FLARE will also be used to power federated learning solutions at:
American College of Radiology (ACR): The medical society has worked with NVIDIA on federated learning studies that apply AI to radiology images for breast cancer and COVID-19 applications. It plans to distribute NVIDIA FLARE in the ACR AI-LAB, a software platform that is available to the society’s tens of thousands of members.
Flywheel: The company’s Flywheel Exchange platform enables users to access and share data and algorithms for biomedical research, manage federated projects for analysis and training, and choose their preferred federated learning solution — including NVIDIA FLARE.
Taiwan Web Service Corporation: The company offers a GPU-powered MLOps platform that enables customers to run federated learning based on NVIDIA FLARE. Five medical imaging projects are currently being conducted on the company’s private cluster, each with several participating hospitals.
Rhino Health: The partner and member of the NVIDIA Inception program has integrated NVIDIA FLARE into its federated learning solution, which is helping researchers at Massachusetts General Hospital develop an AI model that more accurately diagnoses brain aneurysms, and experts at the National Cancer Institute’s Early Detection Research Network develop and validate medical imaging AI models that identify early signs of pancreatic cancer.
“To collaborate effectively and efficiently, healthcare researchers need a common platform for AI development without the risk of breaching patient privacy,” said Dr. Ittai Dayan, founder of Rhino Health. “Rhino Health’s ‘Federated Learning as a Platform’ solution, built with NVIDIA FLARE, will be a useful tool to help accelerate the impact of healthcare AI.”
Get started with federated learning by downloading NVIDIA FLARE. Hear more about NVIDIA’s work in healthcare by tuning in to a special address on Nov. 29 at 6 p.m. CT by David Niewolny, director of healthcare business development at NVIDIA, at RSNA, the Radiological Society of North America’s annual meeting.
Happy Thanksgiving, members. It’s a very special GFN Thursday.
As the official kickoff to what’s sure to be a busy holiday season for our members around the globe, this week’s GFN Thursday brings a few reminders of the joys of PC gaming in the cloud.
Plus, kick back for the holiday with four new games coming to the GeForce NOW library this week.
Game Away the Holiday
With the power of the cloud, any laptop can be a gaming laptop — even a Mac or Chromebook.
The holidays are often spent celebrating with extended family — which is great, until Aunt Petunia starts trying to teach you cross-stitch or Grandpa Harold begins another one of his fishing trip stories. If you need a break from the relatives, get your gaming in, powered by the cloud.
With GeForce NOW, nearly any device can become a GeForce gaming rig. Grab Uncle Buck’s Chromebook and get a few rounds of Apex Legends in, or check in with Star-Lord and the crew from your mobile device in Marvel’s Guardians of the Galaxy. You can even squad up on some MacBooks with your cousins for a few Destiny 2 raids at the kids’ table, where we know the real fun is.
How about escaping for a bit to a tropical jungle? For a limited time, get a copy of Crysis Remastered free with the purchase of a six-month Priority membership or the new GeForce NOW RTX 3080 membership. Terms and conditions apply.
GeForce NOW members can experience the first game in the Crysis series — or 1,000+ more games — across nearly all of their devices, turning even a Mac or a mobile device into the ultimate gaming rig. It’s the perfect way to keep the gaming going after pumpkin pie is served.
The Gift of Gaming
The easiest upgrade in PC gaming makes a perfect gift for gamers.
GeForce NOW Priority Membership digital gift cards are now available in 2-month, 6-month or 12-month options. Give the gift of powerful PC gaming to a special someone who uses a low-powered device, a budding gamer using a Mac, or a squadmate who’s gaming on the go.
Gift cards can be redeemed on an existing GeForce NOW account or added to a new one. Existing Founders and Priority members will have the number of months added to their accounts.
Eat, Play and Be Merry
Make your way up from the bottom to the top, confront the tyrannical Keymaster and take your revenge in Ghostrunner, streaming on GeForce NOW.
Between bites of stuffing and mashed potatoes, members can look for the following games joining the GeForce NOW library:
Fate Seeker II (day-and-date release on Steam, Nov. 23)
theHunter: Call of the Wild (day-and-date release on Epic Games Store, Nov. 25)
We make every effort to launch games on GeForce NOW as close to their release as possible, but, in some instances, games may not be available immediately.
We initially planned to add Farming Simulator 22 to GeForce NOW in November, but discovered an issue during our onboarding process. We hope to add the game in the coming weeks.
Whether you’re celebrating Thanksgiving or just looking forward to a gaming-filled weekend, tell us what you’re thankful for on Twitter or in the comments below.
A picture worth a thousand words now takes just three or four words to create, thanks to GauGAN2, the latest version of NVIDIA Research’s wildly popular AI painting demo.
The deep learning model behind GauGAN allows anyone to channel their imagination into photorealistic masterpieces — and it’s easier than ever. Simply type a phrase like “sunset at a beach” and AI generates the scene in real time. Add an adjective, as in “sunset at a rocky beach,” or swap “sunset” for “afternoon” or “rainy day,” and the model, based on generative adversarial networks, instantly modifies the picture.
With the press of a button, users can generate a segmentation map, a high-level outline that shows the location of objects in the scene. From there, they can switch to drawing, tweaking the scene with rough sketches using labels like sky, tree, rock and river, allowing the smart paintbrush to incorporate these doodles into stunning images.
The new GauGAN2 text-to-image feature can now be experienced on NVIDIA AI Demos, where visitors to the site can experience AI through the latest demos from NVIDIA Research. With the versatility of text prompts and sketches, GauGAN2 lets users create and customize scenes more quickly and with finer control.
An AI of Few Words
GauGAN2 combines segmentation mapping, inpainting and text-to-image generation in a single model, making it a powerful tool to create photorealistic art with a mix of words and drawings.
The demo is one of the first to combine multiple modalities — text, semantic segmentation, sketch and style — within a single GAN framework. This makes it faster and easier to turn an artist’s vision into a high-quality AI-generated image.
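The research model itself isn’t reproduced here, but the idea of conditioning a single generator on several modalities can be sketched at a high level. The PyTorch snippet below is a simplified, hypothetical generator that fuses a text embedding with a one-hot segmentation map before decoding to an image; the layer sizes and fusion scheme are assumptions for illustration, not the GauGAN2 architecture.

```python
# Simplified sketch of multi-modal conditioning: a generator that fuses a
# text embedding with a segmentation map. Layer sizes and the fusion scheme
# are illustrative assumptions, not the actual GauGAN2 architecture.
import torch
import torch.nn as nn

class MultiModalGenerator(nn.Module):
    def __init__(self, num_classes=10, text_dim=64):
        super().__init__()
        # Project the text embedding into a small spatial feature map.
        self.text_proj = nn.Linear(text_dim, 16 * 8 * 8)
        # Encode the one-hot segmentation map.
        self.seg_enc = nn.Conv2d(num_classes, 16, kernel_size=3, padding=1)
        # Decode the fused features into an RGB image.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, text_emb, seg_map):
        # text_emb: (N, text_dim); seg_map: (N, num_classes, 32, 32)
        t = self.text_proj(text_emb).view(-1, 16, 8, 8)
        t = nn.functional.interpolate(t, size=seg_map.shape[-2:])
        s = self.seg_enc(seg_map)
        fused = torch.cat([t, s], dim=1)  # concatenate the two modalities
        return self.decoder(fused)        # (N, 3, 128, 128)

gen = MultiModalGenerator()
text_emb = torch.randn(1, 64)                              # an encoded phrase
seg_map = torch.zeros(1, 10, 32, 32); seg_map[:, 0] = 1.0  # everything "sky"
print(gen(text_emb, seg_map).shape)  # torch.Size([1, 3, 128, 128])
```

Because both inputs feed the same generator, changing either one, whether by retyping the phrase or repainting the map, updates the same underlying image.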
Rather than needing to draw out every element of an imagined scene, users can enter a brief phrase to quickly generate the key features and theme of an image, such as a snow-capped mountain range. This starting point can then be customized with sketches to make a specific mountain taller or add a couple of trees in the foreground, or clouds in the sky.
It doesn’t just create realistic images — artists can also use the demo to depict otherworldly landscapes.
Imagine, for instance, recreating a landscape from the iconic planet of Tatooine in the Star Wars franchise, which has two suns. All that’s needed is the text “desert hills sun” to create a starting point, after which users can quickly sketch in a second sun.
It’s an iterative process, where every word the user types into the text box adds more to the AI-created image.
The AI model behind GauGAN2 was trained on 10 million high-quality landscape images using the NVIDIA Selene supercomputer, an NVIDIA DGX SuperPOD system that’s among the world’s 10 most powerful supercomputers. The researchers used a neural network that learns the connection between words and the visuals they correspond to, such as “winter,” “foggy” or “rainbow.”
Compared to state-of-the-art models specifically for text-to-image or segmentation map-to-image applications, the neural network behind GauGAN2 produces a greater variety and higher quality of images.
The GauGAN2 research demo illustrates the future possibilities for powerful image-generation tools for artists. One example is the NVIDIA Canvas app, which is based on GauGAN technology and available to download for anyone with an NVIDIA RTX GPU.
NVIDIA Research has more than 200 scientists around the globe, focused on areas including AI, computer vision, self-driving cars, robotics and graphics. Learn more about their work.
You don’t need a private plane to be at the forefront of personal travel.
Electric automaker Xpeng took the wraps off the G9 SUV this week at the international Auto Guangzhou show in China. The intelligent, software-defined vehicle is built on the high-performance compute of NVIDIA DRIVE Orin and delivers AI capabilities that are continuously upgraded with each over-the-air update.
The new flagship SUV debuts Xpeng’s centralized electronic and electrical architecture and Xpilot 4.0 advanced driver assistance system for a seamless driving experience. The G9 is also compatible with the next-generation “X-Power” superchargers, which can add up to 124 miles of range in five minutes.
The Xpeng G9 and its fellow EVs are elevating the driving experience with intelligent features that are always at the cutting edge.
Intelligence at the Edge
The G9 is intelligently designed from the inside out.
The SUV is the first to be equipped with Xpilot 4.0, an AI-assisted driving system capable of address-to-address automated driving, including valet parking.
Xpilot 4.0 is built on two NVIDIA DRIVE Orin systems-on-a-chip (SoC), achieving 508 trillion operations per second (TOPS). It uses an 8-million-pixel front-view camera and 2.9-million-pixel side-view cameras that cover front, rear, left and right views, as well as a highly integrated and expandable domain controller.
This technology is incorporated into a centralized compute architecture for a streamlined design, powerful performance and seamless upgrades.
Charging Ahead
The G9 is designed for the international market, bringing software-defined innovation to roads around the world.
It incorporates new signature details, such as daytime running lights designed to make a sharp-eyed impression. Four daytime running lights at the top and bottom of the headlights form the Xpeng logo. These headlights also include discreetly integrated lidar sensors, merging cutting-edge technology with an elegant exterior.
In addition to fast charging, the electric SUV meets global sustainability requirements as well as NCAP five-star safety standards. The G9 is scheduled to officially launch in China in the third quarter of 2022, with plans to expand to global markets soon after.
The intelligent EV joined a growing lineup of software-defined vehicles powered by NVIDIA DRIVE that are transforming the way the world moves. Also on the Auto Guangzhou show floor until the event closes on Nov. 28 are the Human Horizons HiPhi Z Digital-GT, the NIO ET7 and SAIC’s IM Motors all-electric lineup, displaying the depth and diversity of the NVIDIA DRIVE Orin ecosystem.
When Julien Trombini and Guillaume Cazenave founded video-analytics startup Two-i four years ago, they had an ambitious goal: improving the quality of urban life by one day being able to monitor a city’s roads, garbage collection and other public services.
Along the way, the pair found a wholly different niche. Today, the company’s technology — which combines computer vision, data science and deep learning — is helping to prevent deadly accidents in the oil and gas industry, one of the world’s most dangerous sectors.
Initially, Trombini and Cazenave envisioned a system that would enable civic leaders to see what improvements were needed across a municipality.
“It would be like having a weather map of the city, only one that measures efficiency,” said Trombini, who serves as chairman of Two-i, an NVIDIA Metropolis partner based in Metz, a historic city in northeast France.
That proved a tall order, so the two refocused on specific facilities, such as stadiums, retirement homes and transit stations, where its tech helps with security and incident detection. For instance, it can alert the right people when a retirement home resident falls in a corridor. Or when a transit rider using a wheelchair can’t get on a train because of a broken lift.
Two-i founders Julien Trombini (left) and Guillaume Cazenave.
More recently, the company was approached by ExxonMobil to help with a potentially deadly issue: improving worker safety around open oil tanks.
Together with the energy giant, Two-i has created an AI-enabled video analytics application that detects when individuals approach a danger zone and risk falling, and immediately alerts others so they can take quick action. In its initial months of operation, the vision AI system prevented two accidents.
While this use case is highly specific, the company’s AI architecture is designed to flexibly support many different algorithms and functions.
“The algorithms are exactly the same as what we’re using for different clients,” said Trombini. “It’s the same technology, but it’s packaged in a different way.”
Making the Most of Vision AI
Two-i’s flexibility stems from its use of the NVIDIA Metropolis platform for AI-enabled video analytics applications, leveraging its advanced tools and taking a full-stack approach.
To do so, it relies on a variety of NVIDIA-Certified Systems, using the latest workstation and data center GPUs based on the high-performance NVIDIA Ampere architecture, for both training and inference. To shorten training times further, Two-i is looking to test its huge image dataset on the powerful NVIDIA A100 GPU.
The company looks to frequently upgrade its GPUs to ensure it’s offering customers the fastest possible solution, no matter how many cameras are feeding data into its system.
“The time we can save there is crucial, and the better the hardware, the more accurate the results and faster we get to market,” said Trombini.
Two-i taps the CUDA 11.1 toolkit and cuDNN 8.1 library to support its deep learning process, and NVIDIA TensorRT to accelerate inference throughput.
Trombini says one of the most compelling pieces of NVIDIA tech is the NVIDIA TAO Toolkit, which helps the company keep costs down as it tinkers with its algorithms.
“The heavier the algorithm, the more expensive,” he said. “We use the TAO toolkit to prune algorithms and make them more tailored to the task.”
For example, training that initially took up to two weeks has been slashed to three days using the NVIDIA TAO Toolkit, a CLI- and Jupyter Notebook-based version of the NVIDIA Train, Adapt and Optimize framework.
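The TAO Toolkit’s pruning implementation isn’t shown in this story, but the underlying idea, removing low-magnitude weights so a network becomes smaller and cheaper to retrain, can be illustrated with PyTorch’s built-in pruning utility. The snippet below is a minimal sketch rather than Two-i’s pipeline or the TAO Toolkit itself, and the 30 percent ratio is an arbitrary example.

```python
# Minimal sketch of magnitude-based pruning with PyTorch's built-in utility.
# It illustrates the general idea behind pruning, not the TAO Toolkit itself.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
)

# Zero out the 30% of weights with the smallest L1 magnitude in each conv layer.
for module in model.modules():
    if isinstance(module, nn.Conv2d):
        prune.l1_unstructured(module, name="weight", amount=0.3)
        prune.remove(module, "weight")  # bake the pruning into the weights

# Measure how sparse the model is now; it would then be fine-tuned (retrained).
total = sum(p.numel() for p in model.parameters())
zeros = sum((p == 0).sum().item() for p in model.parameters())
print(f"{zeros / total:.0%} of parameters zeroed out")
```

Production toolkits typically prune whole channels so the network genuinely shrinks; the unstructured variant above is simply the shortest way to show the magnitude-based selection.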
Two-i has also started benchmarking NVIDIA’s pretrained models against its algorithms and begun using the NVIDIA DeepStream SDK to enhance its video analytics pipeline.
Building on Success
Two-i sees its ability to solve complicated problems in a variety of settings, such as for ExxonMobil, as a springboard to swinging back around to its original smart city aspirations.
Already, it’s monitoring all roads in eight European cities, analyzing traffic flows and understanding where cars are coming from and going to.
Trombini recognizes that Two-i has to keep its focus on delivering one benefit after another to achieve the company’s long-term goals.
“It’s coming slowly,” he said, “but we are starting to implement our vision.”
By the time the night was over, it felt like Jensen Huang had given everyone in the ballroom a good laugh and a few things to think about.
The annual dinner of the Semiconductor Industry Association — a group of companies that together employ a quarter-million workers in the U.S. and racked up U.S. sales over $200 billion last year — attracted the governors of Indiana and Michigan and some 200 industry executives, including more than two dozen chief executives.
They came to network, get an update on the SIA’s work in Washington, D.C., and bestow the 2021 Robert N. Noyce award, their highest honor, on the founder and CEO of NVIDIA.
“Before we begin, I want to say it’s so nice to be back in person,” said John Neuffer, SIA president and CEO, to applause from a socially distanced audience.
The group heard comments on video from U.S. Senator Chuck Schumer, of New York, and U.S. Commerce Secretary Gina Raimondo about pending legislation supporting the industry.
Recognizing ‘an Icon’
Turning to the Noyce award, Neuffer introduced Huang as “an icon in our industry. From starting NVIDIA in a rented townhouse in Fremont, California, in 1993, he has become one of the industry’s longest-serving and most successful CEOs of what is today by market cap the world’s eighth most valuable company,” he said.
“I accept this on behalf of all NVIDIA’s employees because it reflects their body of work,” Huang said. “However, I’d like to keep this at my house,” he quipped.
Since 1991, the annual Noyce award has recognized tech and business leaders including Jack Kilby (1995), an inventor of the integrated circuit that paved the way for today’s chips.
Two of Huang’s mentors won Noyce awards: Morris Chang, the founder and former CEO of TSMC, the world’s first and largest chip foundry, in 2008; and John Hennessy, the Alphabet chairman and former Stanford president, in 2018. Huang, his former student, interviewed Hennessy on stage at the 2018 event.
Programming on an Apple II
In an on-stage interview with John Markoff, author and former senior technology writer for The New York Times, Huang shared some of his story and his observations on technology and the industry.
He recalled high school days programming on an Apple II computer, getting his first job as a microprocessor designer at AMD and starting NVIDIA with Chris Malachowsky and Curtis Priem.
“Chris and Curtis are the two brightest engineers I have met … and all of us loved building computers. Success has a lot to do with luck, and part of my luck was meeting them,” he said.
Making Million-x Leaps
Fast-forwarding to today, he shared his vision for accelerated computing with AI in projects like Earth-2, a supercomputer for climate science.
“We will build a digital twin of Earth and put some of the brightest computer scientists on the planet to work on it” to explore and mitigate impacts of climate change, he said. “We could solve some of the problems in climate science in our generation.”
He also expressed optimism about Silicon Valley’s culture of innovation.
“The concept of Silicon Valley doesn’t have to be geographic, we can carry this sensibility all over the world, but we have to be mindful of being humble and recognize we’re not here alone, so we need to be in service to others,” he said.
A Pivotal Role in AI
The Noyce award came two months after TIME Magazine named Huang one of the 100 most influential people of 2021. He was one of seven honored on the iconic weekly magazine’s cover along with U.S. President Joe Biden, Tesla CEO Elon Musk and singer Billie Eilish.
A who’s who of tech luminaries including executives from Adobe, IBM and Zoom shared stories of Huang and NVIDIA’s impact in a video, included below, screened at the event. In it, Andrew Ng, a machine-learning pioneer and entrepreneur, described the pivotal role NVIDIA’s CEO has played in AI.
“A lot of the progress in AI over the last decade would not have been possible if not for Jensen’s visionary leadership,” said Ng, founder and CEO of DeepLearning.AI and Landing AI. “His impact on the semiconductor industry, AI and the world is almost incalculable.”
Manufacturers are bringing product designs to life in a newly immersive world.
Rendermedia, based in the U.K., specializes in immersive solutions for commerce and industry. The company provides clients with tools and applications for photorealistic virtual, augmented and extended reality (collectively known as XR) in areas like product design, training and collaboration.
With NVIDIA RTX graphics and NVIDIA CloudXR, Rendermedia helps businesses get their products in the hands of customers and audiences, allowing them to interact and engage collaboratively on any device, from any location.
Expanding XR Spaces With CloudXR
Previously, Rendermedia could only deliver realistic rendered products to customers through a CG rendered film, which was often time-consuming to create. It also didn’t allow consumers to interact dynamically with the product.
With NVIDIA CloudXR, Rendermedia and its product manufacturing clients can quickly render and create fully interactive simulated products in photographic detail, while also reducing their time to market.
This can be achieved by transforming raw product computer-aided design (CAD) into a realistic digital twin of the product. The digital twin can then be used across the entire organization, from sales and marketing to health and safety teams.
Rendermedia can also use CloudXR to offer organizations the ability to design, market, sell and train different teams and customers around their products in different languages worldwide.
“With both the range of 3D data evolving and devices enabling us to interact with products and environments in scale, this ultimately drives the demands around the complexity and sophistication across products and environments within an organization,” said Rendermedia founder Mark Miles.
Rendermedia customers Airbus and National Grid are using VR experiences to showcase future products and designs in realistic scenarios.
Airbus, which designs, manufactures and sells aerospace products worldwide, has worked with Rendermedia on over 35 virtual experiences. Recently, Rendermedia helped bring Airbus’ vision to life by creating VR experiences that allowed users to experience its newest products in complete context and at scale.
National Grid is an electricity and gas utility company headquartered in the U.K. With the help of Rendermedia, National Grid used photorealistic digital twins of real-life industrial sites for virtual training for employees.
The power of NVIDIA CloudXR and RTX technology allows product manufacturers to visualize designs and 3D models using Rendermedia’s platform with more realism. And they can easily make changes to designs in real time, helping users iterate more often and get to final product designs quicker. CloudXR is cost-efficient and provides common standards for training across every learner.
“CloudXR combined with RTX means that our customers can virtualize any part of their business and access it on any device at scale,” said Miles. “This is especially important in training, where the abundance of platforms and devices that people consume can vary widely. CloudXR means that any training content can be consumed at the same level of detail, so content does not have to be readapted for different devices.”
With NVIDIA CloudXR, Rendermedia can further push the boundaries of photorealistic graphics in immersive environments, all without worrying about delivering to different devices and audiences.
Learn more about NVIDIA CloudXR and how it can enhance workflows.