Recent AI advances enable weather forecasting models that run four to five orders of magnitude faster than traditional numerical methods.
The brightest leaders, researchers and developers in climate science, high performance computing and AI will discuss such technology breakthroughs — and how they can help foster a greener Earth — at NVIDIA GTC.
The virtual conference, running Sept. 19-22, also includes expert talks on other industries being transformed by AI, including healthcare, robotics, graphics and the industrial metaverse.
A dozen sessions will cover how accelerated computing can be used to predict, detect and mitigate climate-related issues. Some can’t-miss speakers include the following:
Editor’s note: This post is part of our weekly In the NVIDIA Studio series, which celebrates featured artists, offers creative tips and tricks, and demonstrates how NVIDIA Studio technology accelerates creative workflows.
A triple threat steps In the NVIDIA Studio this week: a tantalizing trio of talented 3D artists who each reimagined and remastered classic European buildings with individualistic flair.
Robert Lazăr, Dawid Herda and Dalibor Cee have lived unique creative journeys — from their sources of inspiration, to the tricks they employ in their creative workflows, to the insights they’d share with up-and-coming artists.
While their techniques and styles may differ, they share an NVIDIA Studio-powered workflow. GPU acceleration in creative apps gave them the freedom to fast-track their artistry. AI-powered features accelerated by NVIDIA RTX GPUs reduced repetitive, tedious work, giving back valuable time for them to tinker with and perfect their projects.
Romanian Rendering
Lazăr, who also goes by Eurosadboy, is a self-taught 3D artist with 17 years of experience, as well as an esteemed musician who embarks on a new adventure with each piece that he creates.
While exploring his hometown of Bucharest, Lazăr was delightfully overwhelmed by the Union of Romanian Architects building, with its striking fusion of nostalgia and futurism. Fueled by his passion for science fiction, he saw the opportunity to enhance this iconic building with digital art, featuring elements of the past, present and future.
Lazăr first surveyed the building on site to estimate general sizes, then created a moodboard to gather inspiration from his favorite artists.
“Considering that my style trends toward hyperrealism, and given the need for ray tracing in every scene, it was clear the GPU I chose had to be RTX,” Lazăr said.
With his vision in place, Lazăr opened Cinema 4D software and built models to bring the futuristic creation to life. The NVIDIA RTX GPU-accelerated viewport enabled smooth interactivity for these complex 3D shapes while modeling.
He generated metal, stone and glass textures in the free JSplacement Classic software, then imported them into Cinema 4D and applied them to his models. Animated elements were added to create his “space elevator” with rotating disks and unfolding arms.
To ensure the scene was lit identically to the original footage, Lazăr used GPU-accelerated ray tracing in Otoy’s Octane to create an ambient-occlusion effect, achieving photorealistic lighting with lightning speed.
At this stage, Lazăr imported the scene into Adobe After Effects software, then added the digital scene on top of the high-resolution video footage — creating an extraordinarily realistic visual. “The footage was in 4K RAW format, so without the capabilities of the NVIDIA RTX GPU, I wouldn’t have been able to preview in real time — making me spend more time on technical parts and less on creativity,” he said.
Matching colors was critical, the artist added, and thankfully After Effects’ several GPU-accelerated features, including Brightness & Contrast, Change Color and Exposure, helped him get the job done.
Making use of his GeForce RTX 3080 Ti GPU and ASUS ProArt NVIDIA Studio laptop, Lazăr created this work of 3D art faster and more efficiently.
Polish Pride
Dawid Herda, known widely as Graffit, has been an artist for more than a decade. He’s most inspired by his experiences hitchhiking across his home country, Poland.
Visiting Gdańsk, Herda found that the architecture of the city’s 600-year-old maritime crane sparked ideas for artistic transformation. He visualized the crane as a futuristic tower of metal and glass, drawing from the newer glass-fronted buildings that flank the old brick structure.
His workflow takes advantage of NVIDIA Omniverse, a platform for 3D design collaboration and world simulation, free for RTX GPU owners. The open-source, extensible Universal Scene Description file format gave Herda the freedom to work within several 3D apps at once, without having to repeatedly import and export between them. Plus, he shared his creation with fellow artists in real time, without his colleagues requiring advanced hardware.
“All these features make the job of complex design much more efficient, saving me a lot of time and freeing me to focus on creativity,” said Herda.
Herda accessed the Omniverse Connector for Blender to accomplish 3D motion tracking, which is the simulation of live-action camera moves and perspective inside compositing software. From 4K ProRes footage of the crane captured by drone, Herda selected his favorite shots before importing them. He traced the camera movement and mapped perspective in the scene using specific points from the shots.
“You often have to jump between apps, but thanks to NVIDIA Studio, everything becomes faster and smoother,” Herda said.
Then, Herda added his futuristic building variant, which was created and modeled from scratch. The AI denoising feature in the viewport and RTX GPU-accelerated ray tracing gave Herda instant feedback and crisp, beautiful details.
The artist made the foundational 3D model of the crane using simple blocks that were transformed by modeling and detailing each element. He swapped textures accurately in real time as he interacted with the model, achieving the futuristic look without having to wait for iterations of the model to render.
After animating each building shape, Herda quickly exported final frame renders using RTX-accelerated OptiX ray tracing. Then, he imported the project into After Effects, where GPU-accelerated features were used in the composite stage to round out the project.
His creative setup included a home PC equipped with a GeForce RTX 3090 GPU and an ASUS ZenBook Pro Duo NVIDIA Studio laptop with a GeForce RTX 3080 Laptop GPU. This meant Herda could create his photorealistic content anywhere, anytime.
Czech Craft
Dalibor Cee turned a childhood fascination with 3D into a 20-year career. He started working with 3D architectural models before returning home to Prague to specialize in film special effects like fluid simulations, smoke and explosions.
Dalibor also enjoys projection mapping as a way to bring new light and feeling to old structures, such as the astronomical clock on the iconic Orloj building in Prague’s Old Town Square.
Fascinated by the circular elements of the clock, Dalibor reimagined them in his Czech sci-fi-inspired style by creating a lens effect and using shiny, golden elements and crystal shapes.
The artist started in Blender for motion tracking to align his video footage with the 3D building blocks that would make up the main animation. Dalibor then added textures generated using the JSplacement tool. He experimented with colors, materials and masks to alter the glossiness or roughness, emission and specular aspects of each element.
“I use apps that need NVIDIA CUDA and PhysX, and generally all software has some advantage when used with NVIDIA RTX GPUs in 3D,” Dalibor said.
The models were then linked onto curves to be animated for forward, backward and rotating movements — similar to those of an optical zoom lens, creating animation depth. Dalibor achieved this with dramatic speed by using Blender Cycles RTX-accelerated OptiX ray tracing in the viewport.
This kind of work is very time and memory intensive, Dalibor said, but his two GeForce RTX 3090 Ti GPUs allow him to complete extra-large projects without having to waste hours on rendering. Blender’s Cycles engine with RTX-accelerated OptiX ray tracing and AI denoising enabled Dalibor to render the entire project in just 20 minutes — nearly 20x faster than with the CPU alone, according to his testing.
These time savings allowed Dalibor to focus on creating and animating the piece’s hundreds of elements. He combined colors and effects to bring the model to life in exactly the way he’d envisioned.
NVIDIA Studio systems have become essential for the next generation of 3D content creators, who are pushing boundaries to create inspirational, thought-provoking and emotionally intense art.
Studio Success Stories
For a deeper understanding of their workflows, see how Lazăr, Herda and Dalibor brought their creations from concept to completion in their in-depth videos.
In the spirit of learning, the NVIDIA Studio team is posing a challenge for the community to show off personal growth. Participate in the #CreatorsJourney challenge for a chance to be showcased on NVIDIA Studio social media channels.
Entering is easy. Post an older piece of artwork alongside a more recent one to showcase your growth as an artist. Follow and tag NVIDIA Studio on Instagram, Twitter or Facebook, and use the #CreatorsJourney tag to join.
Want inspiration? Try being charged by a two-ton African black rhino.
Early in her career, wildlife biologist Zoe Jewell and her team came across a mother rhino and her calf and carefully moved closer to get a better look.
The protective mother rhino charged, chasing Jewell across the dusty savannah. Eventually, Jewell got a flimsy thorn bush between herself and the rhino. Her heart was racing.
“I thought to myself, ‘There has to be a better way,’” she said.
In the latest example of how researchers like Jewell are using new technology to track animals less invasively, a team of researchers has proposed harnessing high-flying AI-equipped drones, powered by the NVIDIA Jetson edge AI platform, to track the endangered black rhino through the wilds of Namibia.
In a paper published this month in the journal PeerJ, the researchers show the potential of drone-based AI to identify animals in even the remotest areas and provide real-time updates on their status from the air.
For more, read the full paper at https://peerj.com/articles/13779/.
While drones — and technology of just about every kind — have been harnessed to track African wildlife, the proposal promises to help gamekeepers move faster to protect rhinos and other megafauna from poachers.
“We have to be able to stay one step ahead,” said Jewell, co-founder of WildTrack, a global network of biologists and conservationists dedicated to non-invasive wildlife monitoring techniques.
Jewell, who is also WildTrack’s president, has a B.Sc. in Zoology and Physiology, an M.Sc. in Medical Parasitology from the London School of Hygiene & Tropical Medicine and a veterinary medical degree from Cambridge University. She has long sought less invasive ways to track, and protect, endangered species such as the African black rhino.
In addition to Jewell, the paper’s authors include conservation biology and data science specialists at UC Berkeley, the University of Göttingen in Germany, Namibia’s Kuzikus Wildlife Reserve and Duke University.
The stakes are high.
African megafauna have become icons, even as global biodiversity declines.
“Only 5,500 black rhinos stand between this magnificent species, which preceded humans on earth by millions of years, and extinction,” Jewell says.
That’s made them bigger targets for poachers, who sell rhino horns and elephant tusks for huge sums, the paper’s authors report. Rhino horns, for example, reportedly go for as much as $65,000 per kilogram.
To disrupt poaching, wildlife managers must deploy effective protection measures.
This, in turn, depends on getting reliable data fast.
The challenge: many current monitoring technologies are invasive, expensive or impractical.
Satellite monitoring is a potential tool for the biggest animals — such as elephants. But detecting smaller species requires higher resolution imaging.
And the traditional practice of capturing rhinos, attaching a radio collar to the animals and then releasing them can be stressful for humans and rhinos.
It’s even been found to depress the fertility of captured rhinos.
High-flying drones are already being used to study wildlife unobtrusively.
But rhinos most often live in areas with poor wireless networks, so drones can’t stream images back in real time.
As a result, images have to be downloaded when drones return to researchers, who then have to comb through images looking to identify the beasts.
Identifying rhinos instantly onboard a drone and alerting authorities before it lands would ensure a speedy response to poachers.
“You can get a notification out and deploy units to where those animals are straight away,” Jewell said. “You could even protect these animals at night using heat signatures.”
To do this, the paper’s authors propose using an NVIDIA Jetson Xavier NX module onboard a Parrot Anafi drone.
The drone can connect to the relatively poor-quality wireless networks available in areas where rhinos live and deliver notifications whenever the target species are spotted.
To build the drone’s AI, the researchers used a YOLOv5l6 object-detection architecture. They trained it to identify a bounding box for one of five objects of interest in a video frame.
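To make the detection step concrete, here’s a minimal, illustrative sketch of running a YOLOv5 model over a drone frame. The researchers’ rhino-trained weights aren’t public, so this loads stock pretrained weights and uses a hypothetical image filename purely to show the inference flow.

```python
import torch

# Load a YOLOv5 detector from PyTorch Hub. The paper uses the YOLOv5l6
# architecture; these stock COCO weights are a stand-in, since the
# rhino-trained weights are not publicly released.
model = torch.hub.load("ultralytics/yolov5", "yolov5l6", pretrained=True)

# "drone_frame.jpg" is a hypothetical aerial frame captured by the drone.
results = model("drone_frame.jpg")

# Each detection is a bounding box, a confidence score and a class index.
for x1, y1, x2, y2, conf, cls in results.xyxy[0].tolist():
    print(f"class={int(cls)} conf={conf:.2f} box=({x1:.0f}, {y1:.0f}, {x2:.0f}, {y2:.0f})")
```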
Most of the images used for training were gathered in Namibia’s Kuzikus Wildlife Reserve, an area of roughly 100 square kilometers on the edge of the Kalahari desert.
With tourists gone, Jewell reports that her colleagues in Namibia had plenty of time to gather training images for the AI.
The researchers used several technologies to optimize performance and overcome the challenge of small animals in the data.
These techniques included adding images of other species to the AI’s training data, emulating field conditions in which many animals appear together.
They used data augmentation techniques, such as generative adversarial networks, to train the AI on synthetic data, the paper’s authors wrote.
And they also trained the model on a dataset with many kinds of terrain and images taken from different angles and lighting conditions.
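The paper’s exact augmentation pipeline isn’t reproduced here; the following sketch, which varies brightness, rotation, scale and flips while keeping bounding boxes aligned, illustrates the general idea. The Albumentations library and the specific transform settings are assumptions for illustration, and the GAN-generated synthetic data mentioned above isn’t shown.

```python
import albumentations as A

# Illustrative augmentation pipeline for aerial wildlife images:
# vary lighting, orientation and scale to mimic different flight
# altitudes, angles and times of day, while keeping YOLO-format
# bounding boxes in sync with the transformed image.
transform = A.Compose(
    [
        A.RandomBrightnessContrast(p=0.5),
        A.Rotate(limit=30, p=0.5),
        A.RandomScale(scale_limit=0.3, p=0.5),
        A.HorizontalFlip(p=0.5),
    ],
    bbox_params=A.BboxParams(format="yolo", label_fields=["class_labels"]),
)

# Usage (image, bboxes and labels come from the training set):
# augmented = transform(image=image, bboxes=bboxes, class_labels=labels)
```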
Looking at footage of rhinos gathered in the wild, the AI correctly identified black rhinos — the study’s primary target — 81 percent of the time and giraffes 83 percent of the time, they reported.
The next step: putting this system to work in the wild, where wildlife conservationists are already deploying everything from cameras to radio collars to track rhinos.
Many of the techniques combine the latest technology with ancient practices.
Jewell and WildTrack co-founder Sky Alibhai have already created a system, FIT, that uses sophisticated new techniques to analyze animal tracks. The software, initially developed using morphometrics — the quantitative analysis of an animal’s form — in JMP statistical analysis software, now uses the latest AI techniques.
Jewell says that modern science and the ancient art of tracking are much more alike than you might think.
“When you follow a footprint, you’re really recreating the origins of science that shaped humanity,” Jewell said. “You’re deciding who made that footprint, and you’re following a trail to see if you’re correct.”
Jewell and her colleagues are now working to take their work another step forward, to use drones to identify rhino trails in the environment.
“Without even seeing them on the ground we’ll be able to create a map of where they’re going and interacting with each other to help us understand how to best protect them,” Jewell says.
In four talks over two days, senior NVIDIA engineers will describe innovations in accelerated computing for modern data centers and systems at the edge of the network.
Speaking at a virtual Hot Chips event, an annual gathering of processor and system architects, they’ll disclose performance numbers and other technical details for NVIDIA’s first server CPU, the Hopper GPU, the latest version of the NVSwitch interconnect chip and the NVIDIA Jetson Orin system on module (SoM).
The presentations provide fresh insights on how the NVIDIA platform will hit new levels of performance, efficiency, scale and security.
Specifically, the talks demonstrate a design philosophy of innovating across the full stack of chips, systems and software where GPUs, CPUs and DPUs act as peer processors. Together they create a platform that’s already running AI, data analytics and high performance computing jobs inside cloud service providers, supercomputing centers, corporate data centers and autonomous systems.
Inside NVIDIA’s First Server CPU
Data centers require flexible clusters of CPUs, GPUs and other accelerators sharing massive pools of memory to deliver the energy-efficient performance today’s workloads demand.
To meet that need, Jonathon Evans, a distinguished engineer and 15-year veteran at NVIDIA, will describe the NVIDIA NVLink-C2C. It connects CPUs and GPUs at 900 gigabytes per second with 5x the energy efficiency of the existing PCIe Gen 5 standard, thanks to data transfers that consume just 1.3 picojoules per bit.
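As a rough, unofficial check on those figures, energy per bit multiplied by link bandwidth gives the approximate interface power for a link running flat out:

```python
# Back-of-the-envelope arithmetic based on the stated figures
# (not an official NVIDIA power number).
energy_per_bit_joules = 1.3e-12        # 1.3 picojoules per bit
bandwidth_bytes_per_second = 900e9     # 900 GB/s

bits_per_second = bandwidth_bytes_per_second * 8
link_power_watts = energy_per_bit_joules * bits_per_second

print(f"{link_power_watts:.1f} W")     # roughly 9.4 W at full bandwidth
```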
NVLink-C2C connects two CPU chips to create the NVIDIA Grace CPU with 144 Arm Neoverse cores. It’s a processor built to solve the world’s largest computing problems.
For maximum efficiency, the Grace CPU uses LPDDR5X memory. It enables a terabyte per second of memory bandwidth while keeping power consumption for the entire complex to 500 watts.
One Link, Many Uses
NVLink-C2C also links Grace CPU and Hopper GPU chips as memory-sharing peers in the NVIDIA Grace Hopper Superchip, delivering maximum acceleration for performance-hungry jobs such as AI training.
Anyone can build custom chiplets using NVLink-C2C to coherently connect to NVIDIA GPUs, CPUs, DPUs and SoCs, expanding this new class of integrated products. The interconnect will support AMBA CHI and CXL protocols used by Arm and x86 processors, respectively.
To scale at the system level, the new NVIDIA NVSwitch connects multiple servers into one AI supercomputer. It uses NVLink interconnects running at 900 gigabytes per second, more than 7x the bandwidth of PCIe Gen 5.
NVSwitch lets users link 32 NVIDIA DGX H100 systems into an AI supercomputer that delivers an exaflop of peak AI performance.
Alexander Ishii and Ryan Wells, both veteran NVIDIA engineers, will describe how the switch lets users build systems with up to 256 GPUs to tackle demanding workloads like training AI models that have more than 1 trillion parameters.
The switch includes engines that speed data transfers using the NVIDIA Scalable Hierarchical Aggregation Reduction Protocol. SHARP is an in-network computing capability that debuted on NVIDIA Quantum InfiniBand networks. It can double data throughput on communications-intensive AI applications.
Jack Choquette, a senior distinguished engineer with 14 years at the company, will provide a detailed tour of the NVIDIA H100 Tensor Core GPU, aka Hopper.
In addition to using the new interconnects to scale to unprecedented heights, it packs many advanced features that boost the accelerator’s performance, efficiency and security.
Hopper’s new Transformer Engine and upgraded Tensor Cores deliver a 30x speedup compared to the prior generation on AI inference with the world’s largest neural network models. And it employs the world’s first HBM3 memory system to deliver a whopping 3 terabytes per second of memory bandwidth, NVIDIA’s biggest generational increase ever.
Choquette, one of the lead chip designers on the Nintendo 64 console early in his career, will also describe parallel computing techniques underlying some of Hopper’s advances.
Michael Ditty, an architecture manager with a 17-year tenure at the company, will provide new performance specs for NVIDIA Jetson AGX Orin, an engine for edge AI, robotics and advanced autonomous machines.
It integrates 12 Arm Cortex-A78 cores and an NVIDIA Ampere architecture GPU to deliver up to 275 trillion operations per second on AI inference jobs. That’s up to 8x greater performance at 2.3x higher energy efficiency than the prior generation.
The latest production module packs up to 32 gigabytes of memory and is part of a compatible family that scales down to pocket-sized 5W Jetson Nano developer kits.
All the new chips support the NVIDIA software stack that accelerates more than 700 applications and is used by 2.5 million developers.
Based on the CUDA programming model, it includes dozens of NVIDIA SDKs for vertical markets like automotive (DRIVE) and healthcare (Clara), as well as technologies such as recommendation systems (Merlin) and conversational AI (Riva).
The NVIDIA AI platform is available from every major cloud service and system maker.
Editor’s note: This post is a part of our Meet the Omnivore series, which features individual creators and developers who use NVIDIA Omniverse to accelerate their 3D workflows and create virtual worlds.
Imagine taking a selfie and using it to get a moving, talking, customizable 3D avatar of yourself in just seconds.
A new extension for NVIDIA Omniverse, a design collaboration and world simulation platform, enables just that.
Created by developers at software startup in3D, the extension lets people instantly import 3D avatars of themselves into virtual environments using their smartphones. Omniverse Extensions are the core building blocks that let anyone create and extend functions of Omniverse Apps.
The in3D app can now bring people, in their digital forms, into Omniverse. It helps creators build engaging virtual worlds and use these avatars as heroes, actors or spectators in their stories. The app works on any phone with a camera, recreating a user’s full geometry and texture based on a video selfie.
The avatars can even be added into 3D worlds with animations and a customizable wardrobe.
In3D is a member of NVIDIA Inception, a free, global program that nurtures cutting-edge startups.
Simple and Scalable Avatar Creation
Creating a photorealistic 3D avatar has traditionally taken up to several months, with costs reaching up to tens of thousands of dollars. Photogrammetry, a standard approach to creating 3D references of humans from images, is extremely costly, requires a digital studio and lacks scalability.
With in3D, the process of creating 3D avatars is simple and scalable. The app understands the geometry, texture, depth and various vectors of a person via a mobile scan — and uses this information to replicate lifelike detail and create predictive animations for avatars.
Dmitry Ulyanov, CEO of in3D, which is based in Tel Aviv, Israel, said the app captures even small details with centimeter-grade accuracy and automatically fixes lighting. This allows for precise head geometry from a single selfie, as well as estimation of a user’s exact body shape.
For creators building 3D worlds, in3D software can save countless hours, increase productivity and result in substantial cost savings, Ulyanov said.
“Manually creating one avatar can take up to months,” he added. “With in3D’s scanning app and software development kit, a user can scan and upload 21,000 people with a single GPU and mobile phone in the same amount of time.”
Connecting to Omniverse
Ulyanov said that using in3D’s extension with NVIDIA Omniverse Avatar Cloud Engine (ACE) opens up many possibilities for avatar building, as users can easily customize imported avatars from in3D to engage and interact with their virtual worlds — in real time and at scale.
In3D uses Universal Scene Description (USD), an open-source, extensible file format, to seamlessly integrate its high-fidelity avatars into Omniverse. All avatar data is contained in a USD file, removing the need for complex shaders or embeddings. And bringing the avatars into Omniverse only requires a simple drag and drop.
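As a rough illustration of how little glue that drag and drop implies, a scanned avatar exported as USD can simply be referenced into an existing stage with the standard USD Python API; the file and prim names below are hypothetical.

```python
from pxr import Usd

# Open (or create) a working stage and reference a hypothetical
# in3D-exported avatar into it. All geometry, material and skeleton
# data travel inside the referenced USD file itself.
stage = Usd.Stage.CreateNew("my_scene.usda")
avatar = stage.DefinePrim("/World/Avatar")
avatar.GetReferences().AddReference("avatar_from_in3d.usd")
stage.GetRootLayer().Save()
```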
Once imported into Omniverse via USD, the avatars can be used in apps like Omniverse Create and Audio2Face. Users have a complete toolset within Omniverse to support holistic content creation, whether animating avatars’ bodies with the retargeting tool or crafting their facial expressions with Audio2Face.
To build the Omniverse Extension, in3D used Omniverse Kit and followed the development flow in Visual Studio Code (VS Code). Being able to put a breakpoint anywhere in the code made VS Code an easy-to-use, convenient, out-of-the-box solution for connecting in3D to Omniverse, Ulyanov said.
“The ability to centralize our SDK alongside other software for 3D developers is game changing,” he said. “With our Omniverse Extension now available, we’re looking to expand the base of developers who use our avatars.”
“Having the ability to upload our SDK and connect it with all the tools that 3D developers use has made in3D a tangible solution to deploy across all 3D development environments,” said Sergei Sherman, chief marketing officer at in3D. “This was something we wouldn’t have been able to achieve on our own in such a short amount of time.”
Learn how to connect and create virtual worlds with Omniverse at NVIDIA GTC, the design and simulation conference for the era of AI and the metaverse, running online Sept. 19-22. Registration is free and offers access to dozens of sessions and special events.
Developers can use Omniverse Code to create their own Omniverse Extension for the inaugural #ExtendOmniverse contest by Friday, Sept. 9, at 5 p.m. PT, for a chance to win an NVIDIA RTX GPU. The winners will be announced in the NVIDIA Omniverse User Group at GTC.
Find additional documentation and tutorials in the Omniverse Resource Center, which details how developers like Ulyanov can build custom USD-based applications and extensions for the platform.
When Rachel Carpenter and Joseph French founded Intrinio a decade ago, the fintech revolution had only just begun. But they saw an opportunity to apply machine learning to vast amounts of financial filings to create an alternative data provider among the giants.
The startup, based in St. Petersburg, Fla., delivers financial data to hedge funds, proprietary trading shops, retail brokers, fintech developers and others. Intrinio runs machine learning on AWS instances of NVIDIA GPUs to parse mountains of publicly available financial data.
Carpenter and French realized early that such data was sold for a premium, and that machine learning offered a way to sort through free financial filings to deliver new products.
The company offers information on equities, options, estimates and ETFs — as well as environmental, social and governance data. Its most popular product is equities-fundamentals data.
Intrinio has taken an unbundling approach to traditional product offerings, creating à la carte data services now used in some 450 fintech applications.
“GPUs have helped us unlock data that is otherwise expensive and sourced manually,” said Carpenter, the company’s CEO. “We built a lot of technology with the idea that we wanted to unlock data for innovators in the financial services space.”
Intrinio is a member of NVIDIA Inception, a free, global program designed to support cutting-edge startups.
Partnering With Fintechs
With the lower overhead enabled by GPU-driven machine learning, Intrinio can deliver financial data products at lower prices that appeal to startups.
“We have a much smaller and agile team, because a small team — in conjunction with NVIDIA GPUs, TensorFlow, PyTorch and everything else that we’re using — makes our work a lot more automated,” she said.
Its clients include fintech players like Robinhood, FTX, Domain Money, MarketBeat and Alpaca. Another, Aiera, transcribes earnings calls live with its own automated-speech-recognition models driven by NVIDIA GPUs, and relies on Intrinio for financial data.
“Our use of GPUs made our data packages affordable and easy to use for Aiera, so the company is integrating Intrinio financial data into its platform,” said Carpenter.
Aiera needed financial-data-cleansing services for consistent information on company earnings and more. Harnessing Intrinio’s application programming interface, Aiera can access normalized, split-second company financial data.
“GPUs are a critical component of Intrinio’s underlying technology — without them, we wouldn’t have been able to apply machine learning techniques to the cleansing and standardization of fundamental and financial statement data,” said Carpenter.
Servicing Equities, Options, ESG
For equities pricing, Intrinio’s machine learning technology can sort out pricing discrepancies in milliseconds. This results in substantially higher data quality and reliability for users, according to Carpenter. With equity fundamentals, Intrinio automates several key processes, such as entity recognition. Intrinio uses machine learning to identify company names or other key information from unstructured text to ensure the correct categorization of data.
In other cases, Intrinio applies machine learning to reconcile line items from financial statements into standardized buckets so that, for example, you can compare revenue across companies cleanly.
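Intrinio’s models and tag taxonomy aren’t public, so the following is only a toy illustration of what standardization means in practice: reported line-item labels from filings get mapped into common buckets so figures are comparable across companies. The real system uses machine learning rather than a lookup table.

```python
# Toy illustration of line-item standardization. Intrinio's actual
# approach is a learned classifier; this hard-coded mapping just shows
# the input/output shape of the problem.
STANDARD_TAGS = {
    "net sales": "revenue",
    "total revenues": "revenue",
    "cost of sales": "cost_of_revenue",
    "cost of goods sold": "cost_of_revenue",
}

def standardize(line_item: str) -> str:
    """Map a reported line-item label to a standardized tag."""
    return STANDARD_TAGS.get(line_item.strip().lower(), "unmapped")

print(standardize("Net Sales"))           # -> revenue
print(standardize("Cost of Goods Sold"))  # -> cost_of_revenue
```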
The use of GPUs and machine learning in both of these cases results in higher-quality data than a manual approach. According to the company, using Intrinio decreases the number of errors requiring corrections by 88% compared with manual sorting.
For options, Intrinio takes the raw Options Price Reporting Authority (OPRA) feed and applies cutting-edge filtering, algorithms and server architecture to deliver its options API.
ESG data is another area of growing interest. As retail investors become more conscious of the environment and institutions feel pressure to invest responsibly, both want to see how companies stack up.
As regulation around ESG disclosures solidifies, Intrinio says it will be able to use its automated XBRL-standardization technology to unlock these data sets for their users. XBRL is a standardized format of digital information exchange for business.
“On the retail side, app developers need to show this information to their users because people want to see it — making that data accessible is critical to the evolution of the financial industry,” said Carpenter.
Register free for GTC, running online Sept. 19-22, to attend sessions with NVIDIA and dozens of industry leaders. View the financial services agenda for the conference.
AI and the metaverse are revolutionizing every aspect of the way we live, work and play — including how we move.
Leaders in the automotive and technology industries will come together at NVIDIA GTC to discuss the newest breakthroughs driving intelligent vehicles, whether in the real world or in simulation.
The virtual conference, which runs from Sept. 19-22, will feature a slate of in-depth sessions on end-to-end software-defined vehicle development, as well as advances in robotics, healthcare, high performance computing and more. And it’s all free to attend.
Headlining GTC is NVIDIA founder and CEO Jensen Huang, who will present the latest in AI and NVIDIA Omniverse in the keynote address on Tuesday, Sept. 20, at 8 a.m. PT.
Conference attendees will have plenty of networking opportunities, and they can learn from NVIDIA experts and industry luminaries about AV development, from the cloud to the car.
Here’s a brief look at what to expect during GTC:
Meet the Trailblazers
Every stage of the automotive pipeline is being transformed by AI and metaverse technologies, from manufacturing and design, to autonomous driving, to the passenger experience.
Speakers from each of these areas will share how they’re harnessing AI innovations to accelerate software-defined transportation.
Michael Bell, senior vice president of Digital at Lucid Motors, walks through the development of the Lucid DreamDrive Pro advanced driver assistance system, and how the company continuously deploys new features for a cutting-edge driving experience.
Yuli Bai, head of AI Platform at NIO, outlines the AI infrastructure that the automaker is using to develop intelligent, software-defined vehicles running on the NVIDIA DRIVE Orin compute platform.
Apeksha Kumavat, chief engineer and co-founder at Gatik, explains how its autonomous commercial-delivery vehicles are helping the retail industry adapt to rapidly changing consumer demands.
Dennis Nobelius, chief operating officer at Polestar, describes how the performance electric vehicle maker is developing AI-powered features geared toward the human driver, while prioritizing long-term environmental sustainability.
Don’t miss additional sessions from BMW, Mercedes-Benz and Waabi covering manufacturing, AI research and more.
Get the Inside Track on DRIVE Development
Learn about the latest NVIDIA DRIVE technologies directly from the minds behind their creation.
NVIDIA DRIVE Developer Day consists of a series of deep-dive sessions on building safe and robust autonomous vehicles. Led by the NVIDIA engineering team, the talks will highlight the newest DRIVE features and discuss how to apply them to AV development.
Topics include:
NVIDIA DRIVE product roadmap
Intelligent in-vehicle infotainment
Data center development
Synthetic data generation for testing and validation
All of this virtual content is available to GTC attendees — register for free today to see the technologies shaping the intelligent future of transportation.
Cameras have been deployed in factories for over a decade — so why, Franz Tschimben wondered, hasn’t automated visual inspection yet become the worldwide standard?
This question motivated Tschimben and his colleagues to found Covision Quality, an AI-based visual-inspection software startup that uses NVIDIA technology to transform end-of-line defect detection for the manufacturing industry.
“The simple answer is that these systems are hard to scale,” said Tschimben, the northern Italy-based company’s CEO. “Material defects, like burrs, holes or scratches, have varying geometric shapes and colors that make identifying them cumbersome. That meant quality-control specialists had to program inspection systems by hand to fine-tune their defect parameters.”
Covision’s software allows users to train AI models for visual inspection without needing to code. It quadruples accuracy for defect detection and reduces false-negative rates by up to 90% compared with traditional rule-based methods, according to Tschimben.
The software relies on unsupervised machine learning trained on NVIDIA RTX A5000 GPUs. With this technique, the AI teaches itself, in just one hour and from hundreds of example images, what qualifies as a defect for a specific customer. It removes the extensive labeling of thousands of images that’s typically required for a supervised learning pipeline.
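Covision doesn’t disclose its model architecture. One common unsupervised pattern for this kind of task is reconstruction-based anomaly detection: train an autoencoder only on defect-free parts, then flag images it reconstructs poorly. The sketch below is purely illustrative of that pattern, not Covision’s implementation.

```python
import torch
from torch import nn

# Minimal convolutional autoencoder: trained only on images of good
# parts, it learns to reconstruct "normal" appearance. Defects it has
# never seen reconstruct poorly, so a high reconstruction error at
# inference time flags a likely defect.
class DefectAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 3, stride=2, padding=1, output_padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def is_defective(model, image, threshold=0.01):
    """Flag an image as defective if its reconstruction error exceeds a threshold."""
    with torch.no_grad():
        error = torch.mean((model(image) - image) ** 2).item()
    return error > threshold
```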
The startup is a member of NVIDIA Metropolis — a partner ecosystem centered on vision AI that includes a suite of GPU-accelerated software development kits, pretrained models and the TAO toolkit to supercharge a range of automation applications. Covision is also part of NVIDIA Inception, a free, global program that nurtures cutting-edge startups.
In June, Covision was chosen from hundreds of emerging companies as the winner of a startup award at Automate, a flagship conference on all things automation.
Reducing Pseudo-Scrap Rates
In manufacturing, the pseudo-scrap rate — or the frequency at which products are falsely identified as defective — is a key indicator of a visual-inspection system’s efficiency.
Covision’s software, which is hardware agnostic, reduces pseudo-scrap rates by up to 90%, according to Tschimben.
As an item passes through a production line, a camera captures an image of it, and Covision’s real-time AI model analyzes the image. The model then sends the result to a simple user interface that displays image frames: green for good pieces and red for defective ones.
For GKN Powder Metallurgy, a global producer of 13 million metal parts each day, the above steps can occur in as little as 200 milliseconds per piece — enabled by Covision software and NVIDIA GPUs deployed at the production line.
Two to six cameras usually inspect one production line at a factory, Tschimben said. And one NVIDIA RTX A5000 GPU on premises can process the images from four production lines in real time.
“NVIDIA GPUs are robust and reliable,” he added. “The TensorRT SDK and CUDA toolkit enable our developers to use the latest resources to build our platform, and the Metropolis program helps us with go-to-market strategy — NVIDIA is a one-stop solution for us.”
Plus, being an Inception member gives Covision access to free credits for NVIDIA Deep Learning Institute courses, which Tschimben said are “very helpful hands-on resources” for the company’s engineers to stay up to date on the latest NVIDIA tech.
Increasing Efficiency, Sustainability in Industrial Production
In addition to identifying defective pieces at production lines, Covision software offers a management panel that displays AI-based data analyses of improvements in a production site’s quality of outputs over time — and more.
“It can show, for example, which site out of a company’s many across the world is producing the best metal pieces with the highest production-line uptime, or which production line within a factory needs attention at a given moment,” Tschimben said.
This feature can help managers make high-level decisions to optimize factory efficiency, globally.
“There’s also a sustainability factor,” Tschimben said. “Companies want to reduce waste. Our software reduces production inefficiencies, increasing sustainability and making the work more streamlined.”
Reducing pseudo-scrap rates using Covision software means that companies can produce materials at higher efficiency and profitability levels, and ultimately waste less.
Covision software is deployed at production sites across the U.S. and Europe for customers including Alupress Group and Aluflexpack, in addition to GKN Powder Metallurgy.
Class is in session this GFN Thursday as GeForce NOW makes the up-grade with support for higher resolutions and frame rates in Chrome browser on PC. It’s the easiest way to spice up a boring study session.
When the lecture is over, dive into the six games joining the GeForce NOW library this week, where new adventure always awaits.
The Perfect Study Break
All work and no play isn’t the GeForce NOW way. No one should be away from their games, even if they’re going back to school. GeForce NOW streams the best PC games across nearly all devices, including low-powered PCs with a Chrome or Edge browser.
RTX 3080 members can now level up their browser gameplay at up to 1440p and 120 frames per second. No app install is required — just open a Chrome or Edge browser on PC, go to play.geforcenow.com, select the new resolution and refresh rate from the GeForce NOW Settings menu, and jump into games in seconds, with no downloads and less friction.
It’s never been easier to explore the more than 1,300 titles in the GeForce NOW library. Have some downtime during lab work? Sneak in a round of Apex Legends. Need a break from a boring textbook? Take a trip to Teyvat in Genshin Impact.
Stay connected with friends for multiplayer — like in Path of Exile’s latest expansion, “Lake of Kalandra” — so even if squadmates are making their next moves at different schools, they can stick together and get into the gaming action.
Mapping the immune system could lead to the creation of drugs that help our bodies win the fight against cancer and other diseases. That’s the big idea behind immunotherapy. The problem: the immune system is incredibly complex.
Enter Immunai, a biotech company that’s using cutting-edge genomics and machine learning technology to map the human immune system and develop new immunotherapies against cancer and autoimmune diseases.
On this episode of NVIDIA’s AI Podcast, host Noah Kravitz spoke with Luis Voloch, co-founder and CTO of Immunai, about tackling the challenges of the immune system with a machine learning and data science mindset.
It may seem intuitive that AI and deep learning can speed up workflows — including novel drug discovery, a typically years-long and several-billion-dollar endeavor. However, there is a dearth of recent research reviewing how accelerated computing can impact the process. Professors Artem Cherkasov and Olexandr Isayev discuss how GPUs can help democratize drug discovery.
Is it possible to manipulate things with your mind? Possibly. University of Minnesota postdoctoral researcher Jules Anh Tuan Nguyen discusses allowing amputees to control their prosthetic limbs with their thoughts, using neural decoders and deep learning.
Studying endangered species can be difficult, as they’re elusive, and the act of observing them can disrupt their lives. Sifei Liu, a senior research scientist at NVIDIA, discusses how scientists can avoid these pitfalls by studying AI-generated 3D representations of these endangered species.
Subscribe to the AI Podcast: Now Available on Amazon Music