An AI-Enabled Drone Could Soon Become Every Rhino Poacher’s… Horn Enemy

Want inspiration? Try being charged by a two-ton African black rhino.

Early in her career, wildlife biologist Zoe Jewell and her team came across a mother rhino and her calf and carefully moved closer to get a better look.

The protective mother rhino charged, chasing Jewell across the dusty savannah. Eventually, Jewell got a flimsy thorn bush between herself and the rhino. Her heart was racing.

“I thought to myself, ‘There has to be a better way,’” she said.

In the latest example of how researchers like Jewell are using new technologies to track animals less invasively, a team has proposed harnessing high-flying, AI-equipped drones powered by the NVIDIA Jetson edge AI platform to track the endangered black rhino through the wilds of Namibia.

In a paper published this month in the journal PeerJ, the researchers show the potential of drone-based AI to identify animals in even the remotest areas and provide real-time updates on their status from the air.

For more, read the full paper at https://peerj.com/articles/13779/.

While drones — and technology of just about every kind — have been harnessed to track African wildlife, the proposal promises to help gamekeepers move faster to protect rhinos and other megafauna from poachers.

“We have to be able to stay one step ahead,” said Jewell, co-founder of WildTrack, a global network of biologists and conservationists dedicated to non-invasive wildlife monitoring techniques.

Jewell, WildTrack’s president, has a B.Sc. in zoology and physiology, an M.Sc. in medical parasitology from the London School of Hygiene & Tropical Medicine and a veterinary medical degree from Cambridge University. She has long sought less invasive ways to track, and protect, endangered species such as the African black rhino.

In addition to Jewell, the paper’s authors include conservation biology and data science specialists at UC Berkeley, the University of Göttingen in Germany, Namibia’s Kuzikus Wildlife Reserve and Duke University.

The stakes are high.

African megafauna have become icons, even as global biodiversity declines.

“Only 5,500 black rhinos stand between this magnificent species, which preceded humans on earth by millions of years, and extinction,” Jewell says.

That’s made them bigger targets for poachers, who sell rhino horns and elephant tusks for huge sums, the paper’s authors report. Rhino horns, for example, reportedly go for as much as $65,000 per kilogram.

To disrupt poaching, wildlife managers must deploy effective protection measures.

This, in turn, depends on getting reliable data fast.

The challenge: many current monitoring technologies are invasive, expensive or impractical.

Satellite monitoring is a potential tool for the biggest animals — such as elephants. But detecting smaller species requires higher resolution imaging.

And the traditional practice of capturing rhinos, attaching a radio collar to the animals and then releasing them can be stressful for humans and rhinos.

Above it all: Observing rhinos from above leaves the animals undisturbed, while letting friendly humans know of any threats. Image credit: WildTrack.

It’s even been found to depress the fertility of captured rhinos.

High-flying drones are already being used to study wildlife unobtrusively.

But rhinos most often live in areas with poor wireless networks, so drones can’t stream images back in real-time.

As a result, images have to be downloaded when drones return to researchers, who then comb through them to identify the beasts.

Identifying rhinos instantly onboard a drone and alerting authorities before it lands would ensure a speedy response to poachers.

“You can get a notification out and deploy units to where those animals are straight away,” Jewell said. “You could even protect these animals at night using heat signatures.”

To do this, the paper’s authors propose using an NVIDIA Jetson Xavier NX module onboard a Parrot Anafi drone.

The drone can connect to the relatively poor-quality wireless networks available in areas where rhinos live and deliver notifications whenever the target species are spotted.

To build the drone’s AI, the researchers used a YOLOv5l6 object-detection architecture. They trained it to identify a bounding box for one of five objects of interest in a video frame.
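
The paper’s training code isn’t reproduced here, but running a trained YOLOv5l6 checkpoint follows the standard Ultralytics PyTorch Hub workflow. Below is a minimal sketch; the checkpoint path and image file are hypothetical placeholders.

```python
import torch

# "rhino_best.pt" is a hypothetical fine-tuned YOLOv5l6 checkpoint;
# stock pretrained weights load with ("ultralytics/yolov5", "yolov5l6").
model = torch.hub.load("ultralytics/yolov5", "custom", path="rhino_best.pt")
model.conf = 0.5  # confidence threshold for reported detections

results = model("aerial_frame.jpg")  # one frame from the drone's video feed

# results.xyxy[0] is an (N, 6) tensor: x1, y1, x2, y2, confidence, class id.
for *box, conf, cls in results.xyxy[0].tolist():
    print(f"{model.names[int(cls)]}: {conf:.2f} at {[round(v) for v in box]}")
```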

Most of the images used for training were gathered in Namibia’s Kuzikus Wildlife Reserve, an area of roughly 100 square kilometers on the edge of the Kalahari desert.

Mother knows beast (mode): African black rhinos are known to be protective of their young. Image credit: WildTrack.

With tourists gone, Jewell reports that her colleagues in Namibia had plenty of time to gather training images for the AI.

The researchers used several techniques to optimize performance and overcome the challenge of detecting small animals in the data.

These included adding images of other species to the AI’s training data, emulating field conditions where many animals appear together.

They used data augmentation techniques, such as generative adversarial networks, to train the AI on synthetic data, the paper’s authors wrote.

They also trained the model on a dataset spanning many kinds of terrain, with images captured from different angles and under varied lighting conditions.
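
A pipeline in that spirit can be sketched with the Albumentations library; the specific transforms and parameters below are illustrative, not those from the paper.

```python
import albumentations as A
import numpy as np

# Vary lighting, orientation and scale so the detector sees the kind of
# diversity aerial footage produces; bbox_params keeps YOLO boxes in sync.
transform = A.Compose(
    [
        A.RandomBrightnessContrast(p=0.5),
        A.Rotate(limit=45, p=0.5),
        A.RandomScale(scale_limit=0.3, p=0.5),
        A.HorizontalFlip(p=0.5),
    ],
    bbox_params=A.BboxParams(format="yolo", label_fields=["class_labels"]),
)

# Dummy inputs for illustration: a blank frame with one normalized YOLO box.
image = np.zeros((640, 640, 3), dtype=np.uint8)
bboxes = [(0.5, 0.5, 0.10, 0.08)]
class_labels = ["black_rhino"]

augmented = transform(image=image, bboxes=bboxes, class_labels=class_labels)
aug_image, aug_boxes = augmented["image"], augmented["bboxes"]
```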

Looking at footage of rhinos gathered in the wild, the AI correctly identified black rhinos — the study’s primary target — 81 percent of the time and giraffes 83 percent of the time, they reported.

The next step: putting this system to work in the wild, where wildlife conservationists are already deploying everything from cameras to radio collars to track rhinos.

Many of the techniques combine the latest technology with ancient practices.

Jewell and WildTrack co-founder Sky Alibhai have already created a system, FIT, that uses sophisticated techniques to analyze animal tracks. The software, initially developed using morphometrics — the quantitative analysis of an animal’s form — in JMP statistical analysis software, now uses the latest AI techniques.

Jewell says that modern science and the ancient art of tracking are much more alike than you might think.

“When you follow a footprint, you’re really recreating the origins of science that shaped humanity,” Jewell said. “You’re deciding who made that footprint, and you’re following a trail to see if you’re correct.”

Jewell and her colleagues are now working to take the research a step further, using drones to identify rhino trails in the environment.

“Without even seeing them on the ground we’ll be able to create a map of where they’re going and interacting with each other to help us understand how to best protect them,” Jewell says.

All images courtesy of WildTrack.


NVIDIA to Share New Details on Grace CPU, Hopper GPU, NVLink Switch, Jetson Orin Module at Hot Chips

In four talks over two days, senior NVIDIA engineers will describe innovations in accelerated computing for modern data centers and systems at the edge of the network.

Speaking at a virtual Hot Chips event, an annual gathering of processor and system architects, they’ll disclose performance numbers and other technical details for NVIDIA’s first server CPU, the Hopper GPU, the latest version of the NVSwitch interconnect chip and the NVIDIA Jetson Orin system on module (SoM).

The presentations provide fresh insights on how the NVIDIA platform will hit new levels of performance, efficiency, scale and security.

Specifically, the talks demonstrate a design philosophy of innovating across the full stack of chips, systems and software where GPUs, CPUs and DPUs act as peer processors. Together they create a platform that’s already running AI, data analytics and high performance computing jobs inside cloud service providers, supercomputing centers, corporate data centers and autonomous systems.

Inside NVIDIA’s First Server CPU

Data centers require flexible clusters of CPUs, GPUs and other accelerators sharing massive pools of memory to deliver the energy-efficient performance today’s workloads demand.

To meet that need, Jonathon Evans, a distinguished engineer and 15-year veteran at NVIDIA, will describe the NVIDIA NVLink-C2C. It connects CPUs and GPUs at 900 gigabytes per second with 5x the energy efficiency of the existing PCIe Gen 5 standard, thanks to data transfers that consume just 1.3 picojoules per bit.
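
Those two figures are consistent: at 1.3 picojoules per bit, sustaining 900 gigabytes per second costs only single-digit watts. A quick back-of-the-envelope check:

```python
# Back-of-the-envelope power draw for an NVLink-C2C-class link.
bandwidth_bytes_per_s = 900e9   # 900 GB/s
energy_per_bit = 1.3e-12        # 1.3 pJ/bit

power_watts = bandwidth_bytes_per_s * 8 * energy_per_bit
print(f"{power_watts:.2f} W")   # ~9.36 W at full throughput
```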

NVLink-C2C connects two CPU chips to create the NVIDIA Grace CPU with 144 Arm Neoverse cores. It’s a processor built to solve the world’s largest computing problems.

For maximum efficiency, the Grace CPU uses LPDDR5X memory. It enables a terabyte per second of memory bandwidth while keeping power consumption for the entire complex to 500 watts.

One Link, Many Uses

NVLink-C2C also links Grace CPU and Hopper GPU chips as memory-sharing peers in the NVIDIA Grace Hopper Superchip, delivering maximum acceleration for performance-hungry jobs such as AI training.

Anyone can build custom chiplets using NVLink-C2C to coherently connect to NVIDIA GPUs, CPUs, DPUs and SoCs, expanding this new class of integrated products. The interconnect will support AMBA CHI and CXL protocols used by Arm and x86 processors, respectively.

First memory benchmarks for Grace and Grace Hopper.

To scale at the system level, the new NVIDIA NVSwitch connects multiple servers into one AI supercomputer. It uses NVLink interconnects running at 900 gigabytes per second, more than 7x the bandwidth of PCIe Gen 5.

NVSwitch lets users link 32 NVIDIA DGX H100 systems into an AI supercomputer that delivers an exaflop of peak AI performance.

Alexander Ishii and Ryan Wells, both veteran NVIDIA engineers, will describe how the switch lets users build systems with up to 256 GPUs to tackle demanding workloads like training AI models that have more than 1 trillion parameters.

The switch includes engines that speed data transfers using the NVIDIA Scalable Hierarchical Aggregation Reduction Protocol. SHARP is an in-network computing capability that debuted on NVIDIA Quantum InfiniBand networks. It can double data throughput on communications-intensive AI applications.

NVSwitch systems enable exaflop-class AI supercomputers.

Jack Choquette, a senior distinguished engineer with 14 years at the company, will provide a detailed tour of the NVIDIA H100 Tensor Core GPU, aka Hopper.

In addition to using the new interconnects to scale to unprecedented heights, it packs many advanced features that boost the accelerator’s performance, efficiency and security.

Hopper’s new Transformer Engine and upgraded Tensor Cores deliver a 30x speedup compared to the prior generation on AI inference with the world’s largest neural network models. And it employs the world’s first HBM3 memory system to deliver a whopping 3 terabytes per second of memory bandwidth, NVIDIA’s biggest generational increase ever.

Choquette, one of the lead chip designers on the Nintendo 64 console early in his career, will also walk through Hopper’s other new features and describe parallel computing techniques underlying some of its advances.

Michael Ditty, an architecture manager with a 17-year tenure at the company, will provide new performance specs for NVIDIA Jetson AGX Orin, an engine for edge AI, robotics and advanced autonomous machines.

It integrates 12 Arm Cortex-A78AE cores and an NVIDIA Ampere architecture GPU to deliver up to 275 trillion operations per second on AI inference jobs. That’s up to 8x greater performance at 2.3x higher energy efficiency than the prior generation.

The latest production module packs up to 32 gigabytes of memory and is part of a compatible family that scales down to pocket-sized 5W Jetson Nano developer kits.

Performance benchmarks for NVIDIA Orin.

All the new chips support the NVIDIA software stack that accelerates more than 700 applications and is used by 2.5 million developers.

Based on the CUDA programming model, it includes dozens of NVIDIA SDKs for vertical markets like automotive (DRIVE) and healthcare (Clara), as well as technologies such as recommendation systems (Merlin) and conversational AI (Riva).

The NVIDIA AI platform is available from every major cloud service and system maker.


Meet the Omnivore: Startup in3D Turns Selfies Into Talking, Dancing Avatars With NVIDIA Omniverse

Editor’s note: This post is a part of our Meet the Omnivore series, which features individual creators and developers who use NVIDIA Omniverse to accelerate their 3D workflows and create virtual worlds.

Imagine taking a selfie and using it to get a moving, talking, customizable 3D avatar of yourself in just seconds.

A new extension for NVIDIA Omniverse, a design collaboration and world simulation platform, enables just that.

Created by developers at software startup in3D, the extension lets people instantly import 3D avatars of themselves into virtual environments using their smartphones. Omniverse Extensions are the core building blocks that let anyone create and extend functions of Omniverse Apps.

The in3D app can now bring people, in their digital forms, into Omniverse. It helps creators build engaging virtual worlds and use these avatars as heroes, actors or spectators in their stories. The app works on any phone with a camera, recreating a user’s full geometry and texture based on a video selfie.

The avatars can even be added into 3D worlds with animations and a customizable wardrobe.

In3D is a member of NVIDIA Inception, a free, global program that nurtures cutting-edge startups.

Simple and Scalable Avatar Creation

Creating a photorealistic 3D avatar has traditionally taken as long as several months, with costs reaching tens of thousands of dollars. Photogrammetry, a standard approach to creating 3D references of humans from images, is extremely costly, requires a digital studio and lacks scalability.

With in3D, the process of creating 3D avatars is simple and scalable. The app understands the geometry, texture, depth and various vectors of a person via a mobile scan — and uses this information to replicate lifelike detail and create predictive animations for avatars.

Dmitry Ulyanov, CEO of in3D, which is based in Tel Aviv, Israel, said the app captures even small details with centimeter-grade accuracy and automatically fixes lighting. This allows for precise head geometry from a single selfie, as well as estimation of a user’s exact body shape.

For creators building 3D worlds, in3D software can save countless hours, increase productivity and result in substantial cost savings, Ulyanov said.

“Manually creating one avatar can take up to months,” he added. “With in3D’s scanning app and software development kit, a user can scan and upload 21,000 people with a single GPU and mobile phone in the same amount of time.”

Connecting to Omniverse

Ulyanov said that using in3D’s extension with NVIDIA Omniverse Avatar Cloud Engine (ACE) opens up many possibilities for avatar building, as users can easily customize imported avatars from in3D to engage and interact with their virtual worlds — in real time and at scale.

In3D uses Universal Scene Description (USD), an open-source, extensible file format, to seamlessly integrate its high-fidelity avatars into Omniverse. All avatar data is contained in a USD file, removing the need for complex shaders or embeddings. And bringing the avatars into Omniverse only requires a simple drag and drop.
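
The drag-and-drop path needs no code, but the same composition can be expressed with the USD Python API. In this minimal sketch, the file names are hypothetical stand-ins for an exported in3D avatar:

```python
from pxr import Usd, UsdGeom

# Create a stage and pull in the avatar as a reference; because all of the
# avatar's data lives in one USD file, a single reference is enough.
stage = Usd.Stage.CreateNew("scene.usda")
UsdGeom.Xform.Define(stage, "/World")

avatar = stage.DefinePrim("/World/Avatar")
avatar.GetReferences().AddReference("avatar.usd")  # hypothetical in3D export

stage.Save()
```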

Once imported into Omniverse via USD, the avatars can be used in apps like Omniverse Create and Audio2Face. Users have a complete toolset within Omniverse to support holistic content creation, whether animating avatars’ bodies with the retargeting tool or crafting their facial expressions with Audio2Face.

To build the Omniverse Extension, in3D used Omniverse Kit, following the development flow in Visual Studio Code. Being able to set a breakpoint anywhere in the code made VS Code an easy-to-use, convenient, out-of-the-box solution for connecting in3D to Omniverse, Ulyanov said.
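
Kit extensions follow a small Python entry-point pattern. A skeleton looks roughly like the sketch below; the class name and startup logic are illustrative, not in3D’s actual code:

```python
import omni.ext

class In3DConnectorExtension(omni.ext.IExt):  # hypothetical class name
    """Kit discovers this class from the extension's config and
    drives it through the lifecycle hooks below."""

    def on_startup(self, ext_id):
        print(f"[{ext_id}] starting up")
        # Typical place to build UI panels, register menus or start services.

    def on_shutdown(self):
        print("shutting down")
        # Release anything created in on_startup.
```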

“The ability to centralize our SDK alongside other software for 3D developers is game changing,” he said. “With our Omniverse Extension now available, we’re looking to expand the base of developers who use our avatars.”

“Having the ability to upload our SDK and connect it with all the tools that 3D developers use has made in3D a tangible solution to deploy across all 3D development environments,” said Sergei Sherman, chief marketing officer at in3D. “This was something we wouldn’t have been able to achieve on our own in such a short amount of time.”

Join In on the Creation

Creators and developers across the world can download NVIDIA Omniverse for free, and enterprise teams can use the platform for their 3D projects.

Learn how to connect and create virtual worlds with Omniverse at NVIDIA GTC, the design and simulation conference for the era of AI and the metaverse, running online Sept. 19-22. Registration is free and offers access to dozens of sessions and special events.

Developers can use Omniverse Code to create their own Omniverse Extension for the inaugural #ExtendOmniverse contest by Friday, Sept. 9, at 5 p.m. PT, for a chance to win an NVIDIA RTX GPU. The winners will be announced in the NVIDIA Omniverse User Group at GTC.

Find additional documentation and tutorials in the Omniverse Resource Center, which details how developers like Ulyanov can build custom USD-based applications and extensions for the platform.

Follow NVIDIA Omniverse on Instagram, Medium, Twitter and YouTube for additional resources and inspiration. Check out the Omniverse forums, and join our Discord server and Twitch channel to chat with the community.


Startup Digs Into Public Filings With GPU-Driven Machine Learning to Serve Up Alternative Financial Data Services

When Rachel Carpenter and Joseph French founded Intrinio a decade ago, the fintech revolution had only just begun. But they saw an opportunity to apply machine learning to vast amounts of financial filings to create an alternative data provider among the giants.

The startup, based in St. Petersburg, Fla., delivers financial data to hedge funds, proprietary trading shops, retail brokers, fintech developers and others. Intrinio runs machine learning on AWS instances of NVIDIA GPUs to parse mountains of publicly available financial data.

Carpenter and French realized early that such data was sold for a premium, and that machine learning offered a way to sort through free financial filings to deliver new products.

The company offers information on equities, options, estimates and ETFs — as well as environmental, social and governance data. Its most popular product is equities-fundamentals data.

Intrinio has taken an unbundling approach to traditional product offerings, creating à la carte data services now used in some 450 fintech applications.

“GPUs have helped us unlock data that is otherwise expensive and sourced manually,” said Carpenter, the company’s CEO. “We built a lot of technology with the idea that we wanted to unlock data for innovators in the financial services space.”

Intrinio is a member of NVIDIA Inception, a free, global program designed to support cutting-edge startups.

Partnering With Fintechs

GPU-driven machine learning lowers Intrinio’s overhead for providing financial data, letting it deliver products at lower prices that appeal to startups.

“We have a much smaller and agile team, because a small team — in conjunction with NVIDIA GPUs, TensorFlow, PyTorch and everything else that we’re using — makes our work a lot more automated,” she said.

Its clients include fintech players like Robinhood, FTX, Domain Money, MarketBeat and Alpaca. Another, Aiera, transcribes earnings calls live with its own automated-speech-recognition models driven by NVIDIA GPUs, and relies on Intrinio for financial data.

“Our use of GPUs made our data packages affordable and easy to use for Aiera, so the company is integrating Intrinio financial data into its platform,” said Carpenter.

Aiera needed financial-data-cleansing services for consistent information on company earnings and more. Harnessing Intrinio’s application programming interface, Aiera can access normalized, split-second company financial data.
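
A call against such a REST API generally looks like the sketch below; the endpoint path and response fields are illustrative placeholders rather than Intrinio’s documented interface:

```python
import requests

API_KEY = "YOUR_API_KEY"               # issued by the data provider
BASE = "https://api-v2.intrinio.com"   # Intrinio's API base URL

# Hypothetical request: standardized fundamentals for a single ticker.
resp = requests.get(
    f"{BASE}/companies/AAPL/fundamentals",  # illustrative endpoint path
    params={"api_key": API_KEY},
    timeout=10,
)
resp.raise_for_status()

for fundamental in resp.json().get("fundamentals", []):
    print(fundamental)
```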

“GPUs are a critical component of Intrinio’s underlying technology — without them, we wouldn’t have been able to apply machine learning techniques to the cleansing and standardization of fundamental and financial statement data,” said Carpenter.

Servicing Equities, Options, ESG 

For equities pricing, Intrinio’s machine learning technology can sort out pricing discrepancies in milliseconds. This results in substantially higher data quality and reliability for users, according to Carpenter. With equity fundamentals, Intrinio automates several key processes, such as entity recognition. Intrinio uses machine learning to identify company names or other key information from unstructured text to ensure the correct categorization of data.

In other cases, Intrinio applies machine learning to reconcile line items from financial statements into standardized buckets so that, for example, you can compare revenue across companies cleanly.
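
Intrinio’s models are proprietary, but the flavor of the task, mapping free-text line items onto standard buckets, can be sketched with a simple character n-gram nearest-neighbor classifier. The labels and training strings here are invented for illustration:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

# Invented examples: line items as reported, mapped to standard buckets.
raw_items = [
    "Net sales", "Total revenues", "Revenue from contracts with customers",
    "Cost of goods sold", "Cost of sales",
    "Selling, general and administrative expenses", "SG&A expense",
]
standard_tags = [
    "revenue", "revenue", "revenue",
    "cost_of_revenue", "cost_of_revenue",
    "sga_expense", "sga_expense",
]

# Character n-grams tolerate the abbreviations and punctuation quirks
# that vary from one filing to the next.
clf = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    KNeighborsClassifier(n_neighbors=1),
)
clf.fit(raw_items, standard_tags)

print(clf.predict(["Total net revenues"]))  # -> ['revenue']
```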

The use of GPUs and machine learning in both of these cases results in higher-quality data than a manual approach. According to the company, using Intrinio decreases the number of errors requiring correction by 88% compared with manual sorting.

For options, Intrinio takes the raw Options Price Reporting Authority (OPRA) feed and applies cutting-edge filtering, algorithms and server architecture to provide its options API.

ESG data is another area of keen interest for investors. As retail investors grow more conscious of the environment and institutions feel pressure to invest responsibly, both want to see how companies stack up.

As regulation around ESG disclosures solidifies, Intrinio says it will be able to use its automated XBRL-standardization technology to unlock these data sets for its users. XBRL is a standardized digital format for exchanging business information.

“On the retail side, app developers need to show this information to their users because people want to see it — making that data accessible is critical to the evolution of the financial industry,” said Carpenter.

Register free for GTC, running online Sept. 19-22, to attend sessions with NVIDIA and dozens of industry leaders. View the financial services agenda for the conference. 

Image credit: Luca Bravo from Unsplash


Boldly Go: Discover New Frontiers in AI-Powered Transportation at GTC

AI and the metaverse are revolutionizing every aspect of the way we live, work and play — including how we move.

Leaders in the automotive and technology industries will come together at NVIDIA GTC to discuss the newest breakthroughs driving intelligent vehicles, whether in the real world or in simulation.

The virtual conference, which runs from Sept. 19-22, will feature a slate of in-depth sessions on end-to-end software-defined vehicle development, as well as advances in robotics, healthcare, high performance computing and more. And it’s all free to attend.

Headlining GTC is NVIDIA founder and CEO Jensen Huang, who will present the latest in AI and NVIDIA Omniverse in the keynote address on Tuesday, Sept. 20, at 8 a.m. PT.

Conference attendees will have plenty of networking opportunities, and they can learn from NVIDIA experts and industry luminaries about AV development, from the cloud to the car.

Here’s a brief look at what to expect during GTC:

Meet the Trailblazers

Every stage of the automotive pipeline is being transformed by AI and metaverse technologies, from manufacturing and design, to autonomous driving, to the passenger experience.

Speakers from each of these areas will share how they’re harnessing AI innovations to accelerate software-defined transportation.

Automotive sessions include:

  • Michael Bell, senior vice president of Digital at Lucid Motors, walks through the development of the Lucid DreamDrive Pro advanced driver assistance system, and how the company continuously deploys new features for a cutting-edge driving experience.
  • Yuli Bai, head of AI Platform at NIO, outlines the AI infrastructure that the automaker is using to develop intelligent, software-defined vehicles running on the NVIDIA DRIVE Orin compute platform.
  • Apeksha Kumavat, chief engineer and co-founder at Gatik, explains how its autonomous commercial-delivery vehicles are helping the retail industry adapt to rapidly changing consumer demands.
  • Dennis Nobelius, chief operating officer at Polestar, describes how the performance electric vehicle maker is developing AI-powered features geared toward the human driver, while prioritizing long-term environmental sustainability.

Don’t miss additional sessions from BMW, Mercedes-Benz and Waabi covering manufacturing, AI research and more.

Get the Inside Track on DRIVE Development

Learn about the latest NVIDIA DRIVE technologies directly from the minds behind their creation.

NVIDIA DRIVE Developer Day consists of a series of deep-dive sessions on building safe and robust autonomous vehicles. Led by the NVIDIA engineering team, the talks will highlight the newest DRIVE features and discuss how to apply them to AV development.

Topics include:

  • NVIDIA DRIVE product roadmap
  • Intelligent in-vehicle infotainment
  • Data center development
  • Synthetic data generation for testing and validation

All of this virtual content is available to GTC attendees — register for free today to see the technologies shaping the intelligent future of transportation.


Startup’s Vision AI Software Trains Itself — in One Hour — to Detect Manufacturing Defects in Real Time

Cameras have been deployed in factories for over a decade — so why, Franz Tschimben wondered, hasn’t automated visual inspection yet become the worldwide standard?

This question motivated Tschimben and his colleagues to found Covision Quality, an AI-based visual-inspection software startup that uses NVIDIA technology to transform end-of-line defect detection for the manufacturing industry.

“The simple answer is that these systems are hard to scale,” said Tschimben, the northern Italy-based company’s CEO. “Material defects, like burrs, holes or scratches, have varying geometric shapes and colors that make identifying them cumbersome. That meant quality-control specialists had to program inspection systems by hand to fine-tune their defect parameters.”

Covision’s software allows users to train AI models for visual inspection without needing to code. It quadruples accuracy for defect detection and reduces false-negative rates by up to 90% compared with traditional rule-based methods, according to Tschimben.

The software relies on unsupervised machine learning that’s trained on NVIDIA RTX A5000 GPUs. This technique allows the AI to teach itself, in just one hour and from hundreds of example images, what qualifies as a defect for a specific customer. It removes the extensive labeling of thousands of images that’s typically required for a supervised learning pipeline.
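
Covision hasn’t published its model, but one common unsupervised pattern gives the flavor: train a convolutional autoencoder on defect-free images only, then flag parts whose reconstruction error is high. This is a generic sketch, not Covision’s implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AutoEncoder(nn.Module):
    """Trained only on images of good parts, the network learns to
    reconstruct 'normal' appearance; defects reconstruct poorly."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

device = "cuda" if torch.cuda.is_available() else "cpu"
model = AutoEncoder().to(device)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

def train_step(batch):          # batch: (N, 3, H, W) images of good parts
    loss = F.mse_loss(model(batch), batch)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

def anomaly_score(image):       # higher error = more likely defective
    with torch.no_grad():
        return F.mse_loss(model(image), image).item()
```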

The startup is a member of NVIDIA Metropolis — a partner ecosystem centered on vision AI that includes a suite of GPU-accelerated software development kits, pretrained models and the TAO toolkit to supercharge a range of automation applications. Covision is also part of NVIDIA Inception, a free, global program that nurtures cutting-edge startups.

In June, Covision was chosen from hundreds of emerging companies as the winner of a startup award at Automate, a flagship conference on all things automation.

Reducing Pseudo-Scrap Rates

In manufacturing, the pseudo-scrap rate — or the frequency at which products are falsely identified as defective — is a key indicator of a visual-inspection system’s efficiency.

Covision’s software, which is hardware agnostic, reduces pseudo-scrap rates by up to 90%, according to Tschimben.

As an item passes through a production line, a camera captures an image of it. Covision’s real-time AI model analyzes the image, then sends the results to a simple user interface that displays image frames: green for good pieces and red for defective ones.

For GKN Powder Metallurgy, a global producer of 13 million metal parts each day, the above steps can occur in as little as 200 milliseconds per piece — enabled by Covision software and NVIDIA GPUs deployed at the production line.

Two to six cameras usually inspect one production line at a factory, Tschimben said. And one NVIDIA RTX A5000 GPU on premises can process the images from four production lines in real time.

“NVIDIA GPUs are robust and reliable,” he added. “The TensorRT SDK and CUDA toolkit enable our developers to use the latest resources to build our platform, and the Metropolis program helps us with go-to-market strategy — NVIDIA is a one-stop solution for us.”

Plus, being an Inception member gives Covision access to free credits for NVIDIA Deep Learning Institute courses, which Tschimben said are “very helpful hands-on resources” for the company’s engineers to stay up to date on the latest NVIDIA tech.

Increasing Efficiency, Sustainability in Industrial Production

In addition to identifying defective pieces at production lines, Covision software offers a management panel that displays AI-based data analyses of improvements in a production site’s quality of outputs over time — and more.

“It can show, for example, which site out of a company’s many across the world is producing the best metal pieces with the highest production-line uptime, or which production line within a factory needs attention at a given moment,” Tschimben said.

This feature can help managers make high-level decisions to optimize factory efficiency, globally.

“There’s also a sustainability factor,” Tschimben said. “Companies want to reduce waste. Our software reduces production inefficiencies, increasing sustainability and making the work more streamlined.”

Reducing pseudo-scrap rates using Covision software means that companies can produce materials at higher efficiency and profitability levels, and ultimately waste less.

Covision software is deployed at production sites across the U.S. and Europe for customers including Alupress Group and Aluflexpack, in addition to GKN Powder Metallurgy.

Learn more about NVIDIA Metropolis and apply to join NVIDIA Inception.

Attend NVIDIA GTC, running online Sept. 19-22, to discover how vision AI and other groundbreaking technologies are shaping the world.


Easy A: GeForce NOW Brings Higher Resolution and Frame Rates for Browser Streaming on PC

Class is in session this GFN Thursday as GeForce NOW makes the up-grade with support for higher resolutions and frame rates in Chrome browser on PC. It’s the easiest way to spice up a boring study session.

When the lecture is over, dive into the six games joining the GeForce NOW library this week, where new adventure always awaits.

The Perfect Study Break

All work and no play isn’t the GeForce NOW way. No one should be away from their games, even if they’re going back to school. GeForce NOW streams the best PC games across nearly all devices, including low-powered PCs with a Chrome or Edge browser.

Enabling 1440p 120 FPS for browser streaming is easy: visit “Settings,” then select “Custom” streaming quality to adjust the resolution and frame rate settings.

RTX 3080 members can now level up their browser gameplay at up to 1440p and 120 frames per second. No app install is required — just open a Chrome or Edge browser on PC, go to play.geforcenow.com, select these new resolutions and refresh rates from the GeForce NOW Settings menu, and jump into games in seconds, with less friction or downloads.

It’s never been easier to explore the more than 1,300 titles in the GeForce NOW library. Have some downtime during lab work? Sneak in a round of Apex Legends. Need a break from a boring textbook? Take a trip to Teyvat in Genshin Impact.

Stay connected with friends for multiplayer — like in Path of Exile’s latest expansion, “Lake of Kalandra” — so even if you’re making your next moves at different schools, the squad can stick together and get into the gaming action.

Here’s Your Homework

Save a kingdom fallen to an age of calamity in ‘Thymesia,’ a grueling action-RPG with fast-paced combat.

Pop quiz: What’s the best part of GFN Thursday?

Answer: More games, of course. You all get an A+.

Buckle up for six new releases this week.

Finally, for a little extra credit, we’ve got a question for you. Share your answers on Twitter or in the comments below.


Immunai Co-Founder Luis Voloch on Using Deep Learning to Develop New Drugs

Mapping the immune system could lead to the creation of drugs that help our bodies win the fight against cancer and other diseases. That’s the big idea behind immunotherapy. The problem: the immune system is incredibly complex.

Enter Immunai, a biotech company that’s using cutting-edge genomics and machine learning technology to map the human immune system and develop new immunotherapies against cancer and autoimmune diseases.

On this episode of NVIDIA’s AI Podcast, host Noah Kravitz spoke with Luis Voloch, co-founder and CTO of Immunai, about tackling the challenges of the immune system with a machine learning and data science mindset.

You Might Also Like

Artem Cherkasov and Olexandr Isayev on Democratizing Drug Discovery With NVIDIA GPUs

It may seem intuitive that AI and deep learning can speed up workflows — including novel drug discovery, a typically years-long and several-billion-dollar endeavor. However, there is a dearth of recent research reviewing how accelerated computing can impact the process. Professors Artem Cherkasov and Olexandr Isayev discuss how GPUs can help democratize drug discovery.

Lending a Helping Hand: Jules Anh Tuan Nguyen on Building a Neuroprosthetic

Is it possible to manipulate things with your mind? Possibly. University of Minnesota postdoctoral researcher Jules Anh Tuan Nguyen discusses allowing amputees to control their prosthetic limbs with their thoughts, using neural decoders and deep learning.

Wild Things: 3D Reconstructions of Endangered Species with NVIDIA’s Sifei Liu

Studying endangered species can be difficult, as they’re elusive, and the act of observing them can disrupt their lives. Sifei Liu, a senior research scientist at NVIDIA, discusses how scientists can avoid these pitfalls by studying AI-generated 3D representations of these endangered species.

Subscribe to the AI Podcast: Now Available on Amazon Music

You can now listen to the AI Podcast through Amazon Music.

Also get the AI Podcast through iTunes, Google Podcasts, Google Play, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn.

Make the AI Podcast better: Have a few minutes to spare? Fill out our listener survey.

 


Smart Devices, Smart Manufacturing: Pegatron Taps AI, Digital Twins

In the fast-paced field of making the world’s tech devices, Pegatron Corp. initially harnessed AI to gain an edge. Now, it’s on the cusp of creating digital twins to further streamline its operations.

Whether or not they’re familiar with the name, most people have probably used smartphones, tablets, Wi-Fi routers or other products that Taiwan-based Pegatron makes in nearly a dozen factories across seven countries. Last year, it made more than 10 million notebook computers.

Andrew Hsiao, associate vice president of Pegatron’s software R&D division, is leading the company’s move into machine learning and the 3D internet known as the metaverse.

Building an AI Platform

“We’ve been collecting factory data since 2012 to find patterns and insights that enhance operations,” said Hsiao, a veteran tech manager who’s been with the company for 14 years, since it spun out of ASUS, one of the world’s largest PC makers.

In 2016, Pegatron’s COO, Denese Yao, launched a task force to apply new technology to improve operations. Hsiao’s team of AI experts collaborated with factory workers to find use cases for AI. One of their first pilot projects used deep learning to detect anomalies in products as they came down the line.

The team got solid results using modified versions of neural network models like ResNet, so it stepped on the gas.

Today, Pegatron uses Cambrian, an AI platform it built for automated inspection, deployed in most of its factories. It maintains hundreds of AI models, trained and running in production on NVIDIA GPUs.

Fewer Defects, More Consistency

The new platform catches up to 60% more defects with 30% fewer variations than human inspectors, and factory employees appreciate it.

“Manual inspection is a boring, repetitive job, so it’s not surprising employees don’t like it,” he said. “Now, we’re seeing employees motivated to learn about the new technology, so it’s empowering people to do more value-added work.”

The system may also improve throughput as factories adjust workflows on assembly and packing stations to account for faster inspection lines.

Models Deployed 50x Faster

Pegatron’s system uses NVIDIA A100 Tensor Core GPUs to deploy AI models up to 50x faster than when it trained them on workstations, cutting weeks of work down to a few hours.

“With our unified platform based on DGX, we have our data lake, datasets and training all in one place, so we can deploy a model in one click,” Hsiao said.

Using the Multi-Instance GPU capability in A100 GPUs, Pegatron cut developers’ wait time for access to an accelerator from nearly an hour to 30 seconds. “That lets us dynamically schedule jobs like AI inference and lightweight model training,” he said.

As part of its AI inference work, the system analyzes more than 10 million images a day using NVIDIA A40 and other GPUs.

Triton, NGC Simplify AI Jobs

Pegatron uses NVIDIA Triton Inference Server, open-source software that helps deploy, run and scale AI models across all types of processors and frameworks. It works hand in hand with NVIDIA TensorRT, software that optimizes trained neural networks to reduce latency.

“Triton and TensorRT make it easy to serve multiple clients and convert jobs to the most cost-effective precision levels,” he said.
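
On the client side, querying a Triton server takes only a few lines with the tritonclient package. The model and tensor names below are placeholders for whatever a deployed model’s configuration actually exposes:

```python
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

# Placeholder tensor name and shape; check the model's config for real ones.
batch = np.random.rand(1, 3, 224, 224).astype(np.float32)
infer_input = httpclient.InferInput("input__0", list(batch.shape), "FP32")
infer_input.set_data_from_numpy(batch)

response = client.infer(model_name="inspection_model", inputs=[infer_input])
scores = response.as_numpy("output__0")  # placeholder output tensor name
print(scores.shape)
```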

Hsiao’s team optimizes pretrained AI models that it downloads in Kubernetes-ready containers from NGC, NVIDIA’s hub for GPU-optimized software.

“NGC is very helpful because we get with one click the deep learning frameworks and all the other software components we need, stuff that used to take us a lot of time to pull together,” he said.

Next Step: Digital Twins

Taking another step toward smarter manufacturing, Pegatron is piloting NVIDIA Omniverse, a platform for developing digital twins.

It has two use cases so far. First, it’s testing Omniverse Replicator to generate synthetic data of what products coming down the inspection line might look like under different lighting conditions or orientations. This information will make its perception models smarter.

Second, it’s creating digital twins of inspection machines. That lets workers manage the machines remotely, gain better insight into predictive maintenance and simulate software updates before deploying them to a physical machine.

“Today, when a system goes down, we can only check logs that might be incomplete, but with Omniverse, we can replay what happened to understand how to fix it, or, run simulations to predict how it will behave in the future,” he said.

A Pegatron engineer monitors an inspection machine remotely with Omniverse.

What’s more, industrial engineers who care about throughput, automation engineers responsible for downtime, and equipment engineers who handle maintenance can work together on the same virtual system at the same time, even when logging in from different countries.

Vision of a Virtual Factory

If all goes well, Pegatron could have Omniverse available on its inspection machines before the end of the year.

Meanwhile, Hsiao is looking for partners who can help build virtual versions of a whole production line in Omniverse. Longer term, his vision is to create a digital twin of an entire factory.

“In my opinion, the greatest impact will come from building a full virtual factory so we can try out things like new ways to route products through the plant,” he said. “When you just build it out without a simulation first, your mistakes are very costly.”


AI Shows the Way: Seoul Robotics Helps Cars Move, Park on Their Own

Imagine driving a car — one without self-driving capabilities — to a mall, airport or parking garage, and using an app to have the car drive off to park itself.

Software company Seoul Robotics is using NVIDIA technology to make this possible — turning non-autonomous cars into self-driving vehicles.

Headquartered in Korea, the company’s initial focus is on improving first- and last-mile logistics such as parking. Its Level 5 Control Tower is a mesh network of sensors and computers placed on infrastructure around a facility, like buildings or light poles — rather than on individual cars — to capture an unobstructed view of the environment.

The system enables cars to move autonomously by directing their vehicle-to-everything, or so-called V2X, communication systems. These systems pass information from a vehicle to infrastructure, other vehicles, any surrounding entities — and vice versa. V2X technology, which comes standard in many modern cars, is used to improve road safety, traffic efficiency and energy savings.

Seoul Robotics’ platform, dubbed LV5 CTRL TWR, collects 3D data from the environment using cameras and lidar. Computer vision and deep learning-based AI analyze the data, determining the most efficient and safest paths for vehicles within the covered area.

Then, through V2X, the platform can manage a car’s existing features, such as adaptive-cruise-control, lane-keeping and brake-assist functions, to safely get it from place to place.
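
Seoul Robotics hasn’t published its planner, but the core idea of searching an occupancy grid built from the fused sensor view for an obstacle-free route can be illustrated with a few lines of breadth-first search. The grid and coordinates are invented:

```python
from collections import deque

def shortest_path(grid, start, goal):
    """BFS over an occupancy grid (0 = free, 1 = blocked); returns cell path."""
    rows, cols = len(grid), len(grid[0])
    queue, came_from = deque([start]), {start: None}
    while queue:
        cell = queue.popleft()
        if cell == goal:                      # walk parents back to the start
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] == 0 and nxt not in came_from):
                came_from[nxt] = cell
                queue.append(nxt)
    return None                               # no obstacle-free route

# Invented 4x4 lot: a parked row blocks most of the second row.
grid = [[0, 0, 0, 0],
        [1, 1, 1, 0],
        [0, 0, 0, 0],
        [0, 1, 1, 1]]
print(shortest_path(grid, (0, 0), (2, 0)))
```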

LV5 CTRL TWR is built using NVIDIA CUDA libraries for creating GPU-accelerated applications, as well as the Jetson AGX Orin module for high-performance AI at the edge. NVIDIA GPUs are used in the cloud for global fleet path planning.

Seoul Robotics is a member of NVIDIA Metropolis — a partner program centered on an application framework and set of developer tools that supercharge vision AI applications — and NVIDIA Inception, a free, global program that nurtures cutting-edge startups.

Autonomy Through Infrastructure

Seoul Robotics is pioneering a new path to level 5 autonomy, or full driving automation, with what’s known as “autonomy through infrastructure.”

“Instead of outfitting the vehicles themselves with sensors, we’re outfitting the surrounding infrastructure with sensors,” said Jerone Floor, vice president of product and solutions at Seoul Robotics.

Using V2X capabilities, LV5 CTRL TWR sends commands from infrastructure to cars, making vehicles turn right or left, move from point A to B, brake and more. It can position a car to within plus or minus four centimeters.

“No matter how smart a vehicle is, if another car is coming from around a corner, for example, it won’t be able to see it,” Floor said. “LV5 CTRL TWR provides vehicles with the last bits of information gathered from having a holistic view of the environment, so they’re never ‘blind.’”

These communication protocols already exist in most vehicles, he added. LV5 CTRL TWR acts as the AI-powered brain of the instructive mechanisms, requiring nothing more than a firmware update in cars.

“From the beginning, we knew we needed deep learning in the system in order to achieve the really high performance required to reach safety goals — and for that, we needed GPU acceleration,” Floor said. “So, we designed the system from the ground up based on NVIDIA GPUs and CUDA.”

NVIDIA CUDA libraries help the Seoul Robotics team render massive amounts of data from the 3D sensors in real time, as well as accelerate training and inference for its deep learning models.

As a Metropolis member, Seoul Robotics received early access to software development kits and the NVIDIA Jetson AGX Orin for edge AI.

“The compute capabilities of Jetson AGX Orin allow us to have the LV5 CTRL TWR cover more area with a single module,” Floor added. “Plus, it handles a wide temperature range, enabling our system to work in both indoor and outdoor units, rain or shine.”

Deployment Across the Globe

LV5 CTRL TWR is in early commercial deployment at a BMW manufacturing facility in Munich.

According to Floor, cars must often change locations once they’re manufactured, from electrical repair stations to parking lots for test driving and more.

Equipped with LV5 CTRL TWR, the BMW facility has automated such movement of cars — resulting in time and cost savings. Automating car transfers also enhances safety for employees and frees them up to focus on other tasks, like headlight alignment and more, Floor said.

And from the moment a vehicle is fully manufactured until it’s delivered to the customer, it moves through up to seven parking lots. Moving cars manually costs manufacturers anywhere from $30 to $60 per car, per lot — meaning LV5 CTRL TWR can address a $30 billion market.

The technology behind LV5 CTRL TWR can be used across industries, Floor highlighted. Beyond automotive factories, Seoul Robotics envisions its platform being deployed across the globe — at retail stores, airports, traffic intersections and more.

NVIDIA Jetson AGX Orin 32GB production modules are now available.

Learn more about NVIDIA Metropolis and apply to join NVIDIA Inception.

Feature image courtesy of BMW Group.
