NVIDIA Celebrates 1 Million Jetson Developers Worldwide at GTC

A million developers across the globe are now using the NVIDIA Jetson platform for edge AI and robotics to build innovative technologies. Plus, more than 6,000 companies — a third of which are startups — have integrated the platform with their products.

These milestones and more will be celebrated during the NVIDIA Jetson Edge AI Developer Days at GTC, a global conference for the era of AI and the metaverse, taking place online March 20-23.

Register free to learn more about the Jetson platform and begin developing the next generation of edge AI and robotics.

One in a Million

Atlanta-based Kris Kersey, the mind behind the popular YouTube channel Kersey Fabrications, is one developer using the NVIDIA Jetson platform for his one-in-a-million technological innovations.

He created a fully functional Iron Man helmet that could be straight out of the Marvel Comics films. It uses the NVIDIA Jetson Xavier NX 8GB developer kit as the core of the “Arc Reactor” powering its heads-up display — a transparent display that presents information wherever the user’s looking.

In just over two years, Kersey built from scratch the wearable helmet, complete with object detection and other on-screen sensors that would make Tony Stark proud.

“The software design was more than half the work on the project, and for me, this is the most exciting, interesting part,” Kersey said. “The software takes all of the discrete hardware components and makes them into a remarkable system.”

To get started, Kersey turned to GitHub, where he found “Hello AI World,” a guide for deploying deep-learning inference networks and deep vision primitives with the NVIDIA TensorRT software development kit and NVIDIA Jetson. He then wrote wrapper code to connect it to his own project.
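As a rough illustration of that wrapper idea (not Kersey's actual code), the sketch below reduces object-detection results to label-and-confidence lines suitable for a heads-up display overlay. The `Detection` class stands in for the detection objects Hello AI World's `detectNet` returns; all names here are hypothetical.

```python
# Minimal sketch of wrapper code that turns object-detection results into
# heads-up-display text. Detection is an illustrative stand-in for the
# objects returned by a detection network such as Hello AI World's detectNet.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # class name, e.g. "person"
    confidence: float  # score from the network, 0.0-1.0

def hud_lines(detections, min_confidence=0.5):
    """Format detections above a confidence threshold for an on-screen overlay."""
    kept = [d for d in detections if d.confidence >= min_confidence]
    # Show the most confident detections first, as a HUD would.
    kept.sort(key=lambda d: d.confidence, reverse=True)
    return [f"{d.label}: {d.confidence:.0%}" for d in kept]

if __name__ == "__main__":
    frame = [Detection("person", 0.91), Detection("car", 0.42), Detection("dog", 0.77)]
    print(hud_lines(frame))  # ['person: 91%', 'dog: 77%']
```

On a real Jetson, the list of `Detection` objects would come from a per-frame inference call, and the returned strings would be rendered onto the display.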

Watch Kersey document his Iron Man project from start to finish:

This 3D-printed helmet is just the beginning for Kersey, who’s aiming to build a full Iron Man suit later this year. He plans to make the entire project’s code open source, so anyone who dreams of becoming a superhero can try it for themselves.

Jetson Edge AI Developer Days at GTC

Developers like Kersey can register for the free Jetson Edge AI Developer Days at GTC, which feature NVIDIA experts who’ll cover the latest Jetson hardware, software and partners. Sessions include:

  • Level Up Edge AI and Robotics With NVIDIA Jetson Orin Platform
  • Accelerate Edge AI With NVIDIA Jetson Software
  • Getting the Most Out of Your Jetson Orin Using NVIDIA Nsight Developer Tools
  • Bring Your Products to Market Faster With the NVIDIA Jetson Ecosystem
  • Design a Complex Architecture on NVIDIA Isaac ROS

Plus, there’ll be a Connect with Experts session focusing on the Jetson platform that provides a deep-dive Q&A with embedded platform engineers from NVIDIA on Tuesday, March 21, at 12 p.m. PT. This interactive session offers a unique opportunity to meet, in a group or individually, with the minds behind NVIDIA products and get your questions answered. Space is limited and offered on a first-come, first-served basis.

Additional Sessions by Category

GTC sessions will also cover robotics, intelligent video analytics and smart spaces. Below are some of the top sessions in these categories.

Robotics:

Computer Vision and AI Video Analytics:

Smart Cities and Spaces:

Check out the latest Jetson community projects for ideas to replicate or be inspired by.

Grab the latest Jetson modules and developer kits from the NVIDIA Jetson store.

And sign up for the NVIDIA Developer Program to connect with Jetson developers from around the world and get access to the latest software and software development kits, including NVIDIA JetPack.

Mercedes-Benz Taking Vehicle Product Lifecycle Digital With NVIDIA AI and Omniverse

To drive the automotive industry forward, NVIDIA and Mercedes-Benz are taking the virtual road.

NVIDIA founder and CEO Jensen Huang joined Mercedes-Benz CEO Ola Källenius on stage at the automaker’s strategy update event yesterday in Silicon Valley, showcasing progress in their landmark partnership to digitalize the entire product lifecycle, plus the ownership and automated driving experience.

The automotive industry is undergoing a massive transformation, which is driven by advancements in accelerated computing, AI and the industrial metaverse.

“Digitalization is streamlining every aspect of the automotive lifecycle: from styling and design, software development and engineering, manufacturing, simulation and safety testing, to customer buying and driving experiences,” said Huang.

Since its founding, Mercedes-Benz has set the bar in automotive innovation and ingenuity, backed by superior craftsmanship. The automaker is shaping the future with its intelligent and software-defined vehicles, which are powered by NVIDIA’s end-to-end solutions.

The Fleet of the Future

Next-generation Mercedes-Benz vehicles will be built on a revolutionary centralized computing architecture that includes sophisticated software and features that will turn these future vehicles into high-performance, perpetually upgradable supercomputers on wheels.

During the event, the automaker took the wraps off its new operating system, MB.OS, a purpose-built, chip-to-cloud architecture that will be standard across its entire vehicle portfolio — delivering exceptional software capabilities and ease of use.

MB.OS benefits from full access to all vehicle domains, including infotainment, automated driving, body and comfort, driving and charging — an approach that offers Mercedes-Benz customers a differentiated, superior product experience.

“MB.OS is a platform that connects all parts of our business,” Källenius noted during the event.

Safe Has Never Felt So Powerful

At the heart of this architecture is NVIDIA DRIVE Orin, which delivers high-performance, energy-efficient AI compute to support a comprehensive sensor suite and software to safely enable enhanced assisted driving and, ultimately, level 3 conditionally automated driving.

Running on DRIVE Orin is the flexible and scalable software stack jointly developed by NVIDIA and Mercedes-Benz. Sarah Tariq, NVIDIA vice president of autonomous driving software, joined Magnus Östberg, chief software officer at Mercedes-Benz, on stage to delve deeper into this full-stack software architecture, which includes the MB.OS, middleware and deep neural networks to enable advanced autonomy.

Tariq said, “The companies are working in close collaboration to develop a software stack that can comfortably and safely handle all the complexities that the automaker’s customers may encounter during day-to-day commutes all over the world.”

This includes enhanced level 2 features in urban environments where there are pedestrians or dense, complex traffic patterns. Using advanced AI, Mercedes-Benz can deliver a comfortable driving experience that consumers have come to expect, backed by uncompromised safety and security.

With the ability to perform 254 trillion operations per second, DRIVE Orin has ample compute headroom to continuously advance this software with new capabilities and subscription services over the life of the vehicle, through over-the-air software updates delivered via an app, the web or from inside the car.
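To put that headroom in perspective, a quick back-of-envelope calculation divides DRIVE Orin's 254 TOPS across frames. The 30 fps frame rate and the per-frame workload figure below are illustrative assumptions, not NVIDIA numbers:

```python
# Back-of-envelope compute budget for a 254-TOPS processor.
# The frame rate and workload figures are illustrative assumptions.
ORIN_TOPS = 254
OPS_PER_SECOND = ORIN_TOPS * 1e12

fps = 30  # assumed camera frame rate
ops_budget_per_frame = OPS_PER_SECOND / fps  # ~8.5 trillion ops per frame

# If the current software stack used, say, 3 trillion ops per frame
# (hypothetical), the remainder is headroom that over-the-air updates
# and new features can grow into over the vehicle's life.
workload_per_frame = 3e12
headroom = ops_budget_per_frame - workload_per_frame

print(f"{ops_budget_per_frame:.2e} ops/frame budget, {headroom:.2e} ops/frame headroom")
```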

Additionally, Mercedes-Benz is accelerating the development of these systems with the high-fidelity NVIDIA DRIVE Sim platform, built on NVIDIA Omniverse. This cloud-native platform delivers physically based, scalable simulation for automakers to develop and test autonomous vehicle systems on a wide range of rare and hazardous scenarios.

Manufacturing in the Industrial Metaverse

This software-defined platform is just one piece of Mercedes-Benz’s intelligent vehicle strategy.

At CES last month, Mercedes-Benz previewed its first step in digitalization of its production process using NVIDIA Omniverse — a platform for building and operating metaverse applications — to plan and operate its manufacturing and assembly facilities.

With Omniverse, Mercedes-Benz can create an AI-enabled digital twin of the factory to review and optimize floor layouts, unlocking operational efficiencies. With enhanced predictive analysis, software and process automation, the digital twin can maximize productivity and help maintain faultless operation.

By implementing a digital-first approach to its operations, Mercedes-Benz can also ensure production activities won’t be disrupted as new models and architectures are introduced. And this blueprint can be deployed to other areas within the automaker’s global production network for scalable, more agile vehicle manufacturing.

Revolutionizing the Customer Experience

Digitalization is also improving the car-buying experience, migrating from physical retail showrooms to immersive online digital spaces.

With Omniverse, automakers can bridge the gap between the digital and physical worlds, making the online car-research experience more realistic and interactive. These tools include online car configurators, 3D visualizations of vehicles, demonstration of cars in augmented reality and virtual test drives.

Östberg summed up, “The partnership with NVIDIA is already living up to its promise, and the potential is huge.”

A New Window in the Cloud: NVIDIA and Microsoft to Bring Top PC Games to GeForce NOW

The cloud just got bigger. NVIDIA and Microsoft announced this week they’re working to bring top PC Xbox Game Studios games to the GeForce NOW library, including titles from Bethesda, Mojang Studios and Activision, pending closure of Microsoft’s acquisition.

With six new games joining the cloud this week for members to stream, it’s a jam-packed GFN Thursday.

Plus, Ultimate members can now access cloud-based RTX 4080-class servers in and around Paris, the latest city to light up on the update map. Keep checking GFN Thursday to see which RTX 4080 SuperPOD upgrade is completed next.

Game On

GeForce NOW’s beyond-fast gaming expands to Xbox PC games.

NVIDIA and Microsoft’s 10-year deal to bring the Xbox PC game library to GeForce NOW is a major boost for cloud gaming and brings incredible choice to gamers. It’s the perfect bow to wrap up GeForce NOW’s anniversary month, expanding the over 1,500 titles available to stream.

Work to bring top Xbox PC game franchises and titles to GeForce NOW, such as Halo, Minecraft and Elder Scrolls, will begin immediately. Games from Activision like Call of Duty and Overwatch are on the horizon once Microsoft’s acquisition of Activision closes. GeForce NOW members will be able to stream these titles across their devices, with the flexibility to easily switch between underpowered PCs, Macs, Chromebooks, smartphones and more.

Xbox Game Studios PC games available on third-party stores, like Steam or Epic Games Store, will be among the first streamed through GeForce NOW. The partnership also marks the first games that will be available on the Windows Store, support for which will begin soon.

It’s an exciting time for all gamers, as the partnership will give people more choice and higher performance. Stay tuned to GFN Thursdays for news on the latest Microsoft titles coming to GeForce NOW.

Ready, Set, Action!

Sons of the Forest on GeForce NOW
Find a way to survive alone or with a buddy.

A new week means new GFN Thursday games. Sons of the Forest, the highly anticipated sequel to The Forest from Endnight Games, places gamers on a cannibal-infested island after a crash landing. Survive alone or pair up with a buddy online.

Earlier in the week, members started streaming Atomic Heart, the action role-playing game from Mundfish, day-and-date from the cloud. Check out the full list of new titles available to stream this week:

With the wrap-up of GeForce NOW’s #3YearsOfGFN celebrations, members are sharing their winning GeForce NOW moments on Twitter and Facebook for a chance to win an MSI Ultrawide Gaming monitor — the perfect companion to an Ultimate membership. Join the conversation and add your own favorite moments.

Let us know in the comments or on GeForce NOW social channels what you’ll be streaming next.

New NVIDIA Studio Laptops Powered by GeForce RTX 4070, 4060, 4050 Laptop GPUs Boost On-the-Go Content Creation

Editor’s note: This post is part of our weekly In the NVIDIA Studio series, which celebrates featured artists, offers creative tips and tricks, and demonstrates how NVIDIA Studio technology improves creative workflows. We’re also deep diving on new GeForce RTX 40 Series GPU features, technologies and resources, and how they dramatically accelerate content creation.

Laptops equipped with NVIDIA GeForce RTX 4070, 4060 and 4050 GPUs are now available. The new lineup — including NVIDIA Studio-validated laptops from ASUS, GIGABYTE and Samsung — gives creators more options to create from anywhere with lighter, thinner devices that dramatically exceed the performance of the last generation.

These new GeForce RTX Laptop GPUs bring increased efficiency, thanks to the NVIDIA Ada Lovelace GPU architecture and fifth-generation Max-Q technology.

The laptops are fueled by powerful NVIDIA Studio technologies, including hardware acceleration for 3D, video and AI workflows; optimizations for RTX hardware in over 110 popular creative apps; and exclusive NVIDIA Studio apps like Omniverse, Canvas and Broadcast. And when the creating ends to let the gaming begin, DLSS 3 technology doubles frame rates.

Plus, the making of 3D artist Shangyu Wang’s short film, called Most Precious Gift, is highlighted In the NVIDIA Studio this week. The film was staged in NVIDIA Omniverse, a platform for creating and operating metaverse applications.

And don’t forget to sign up for creator and Omniverse sessions, tutorials and more at NVIDIA GTC, a free, global conference for the era of AI and the metaverse running online March 20-23.

A GPU Class of Their Own 

The new Studio laptops, equipped with powerful GeForce RTX 4070, 4060 and 4050 Laptop GPUs and fifth-generation Max-Q technology, revolutionize content creation on the go.

These advancements enable extreme efficiencies that allow creators to get the best of both worlds: small size and high performance. The thinner, lighter, quieter laptops retain extraordinary performance — letting users complete complex creative tasks in a fraction of the time needed before.

GeForce RTX 4070 GPUs unlock advanced video editing and 3D rendering capabilities. Work in 6K RAW high-dynamic range video files with lightning-fast decoding, export in AV1 with the new eighth-generation encoder, and gain a nearly 40% performance boost over the previous generation with GPU-accelerated effects in Blackmagic Design’s DaVinci Resolve. Advanced 3D artists can tackle large projects with ease across essential 3D apps using new third-generation RT Cores.

Laptops with the GeForce RTX 4060 GPU and 8GB of video memory are great for video editing and for artists looking to get started in 3D modeling and animation. In the popular open-source 3D app Blender, render times are a whopping 38% faster than the last generation.

Get started with GPU acceleration for photography, graphic design and video editing workflows using GeForce RTX 4050 GPUs, which provide a massive upgrade from integrated graphics. Access accelerated AI features, including 54% faster performance in Topaz Video for upscaling and deinterlacing footage. And turn home offices into professional-grade studios with NVIDIA’s encoder and the AI-powered NVIDIA Broadcast app for livestreaming.

Freelancers, hobbyists, aspiring artists and others can find a GeForce RTX GPU to fit their needs, now available in the new lineup of NVIDIA Studio laptops.

Potent, Portable, Primed for Creating

Samsung’s Galaxy Book3 Ultra comes with a choice of the GeForce RTX 4070 or 4050 GPU, alongside a vibrant 16-inch, 3K, AMOLED display.

Pick one up at Best Buy or on Samsung.com.

The Samsung Galaxy Book3 Ultra houses the GeForce RTX 4070 or 4050 GPU.

GIGABYTE upgraded its Aero 16 Studio laptop with up to a GeForce RTX 4070 GPU and a 16-inch, thin-bezel, 60Hz, OLED display. The Aero 14 features a GeForce RTX 4050 GPU with a 14-inch, thin-bezel, 90Hz, OLED display.

Purchase the Aero 14 from Amazon, and find both laptops on GIGABYTE.com.

GIGABYTE’s Aero 16 and 14 models with up to a GeForce RTX 4070 GPU are content-creation beasts.

The ASUS ROG FLOW Z13 comes with up to a GeForce RTX 4060 GPU, QHD, 165Hz, 13.4-inch Nebula display, as well as a 170-degree kickstand and detachable full-sized keyboard for portable creating, plus a stylus with NVIDIA Canvas support to turn simple brushstrokes into realistic images powered by AI.

Get one from ASUS.com.

The ASUS ROG FLOW Z13 is equipped with up to a GeForce RTX 4060 GPU.

MSI’s Stealth 17 Studio and Razer’s 16 and 18 models, with up to GeForce RTX 4090 Laptop GPUs, are also available to pick up today.

All Aboard the Creative Ship

Studio laptops power the imaginations of the world’s most creative minds, including this week’s In the NVIDIA Studio artist, Shangyu Wang.

From the moment his movie’s opening credits roll, viewers can expect to be captivated by a spellbinding journey in space and an intricately designed world, complemented by engaging music and voice-overs.

The film, Most Precious Gift, centers on humanity attempting to make peace with another intelligent lifeform holding the key to survival. It’s an extension of Wang’s interests in alien civilizations and their potential conflicts with humankind.

Wang usually jumps directly into 3D modeling, bypassing the concept stage that most artists go through. He sculpts and shapes the models in Autodesk Maya and Autodesk Fusion 360.

Ultra-fine details modeled in Autodesk Maya.

Using the default Autodesk Arnold renderer on his GeForce RTX 3080 Ti-powered Studio laptop, Wang tapped RTX-accelerated ray tracing and AI denoising, which let him tinker with and add details to highly interactive, photorealistic visuals. This was a boon for his efficiency.

Clothing segments combined and applied to the 3D model in Autodesk Maya.

Wang built textures in Adobe Substance 3D Painter and placed extra care on the fine details, noting the app was the “best option for the most realistic, original materials.” RTX-accelerated light and ambient occlusion baking produced fully baked assets in mere seconds.

Realistic textures applied to 3D models in Adobe Substance 3D Painter.

For final renders, Wang said it was a no-brainer to assemble, simulate and stage his 3D scenes in Omniverse Create. “Because of the powerful path-tracing rendering, I can modify scene lights and materials in real time,” he said.

 

And when it came to final exports, Wang could use his preferred renderer within the Omniverse Create viewport, which has support for Pixar HD Storm, Chaos V-Ray, Maxon’s Redshift, OTOY OctaneRender, Blender Cycles and more.

Realistic lighting and shadows, manipulated and tinkered with in Omniverse Create.

Wang wrapped up compositing in NUKE software, where he adjusted colors and added depth-of-field visuals to the lens. The artist finally moved to DaVinci Resolve to add sound effects, music and subtitles.

3D artist Shangyu Wang.

Check out more of Wang’s work on ArtStation.

Follow NVIDIA Studio on Instagram, Twitter and Facebook. Access tutorials on the Studio YouTube channel and get updates directly in your inbox by subscribing to the Studio newsletter. Learn more about Omniverse on Instagram, Medium, Twitter and YouTube for additional resources and inspiration. Check out the Omniverse forums, and join our Discord server and Twitch channel to chat with the community.

Survey Reveals How Telcos Plan to Ring in Change Using AI

The telecommunications industry has for decades helped advance revolutionary change – enabling everything from telephones and television to online streaming and self-driving cars. Yet the industry has long been considered an evolutionary mover in its own business.

A recent survey of more than 400 telecommunications industry professionals from around the world found that same cautious tone in how they plan to define and execute on their AI strategies.

To fill in a more complete picture of how the telecommunications industry is using AI, and where it’s headed, NVIDIA’s first “State of AI in Telecommunications” survey consisted of questions covering a range of AI topics, infrastructure spending, top use cases, biggest challenges and deployment models.

Survey respondents included C-suite leaders, managers, developers and IT architects from mobile telecoms, fixed and cable companies. The survey was conducted over eight weeks between mid-November 2022 and mid-January 2023.

Dial AI for Motivation

The survey results revealed two consistent themes: industry players (73% of respondents) see AI as a tool to grow revenue, improve operations and sustainability, or boost customer retention. Amid skepticism about the money-making potential of 5G, telecoms see efficiencies driven by AI as the most likely path for returns on investment.

Yet, 93% of those responding to questions about undertaking AI projects at their own companies appear to be substantially underinvesting in AI as a percentage of annual capital spending.

Some 50% of respondents reported spending less than $1 million last year on AI projects; a year earlier, 60% of respondents said they spent less than $1 million on AI. Just 3% of respondents spent over $50 million on AI in 2022.

The reasons cited for such cautious spending? Some 44% of respondents reported an inability to adequately quantify return on investment, which illustrates a mismatch between aspirations and the reality in introducing AI-driven solutions.

Technical challenges — whether from lack of enough skilled personnel or poor infrastructure — are also obstructing AI adoption. Of respondents, 34% cited an insufficient number of data scientists as the second-biggest challenge. Given that data scientists are sought after across industries, the response suggests that the telecoms industry needs to push harder to woo them.

With 33% of respondents also citing a lack of budget for AI projects, the results suggest that AI advocates need to work harder with decision-makers to develop a convincing case for AI adoption.

Likewise, for a technology solution that relies on data, concerns about the availability, handling, privacy and security of data were all critical issues to be addressed, especially in light of data privacy and data residency laws around the globe, such as GDPR.

AI Engagement

Some 95% of telecommunications industry respondents said they were engaged with AI. But only 34% of respondents reported using AI for more than six months, while 23% said they’re still learning about the different options for AI. Eighteen percent reported being in a trial or pilot phase of an AI project.

For respondents at the trial or implementation stage, a clear majority acknowledged that there had been a positive impact on both revenue and cost. About 73% of respondents reported that implementation of AI had led to increased revenue in the last year, with 17% noting revenue gains of more than 10% in specific parts of the business.

Likewise, 80% of respondents reported that their implementation of AI led to reduced annual costs in the last year, with 15% noting that this cost reduction is above 10% — again, in specific parts of their business.

AI, AI Everywhere

The telecommunications industry has a deep and multilayered view on where best to allocate resources to AI: cost reduction, revenue increase, customer experience enhancement and creating operational efficiencies were all cited as key priorities.

In terms of deployment, however, AI focused on improving operational efficiency was a clear winner. This is somewhat expected, as the operational complexity of new telecommunications networks like 5G lends itself to new solutions like AI. The industry is responsible for critical national infrastructure in every country, supports over 5 billion customer end points, and is expected to constantly deliver above 99% reliability. Telcos have also discussed AI-enabled solutions for network operations, cell-site planning, truck-routing optimization and machine learning data analytics. To improve the customer experience, some are adopting recommendation engines, virtual assistants and digital avatars.

In the near term, the focus appears to be on building more effective telecom infrastructure and unlocking new revenue-generating opportunities, especially together with partners.

The trick will be moving from early testing to widespread adoption.

Download the “State of AI in Telecommunications: 2023 Trends” report for in-depth results and insights.

Learn more about how telcos are leveraging AI to optimize operations and improve customer experiences.

Transportation Generation: See How AI and the Metaverse Are Shaping the Automotive Industry at GTC

Novel AI technologies are generating images, stories and, now, new ways to imagine the automotive future.

At NVIDIA GTC, a global conference for the era of AI and the metaverse running online March 20-23, industry luminaries working on these breakthroughs will come together and share their visions to transform transportation.

This year’s slate of in-depth sessions includes leaders from automotive, robotics, healthcare and other industries, as well as trailblazing AI researchers.

Headlining GTC is NVIDIA founder and CEO Jensen Huang, who will present the latest in AI and NVIDIA Omniverse, a platform for creating and operating metaverse applications, in a keynote address on Tuesday, March 21, at 8 a.m. PT.

Conference attendees will have plenty of opportunities to network and learn from NVIDIA and industry experts about the technologies powering the next generation of automotive.

Here’s what to expect from auto sessions at GTC:

End-to-End Innovation

The entire automotive industry is being transformed by AI and metaverse technologies, whether they’re used for design and engineering, manufacturing, autonomous driving or the customer experience.

Speakers from these areas will share how they’re using the latest innovations to supercharge development:

  • Sacha Vražić, director of autonomous driving R&D at Rimac Technology, discusses how the supercar maker is using AI to teach any driver how to race like a professional on the track.
  • Toru Saito, deputy chief of Subaru Lab at Subaru Corporation, walks through how the automaker is improving camera perception with AI, using large-dataset training on GPUs and in the cloud.
  • Tom Xie, vice president at ZEEKR, explains how the electric vehicle company is rethinking the electronic architecture in EVs to develop a software-defined lineup that is continuously upgradeable.
  • Liz Metcalfe-Williams, senior data scientist, and Otto Fitzke, machine learning engineer at Jaguar Land Rover, cover key learnings from the premium automaker’s research into natural language processing to improve knowledge and systems, and to accelerate the development of high-quality, validated, cutting-edge products.
  • Marco Pavone, director of autonomous vehicle research; Sanja Fidler, vice president of AI research; and Sarah Tariq, vice president of autonomous vehicle software at NVIDIA, show how generative AI and novel, highly integrated system architectures will radically change how AVs are designed and developed.

Develop Your Drive

In addition to sessions from industry leaders, GTC attendees can access talks on the latest NVIDIA DRIVE technologies led by in-house experts.

NVIDIA DRIVE Developer Days consist of a series of deep-dive sessions on building safe and robust autonomous vehicles. Led by the NVIDIA engineering team, these talks will highlight the newest DRIVE features and how to apply them.

Topics include high-definition mapping, AV simulation, synthetic data generation for testing and validation, enhancing AV safety with in-system testing, and multi-task models for AV perception.

Access these virtual sessions and more by registering free to attend and see the technologies generating the intelligent future of transportation.

UK’s Conservation AI Makes Huge Leap Detecting Threats to Endangered Species Across the Globe

The video above represents one of the first times that a pangolin, one of the world’s most critically endangered species, was detected in real time using artificial intelligence.

A U.K.-based nonprofit called Conservation AI made this possible with the help of NVIDIA technology. Such use of AI can help track even the rarest, most reclusive of species in real time, enabling conservationists to protect them from threats, such as poachers and fires, before it’s too late to intervene.

The organization was founded four years ago by researchers at Liverpool John Moores University — Paul Fergus, Carl Chalmers, Serge Wich and Steven Longmore.

In the past year and a half, Conservation AI has deployed 70+ AI-powered cameras across the world. These help conservationists preserve biodiversity through real-time detection of threats using deep learning models trained with transfer learning.

“It’s very simple — if we don’t protect our biodiversity, there won’t be people on this planet,” said Chalmers, who teaches deep learning and applied AI at Liverpool John Moores University. “And without AI, we’re never going to achieve our targets for protecting endangered species.”

The Conservation AI platform, built using NVIDIA Jetson modules for edge AI and the NVIDIA Triton Inference Server, analyzes footage in just four seconds, identifies species of interest and alerts conservationists and other users to potential threats via email.
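The alerting step can be sketched in a few lines. This is an illustrative approximation, not Conservation AI's actual code: it keeps detections of species of interest above a confidence threshold and builds one human-readable line per sighting for the email notification. The species set, threshold and camera IDs are hypothetical.

```python
# Illustrative sketch of threat alerting: filter inference results down to
# high-confidence sightings of species of interest, then format one alert
# line per sighting. Species list and threshold are hypothetical.
SPECIES_OF_INTEREST = {"pangolin", "black rhino", "white rhino"}
CONFIDENCE_THRESHOLD = 0.8

def build_alerts(detections, camera_id):
    """detections: list of (species, confidence) pairs from the inference server."""
    alerts = []
    for species, confidence in detections:
        if species in SPECIES_OF_INTEREST and confidence >= CONFIDENCE_THRESHOLD:
            alerts.append(
                f"Camera {camera_id}: {species} detected ({confidence:.0%} confidence)"
            )
    return alerts

if __name__ == "__main__":
    frame = [("pangolin", 0.93), ("warthog", 0.88), ("black rhino", 0.65)]
    for line in build_alerts(frame, camera_id="LIM-07"):
        print(line)
```

In a deployment, the resulting strings would be sent as the body of the email alert, so conservationists can act while the animal, or the poacher, is still in range.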

It can also rapidly model trends in biodiversity and habitat health using a huge database of images and other metadata that would otherwise take years to analyze. The platform now enables conservationists to identify these trends and species activities in real time.

Conservation AI works with 150 organizations across the globe, including conservation societies, safaris and game reserves. To date, the platform has processed over 2 million images, about half of which were from the past three months.

Saving Time to Save Species

Threats to biodiversity have long been monitored using camera traps — networks of cameras equipped with infrared sensors that are placed in the wild. But camera traps can produce data that is hard to manage, as there’s often much variability in images of the animals and their environments.

“A typical camera trap study can take three years to analyze, so by the time you get the insights, it’s too late to do anything about the threat to those species,” said Fergus, a professor of machine learning at Liverpool John Moores University. “Conservation AI can analyze the same amount of data and send results to conservation teams so that interventions can happen in real time, all enabled by NVIDIA technology.”

Many endangered species occupy remote areas without access to human communication systems. The team uses NVIDIA Jetson AGX Xavier modules to analyze drone footage from such areas streamed to a smart controller that can count species population or alert conservationists when species of interest are detected.

Energy-efficient edge AI provided by the Jetson modules, which are equipped with Triton Inference Server, has sped up deep learning inference by 4x compared to the organization’s previous methods, according to Chalmers.

“We chose Triton because of the elasticity of the framework and the many types of models it supports,” he added. “Being able to train the models on the NVIDIA accelerated computing stack means we can make huge improvements on the models very, very quickly.”

Conservation AI trains its deep learning models and runs inference with NVIDIA RTX 8000, T4 and A100 Tensor Core GPUs, along with the NVIDIA CUDA Toolkit. Fergus called NVIDIA GPUs “game changers in the world of applied AI and conservation, where there are big-data challenges.”

In addition, the team’s species-detection pipeline is built on the NVIDIA DeepStream software development kit for vision AI applications, which enables real-time video inference in the field.

“Without this technology, helicopters would normally be sent up to observe the animals, which is hugely expensive and bad for the environment as it emits huge amounts of carbon dioxide,” Chalmers said. “Conservation AI technology helps reduce this problem and detects threats to animals before it’s too late to intervene.”

Detecting Pangolins, Rhinos and More

The Conservation AI platform has been deployed by Chester Zoo, a renowned conservation society based in the U.K., to detect poachers in real time, including those hunting pangolins in Uganda.

Since many endangered species, like pangolins, are so elusive, obtaining enough imagery of them to train AI models can be difficult. So, the Conservation AI team is working with NVIDIA to explore the use of synthetic data for model training.

The platform is also deployed at a game reserve in Limpopo, South Africa, where the AI keeps an eye on wildlife in the region, including black and white rhinos.

“Pound for pound, rhino horn is worth more than diamond,” Chalmers said. “We’ve basically created a geofence around these rhinos, so the reserve can intervene as soon as a poacher or another type of threat is detected.”
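A software geofence of this kind ultimately comes down to a point-in-polygon test on each detection’s coordinates. Below is a minimal sketch using the standard ray-casting algorithm; the fence corners and test coordinates are made up for illustration and are not the reserve’s actual boundaries.

```python
# Minimal point-in-polygon geofence check (ray-casting algorithm).
# Fence corners and coordinates are hypothetical, for illustration only.

def inside_geofence(point, fence):
    """Return True if (lon, lat) `point` lies inside the polygon `fence`."""
    x, y = point
    inside = False
    n = len(fence)
    for i in range(n):
        x1, y1 = fence[i]
        x2, y2 = fence[(i + 1) % n]
        # Count crossings of a horizontal ray cast from the point:
        # an odd number of edge crossings means the point is inside.
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# A made-up rectangular fence (lon, lat pairs) around a reserve.
fence = [(29.0, -24.0), (29.5, -24.0), (29.5, -23.5), (29.0, -23.5)]
print(inside_geofence((29.2, -23.8), fence))  # True: inside the fence
print(inside_geofence((30.0, -23.8), fence))  # False: outside the fence
```

A detection falling inside the fence from an unexpected source, such as a person at night, would then trigger the intervention alert described above.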

The organization’s long-term goal, Fergus said, is to create a toolkit that supports conservationists with many types of efforts, including wildlife monitoring through satellite imagery, as well as using deep learning models that analyze audio — like animal cries or the sounds of a forest fire.

“The loss of biodiversity is really a ticking time bomb, and the beauty of NVIDIA AI is that it makes every second count,” Chalmers said. “Without the NVIDIA accelerated computing stack, we just wouldn’t be able to do this — we wouldn’t be able to tackle climate change and reverse biodiversity loss, which is the ultimate dream.”

Read more about how NVIDIA technology helps to boost conservation and prevent poaching.

Featured imagery courtesy of Chester Zoo.


Rise to the Cloud: ‘Monster Hunter Rise’ and ‘Sunbreak’ Expansion Coming Soon to GeForce NOW


Fellow Hunters, get ready! This GFN Thursday welcomes Capcom’s Monster Hunter Rise and the expansion Sunbreak to the cloud, arriving soon for members.

Settle down for the weekend with 10 new games supported in the GeForce NOW library, including The Settlers: New Allies.

Plus, Amsterdam and Ashburn are next to light up on the RTX 4080 server map, giving nearby Ultimate members the power of an RTX 4080 gaming rig in the cloud. Keep checking the weekly GFN Thursday to see where the RTX 4080 SuperPOD upgrade rolls out next.

Palicos, Palamutes and Wyverns, Oh My

The hunt is on! Monster Hunter Rise, the popular action role-playing game from Capcom, is joining GeForce NOW soon. Protect the bustling Kamura Village from ferocious monsters; take on hunting quests with a variety of weapons and new hunting actions with the Wirebug; and work alongside a colorful cast of villagers to defend their home from the Rampage — a catastrophic event that nearly destroyed the village 50 years prior.

Members can expand the hunt with Monster Hunter Rise: Sunbreak, which adds new quests, monsters, locales, gear and more. And regular updates keep Hunters on the job, like February’s Free Title Update 4, which marks the return of the Elder Dragon Velkhana, the lord of the tundra that freezes all in its path.

Monster Hunter Rise Sunbreak on GeForce NOW
Carve out more time for monster hunting by playing in the cloud.

Whether playing solo or with a buddy, GeForce NOW members can take on dangerous new monsters anytime, anywhere. Ultimate members can protect Kamura Village at up to 4K at 120 frames per second — or immerse themselves in the most epic monster battles at ultrawide resolutions and 120 fps. Members won’t need to wait for downloads or worry about storage space, and can take the action with them across nearly all of their devices.

Rise to the challenge by upgrading today and get ready for Monster Hunter Rise to hit GeForce NOW soon.

New Week, New Games

The Settlers New Allies on GeForce NOW
Onward! There’s much to explore in the Forgotten Plains.

Kick off the weekend with 10 new titles, including The Settlers: New Allies. Choose among three unique factions and explore this whole new world powered by state-of-the-art graphics. Your settlement has never looked so lively.

Check out the full list of this week’s additions:

  • Labyrinth of Galleria: The Moon Society (New release on Steam)
  • Wanted: Dead (New release on Steam and Epic)
  • Elderand (New release on Steam, Feb. 16)
  • Wild West Dynasty (New release on Steam, Feb. 16)
  • The Settlers: New Allies (New release on Ubisoft, Feb. 17)
  • Across the Obelisk (Steam)
  • Captain of Industry (Steam)
  • Cartel Tycoon (Steam)
  • SimRail — The Railway Simulator (Steam)
  • Warpips (Epic Games Store)

The monthlong #3YearsOfGFN celebration continues on our Twitter and Facebook channels. Members shared the most beautiful place they’ve visited in-game on GFN.

And make sure to check out the question we have this week for GeForce NOW’s third anniversary celebration!



Redefining Workstations: NVIDIA, Intel Unlock Full Potential of Creativity and Productivity for Professionals


AI-augmented applications, photorealistic rendering, simulation and other technologies are helping professionals achieve business-critical results from multi-app workflows faster than ever.

Running these data-intensive, complex workflows, as well as sharing data and collaborating across geographically dispersed teams, requires workstations with high-end CPUs, GPUs and advanced networking.

To help meet these demands, Intel and NVIDIA are powering new platforms with the latest Intel Xeon W and Intel Xeon Scalable processors, paired with NVIDIA RTX 6000 Ada generation GPUs, as well as NVIDIA ConnectX-6 SmartNICs.

These new workstations bring together the highest levels of AI computing, rendering and simulation horsepower to tackle demanding workloads across data science, manufacturing, broadcast, media and entertainment, healthcare and more.

“Professionals require advanced power and performance to run the most intensive workflows, like using AI, rendering in real time or running multiple applications simultaneously,” said Bob Pette, vice president of professional visualization at NVIDIA. “The new Intel- and NVIDIA-Ada powered workstations deliver unprecedented speed, power and efficiency, enabling professionals everywhere to take on the most complex workflows across all industries.”

“The latest Intel Xeon W processors — featuring a breakthrough new compute architecture — are uniquely designed to help professional users tackle the most challenging current and future workloads,” said Roger Chandler, vice president and general manager of Creator and Workstation Solutions in the Client Computing Group at Intel. “Combining our new Intel Xeon workstation processors with the latest NVIDIA GPUs will unleash the innovation and creativity of professional creators, artists, engineers, designers, data scientists and power users across the world.”

Serving New Workloads 

Metaverse applications and the rise of generative AI require a new level of computing power from the underlying hardware. Creating digital twins in photorealistic simulated environments that obey the laws of physics, and planning entire factories, are just two examples of workflows made possible by NVIDIA Omniverse Enterprise, a platform for creating and operating metaverse applications.

BMW Group, for example, is using NVIDIA Omniverse Enterprise to design an end-to-end digital twin of an entire factory. This involves collaboration with thousands of planners, product engineers and facility managers in a single virtual environment to design, plan, simulate and optimize highly complex manufacturing systems before a factory is actually built or a new product is integrated into the real world.

The need for accelerated computing power is growing exponentially due to the explosion of AI-augmented workflows, from traditional R&D and data science workloads to edge devices on factory floors or in security offices, to generative AI solutions for text conversations and text-to-image applications.

Extended reality (XR) solutions for collaborative work also require significant computing resources. Examples of XR applications include design reviews, product design validation, maintenance and support training, rehearsals, interactive digital twins and location-based entertainment. All of these demand high-resolution, photoreal images to create the most intuitive and compelling immersive experiences, whether available locally or streamed to wireless devices.

Next-Generation Platform Features 

With a breakthrough new compute architecture for faster individual CPU cores and new embedded multi-die interconnect bridge packaging, the Xeon W-3400 and Xeon W-2400 series of processors enable unprecedented scalability for increased workload performance. Available with up to 56 cores in a single socket, the top-end Intel Xeon w9-3495X processor features a redesigned memory controller and larger L3 cache, delivering up to 28% more single-threaded(1) and 120% more multi-threaded(2) performance over previous-generation Xeon W processors.

Based on the NVIDIA Ada Lovelace GPU architecture, the latest NVIDIA RTX 6000 brings incredible power efficiency and performance to the new workstations. It features 142 third-generation RT Cores, 568 fourth-generation Tensor Cores and 18,176 latest-generation CUDA cores, combined with 48GB of high-performance graphics memory, to provide up to 2x the ray-tracing, AI, graphics and compute performance of the previous generation.

NVIDIA ConnectX-6 Dx SmartNICs enable professionals to handle demanding, high-bandwidth 3D rendering and computer-aided design tasks, as well as traditional office work, with line-speed network connectivity across two 25Gbps ports and GPUDirect technology, which increases GPU bandwidth by 10x over standard NICs. The high-speed, low-latency networking and streaming capabilities let teams move and ingest large datasets, and allow remote individuals to collaborate across design and visualization applications.

Availability 

The new generation of workstations powered by the latest Intel Xeon W and Intel Xeon Scalable processors and NVIDIA RTX 6000 Ada generation GPUs will be available for preorder beginning today from BOXX and HP, with more coming soon from other workstation system integrators.

To learn more, tune into the launch event.


(1) Based on SPEC CPU 2017_Int (1-copy) using Intel validation platform comparing Intel Xeon w9-3495X (56c) versus previous generation Intel Xeon W-3275 (28c).
(2) Based on SPEC CPU 2017_Int (n-copy) using Intel validation platform comparing Intel Xeon w9-3495X (56c) versus previous generation Intel Xeon W-3275 (28c).


Blender Alpha Release Comes to Omniverse, Introducing Scene Optimization Tools, Improved AI-Powered Character Animation


Whether creating realistic digital humans that can express emotion or building immersive virtual worlds, 3D artists can reach new heights with NVIDIA Omniverse, a platform for creating and operating metaverse applications.

A new Blender alpha release, now available in the Omniverse Launcher, lets users of the 3D graphics software optimize scenes and streamline workflows with AI-powered character animations.

Save Time, Effort With New Blender Add-Ons

The new scene optimization add-on in the Blender release enables creators to fix bad geometry and generate automatic UVs, or 2D maps of 3D objects. It also reduces the number of polygons that need to be rendered, improving the scene’s overall performance and significantly cutting file size as well as CPU and GPU memory usage.

Plus, with the new Audio2Face add-on, anyone can now accomplish what used to require a technical rigger or animator.

A panel in the add-on makes it easier to use Blender characters in Audio2Face, an AI-enabled tool that automatically generates realistic facial expressions from an audio file.

This new functionality eases the process of bringing generated face shapes back onto rigs, or digital skeletons: shapes exported through the Universal Scene Description (USD) framework can be applied to a character even if it is fully rigged, meaning its whole body has a working digital skeleton. Because the integration doesn’t alter the rigs, Audio2Face shapes and animation can be applied to characters at any point in the artist’s workflow, whether for games, shows and films, or simulations.

Realistic Character Animation Made Easy

Audio2Face puts AI-powered facial animation in the hands of every Blender user who works with Omniverse.

Using the new Blender add-on for Audio2Face, animator and popular YouTuber Marko Matosevic, aka Markom 3D, rigged and animated a Battletoads-inspired character using just an audio file.

Australia-based Matosevic joined Dave Tyner, a technical evangelist at NVIDIA, on a livestream to showcase their live collaboration across time zones, connecting 3D applications in a real-time Omniverse jam session. The two used the new Blender alpha release with Omniverse to make progress on one of Matosevic’s short animations.

The new Blender release was also on display last month at CES in The Artists’ Metaverse, a demo featuring seven artists, across time zones, who used Omniverse Nucleus Cloud, Autodesk, SideFX, Unreal Engine and more to create a short cinematic in real time.

Creators can save time and simplify processes with the add-ons available in Omniverse’s Blender build.

NVIDIA principal artist Zhelong Xu, for example, used Blender and Omniverse to visualize an NVIDIA-themed “Year of the Rabbit” zodiac.

“I got the desired effect very quickly and tested a variety of lighting effects,” said Xu, an award-winning 3D artist who’s previously worked at top game studio Tencent and made key contributions to an animated show on Netflix.

Get Plugged Into the Omniverse 

Learn more about Blender and Omniverse integrations by watching a community livestream on Wednesday, Feb. 15, at 11 a.m. PT via Twitch and YouTube.

And the session catalog for NVIDIA GTC, a global AI conference running online March 20-23, features hundreds of curated talks and workshops for 3D creators and developers. Register free to hear from NVIDIA experts and industry luminaries on the future of technology.

Creators and developers can download NVIDIA Omniverse free. Enterprises can try Omniverse Enterprise free on NVIDIA LaunchPad. Follow NVIDIA Omniverse on Instagram, Medium, Twitter and YouTube for additional resources and inspiration. Check out the Omniverse forums, and join our Discord server and Twitch channel to chat with the community.
