NVIDIA Makes RTX Technology Accessible to More Professionals

With its powerful real-time ray tracing and AI acceleration capabilities, NVIDIA RTX technology has transformed design and visualization workflows for the most complex tasks, like designing airplanes and automobiles, creating visual effects for movies and planning large-scale architecture.

The new NVIDIA RTX A2000 — our most compact, power-efficient GPU for a wide range of standard and small-form-factor workstations — makes it easier to access RTX from anywhere.

The RTX A2000 is designed for everyday workflows, so professionals can develop photorealistic renderings, build physically accurate simulations and use AI-accelerated tools. With it, artists can create beautiful 3D worlds, architects can design and virtually explore the next generation of smart buildings and homes, and engineers can create energy-efficient and autonomous vehicles that will drive us into the future.

The GPU has 6GB of memory with error-correcting code (ECC) to maintain data integrity for uncompromised computing accuracy and reliability, which especially benefits the healthcare and financial services fields.

With remote work part of the new normal, simultaneous collaboration with colleagues on projects across the globe is critical. NVIDIA RTX technology powers Omniverse, our collaboration and simulation platform that enables teams to iterate together on a single 3D design in real time while working across different software applications. The A2000 will serve as a portal into this world for millions of designers.

Customer Adoption 

Among the first to tap into the RTX A2000 are Avid, Cuhaci & Peterson and Gilbane Building Company.

“The A2000 from NVIDIA has made our modeling flow faster and more efficient. No longer are we sitting and wasting valuable time for graphics to render, and panning around complex geometry has become smoother,” said Connor Reddington, mechanical engineer and certified SOLIDWORKS professional at Avid Product Development, a Lubrizol Company.

A custom lattice pillow structure for lightweighting of 3D printed parts. Image courtesy of Avid.

“Introducing RT Cores into the NVIDIA RTX A2000 has resulted in impressive rendering speedups for photorealistic visualization compared to the previous generation GPUs,” said Steven Blevins, director of Digital Practice at Cuhaci & Peterson.

“The small form factor and low power usage of the NVIDIA RTX A2000 is extraordinary and ensures fitment in just about any existing workstation chassis,” said Ken Grothman, virtual design and construction manager at Gilbane Building Company.

A building model in Autodesk Revit with point cloud data. Image courtesy of Gilbane Building Company.

Next-Generation RTX Technology

The NVIDIA RTX A2000 is the most powerful low-profile, dual-slot GPU for professionals. It combines the latest-generation RT Cores, Tensor Cores and CUDA cores with 6GB of ECC graphics memory in a compact form factor to fit a wide range of systems.

The NVIDIA RTX A2000 features the latest technologies in the NVIDIA Ampere architecture:

  • Second-Generation RT Cores: Real-time ray tracing for all professional workflows. Up to 5x the rendering performance from the previous generation with RTX on.
  • Third-Generation Tensor Cores: Enable AI-augmented tools and applications.
  • CUDA Cores: Up to 2x the FP32 throughput of the previous generation for significant increases in graphics and compute workloads.
  • Up to 6GB of GPU Memory: Supports ECC for error-free computing, the first time NVIDIA has offered ECC memory in its 2000-series GPUs.
  • PCIe Gen 4: Double the throughput with more than 40 percent bandwidth improvement from the previous generation for accelerating data paths in and out of the GPU.

Availability 

The NVIDIA RTX A2000 desktop GPU will be available in workstations from manufacturers including ASUS, BOXX Technologies, Dell Technologies, HP and Lenovo as well as NVIDIA’s global distribution partners starting in October.

Learn more about NVIDIA at SIGGRAPH.

A Code for the Code: Simulations Obey Laws of Physics with USD

Life in the metaverse is getting more real. 

Starting today, developers can create and share realistic simulations in a standard way. Apple, NVIDIA and Pixar Animation Studios have defined a common approach for expressing physically accurate models in Universal Scene Description (USD), the common language of virtual 3D worlds. 

Pixar released USD in 2016, introducing it at SIGGRAPH that year. It was originally designed so artists could work together, creating virtual characters and environments in a movie with the tools of their choice. 

Fast forward, and USD is now pervasive in animation and special effects. It's spreading to other professions, too: architects, for example, can use USD-based tools to design and test everything from skyscrapers to sports cars and smart cities. 

Playing on the Big Screen 

To serve the needs of this expanding community, USD needs to stretch in many directions. The good news is Pixar designed USD to be open and flexible. 

So, it’s fitting the SIGGRAPH 2021 keynote provides a stage to describe USD’s latest extension. In technical terms, it’s a new schema for rigid-body physics, the math that describes how solids behave in the real world.  

For example, when you’re simulating a game where marbles roll down ramps, you want them to respond just as you would expect when they hit each other. To do that, developers need physical details like the weight of the marbles and the smoothness of the ramp. That’s what this new extension supplies. 

USD Keeps Getting Better

The initial HTML 1.0 standard, circa 1993, defined how web pages used text and graphics. Fifteen years later, HTML5 extended the definition to include video so any user on any device could watch videos and movies. USD is evolving along a similar path. 

Apple and NVIDIA were both independently working on ways to describe physics in simulations. As members of the SIGGRAPH community, we came together with Pixar to define a single approach as a new addition to USD. 

In the spirit of flexibility, the extension lets developers choose whatever solvers they prefer, since all of them can be driven from the same set of USD data. This presents a unified set of data suitable for everything from offline simulation for film, to games, to augmented reality. 

That’s important because solvers for real-time uses like gaming prioritize speed over accuracy, while architects, for example, want solvers that put accuracy ahead of speed. 

An Advance That Benefits All 

Together the three companies wrote a white paper describing their combined proposal and shared it with the USD community. The reviews are in and it’s a hit. Now the extension is part of the standard USD distribution, freely available for all developers. 

The list of companies that stand to benefit reads like credits for an epic movie. It includes architects, building managers, product designers and manufacturers of all sorts, companies that design games — even cellular providers optimizing layouts of next-generation networks. And, of course, all the vendors that provide the digital tools to do the work. 

“USD is a major force in our industry because it allows for a powerful and consistent representation of complex, 3D scene data across workflows,” said Steve May, Chief Technology Officer at Pixar. 

“Working with NVIDIA and Apple, we have developed a new physics extension that makes USD even more expressive and will have major implications for entertainment and other industries,” he added. 

Making a Metaverse Together 

It’s a big community we aim to serve with NVIDIA Omniverse, a collaboration environment that’s been described as an operating system for creatives or “like Google Docs for 3D graphics.” 

We want to make it easy for any company to create lifelike simulations with the tools of their choice. It's a goal shared by dozens of organizations now evaluating Omniverse Enterprise, and by close to 400 companies and tens of thousands of individual creators who have downloaded the Omniverse open beta since its release in December 2020.  

We envision a world of interconnected virtual worlds — a metaverse — where someday anyone can share their life’s work.  

Making that virtual universe real will take a lot of hard work. USD will need to be extended in many dimensions to accommodate the community’s diverse needs. 

A Virtual Invitation 

To get a taste of what’s possible, watch a panel discussion from GTC (free with registration), where 3D experts from nine companies including Pixar, BMW Group, Bentley Systems, Adobe and Foster + Partners talked about the opportunities and challenges ahead.   

We’re happy we could collaborate with engineers and designers at Apple and Pixar on this latest USD extension. We’re already thinking about a sequel for soft-body physics and so much more.  

Together we can build a metaverse where every tool is available for every job. 

For more details, watch a talk on the USD physics extension from NVIDIA’s Adam Moravanszky and attend a USD birds-of-a-feather session hosted by Pixar. 

On the Air: Creative Technology Elevates Broadcast Workflows for International Sporting Event with NVIDIA Networking

Talk about a signal boost. Creative Technology is tackling 4K and 8K signals, as well as new broadcast workflows, with the latest NVIDIA networking technologies.

The London-based firm is one of the world’s leading suppliers of audio visual equipment for broadcasting and online events. Part of global production company NEP Group, CT helps produce high-quality virtual and live events by providing advanced technologies and equipment, from large-screen displays to content delivery systems.

Before the COVID-19 pandemic hit, CT was looking to enhance the broadcast experience, bringing audiences and content closer together. Already in the process of switching from a baseband serial digital interface (SDI) architecture to more advanced internet protocol (IP)-based technologies, CT was prepared when the pandemic led to increased demand for virtual events.

The company decided to invest in KAIROS, Panasonic’s next-generation IT and IP video processing platform. KAIROS is a software-based, open architecture platform that uses CPU and GPU processing to significantly improve broadcast performance.

CT opted for NVIDIA GPUs to power KAIROS, which uses NVIDIA Rivermax IP streaming acceleration to enable direct data transfers to and from the GPU, leading to enhanced flexibility and increased performance for virtual events.

With plans to use KAIROS for the world’s most recognized sporting event this month, CT is using IP enabled by NVIDIA switches and NVIDIA RTX GPUs. This technology allows CT to easily scale up for larger shows and save time in setting up new productions, while transforming broadcast workflows.

Taking Broadcast Beyond the Standard

With LED screens increasing in resolution, it’s now more common for companies to deal with 4K and 8K signals. CT wanted a powerful solution that could keep up, while also providing better scalability and flexibility to enhance workflows.

When CT first started testing KAIROS, the team discussed using the platform to accommodate a 3G-SDI workflow, which supports the move from 1080/50 interlaced video formats (1080i) to 1080/50 progressive video formats (1080p).

In interlaced scanning, the frame is divided into odd and even lines: only half the frame is shown on screen at a time, with the other half following a fraction of a second later (1/50th of a second for 50Hz formats like 1080/50). The lines alternate so quickly that viewers perceive the entire frame, but they may also see flicker on screen.

In progressive scans, the entire frame is transmitted simultaneously. All the lines in the frame are shown at once to fill the screen, which reduces flicker. Progressive scans are ideal for digital transmissions and have become the standard for high-definition TV displays.
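
As a worked example: a 1080/50 interlaced signal delivers 50 half-resolution fields per second, or 25 complete frames, while a 1080/50 progressive signal delivers 50 complete frames per second, twice the visual information at the same nominal rate.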

But CT also needed to ensure its technology could keep up with any future video workflow advances demanded by clients.

The company has its own servers built on NVIDIA RTX GPUs with ConnectX-6 DX cards, and KAIROS delivers high performance by using the power and flexibility of the GPUs. The CT team no longer has to deal with the painful process of converting 4K and 8K signals to SDI. Instead, it can pass the signals to KAIROS, which can distribute video feeds to projectors or screens regardless of the resolution or format.

“Essentially, what KAIROS did was give us a lot more flexibility,” said Sid Lobb, head of Vision and Integrated Networks at Creative Technology. “There is utter flexibility with what we can use and how we allocate the power that the NVIDIA RTX GPUs provide.”

Switching It Up 

Transitioning from SDI to IP allowed CT to use software for driving all the events. With IP, CT can use a switch instead of cables to connect systems.

“Now, it’s more like connecting computers to each other versus directly connecting cameras to a processor,” said Lobb. “We’re able to use a network to connect the entire production signal path. It’s a whole change to broadcast workflows.”

The latest version of KAIROS enables CT to use the network as a matrix switcher, which allows the team to easily switch from one video or audio source to another. For example, in events that take place in a sports arena, there could be up to 100 PCs capturing and producing different content. During the event, CT could be switching from one PC to another, which would’ve been challenging with traditional architectures. But with IP, CT can easily switch among sources, and also scale up and down to different size shows using the same solution.

The team is also experiencing massive time savings when it comes to getting new productions up and running, as the programming of KAIROS is intuitive and efficient. Each virtual event is different, but KAIROS makes it easy for CT to configure inputs and outputs based on its productions.

The team will use GPU-powered solutions to enhance the experience for future broadcasting and live events.

NVIDIA-Certified Systems Land on the Desktop

Enterprises challenged with running accelerated workloads have an answer: NVIDIA-Certified Systems. Available from nearly 20 global computer makers, these servers have been validated for running a diverse range of accelerated workloads with optimum performance, reliability and scale.

Now NVIDIA-Certified Systems are expanding to the desktop with workstations that undergo the same testing to validate their ability to run GPU-accelerated applications well.

Certification ensures that these systems, available as desktop or laptop models, have a well-balanced design and the correct configurations to maximize performance. GPUs eligible for certification in the workstations include the newest NVIDIA RTX A6000, A5000 and A4000, as well as the RTX 8000 and 6000.

NVIDIA-Certified workstations will join a lineup of over 90 already available systems that range from the highest performance AI servers with the NVIDIA HGX A100 8-GPU, to enterprise-class servers with the NVIDIA A30 Tensor Core GPU for mainstream accelerated data centers, to low-profile, low-power systems designed for the edge with NVIDIA T4 GPUs.

Certified Systems to Accelerate Data Science on CDP

Cloudera Data Platform (CDP) v7.1.6, which went into general availability last week, now takes advantage of NVIDIA-Certified Systems. This latest version adds RAPIDS to accelerate data analytics, ETL and popular data science tools like Apache Spark with NVIDIA GPUs to churn through massive data operations.

Testing has shown that this version of CDP runs up to 10x faster on servers with NVIDIA GPUs vs. non-accelerated servers. To make it easy to get started, NVIDIA and Cloudera recommend two NVIDIA-Certified server configurations that customers can purchase from several vendors:

  • CDP-Ready: For running Apache Spark, a CDP-Ready configuration of NVIDIA-Certified servers with two NVIDIA A30 GPUs per server offers over 5x the performance at less than 50 percent incremental cost relative to modern CPU-only alternatives.
  • AI-Ready: For customers additionally running machine learning or other AI-related applications, the NVIDIA A100 GPU provides even more performance, along with further acceleration for machine learning and AI training.
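
To give a sense of what enabling this looks like in practice, here's a rough PySpark sketch with the RAPIDS Accelerator for Apache Spark turned on. The plugin class and configuration keys come from the open-source RAPIDS Accelerator project; the resource amounts and dataset paths are placeholders, and exact settings depend on the cluster:

```python
from pyspark.sql import SparkSession

# Sketch: a Spark 3 session with the RAPIDS Accelerator enabled.
# The rapids-4-spark plugin jar must already be on the cluster's classpath.
spark = (
    SparkSession.builder
    .appName("gpu-accelerated-etl")
    .config("spark.plugins", "com.nvidia.spark.SQLPlugin")   # RAPIDS plugin entry point
    .config("spark.rapids.sql.enabled", "true")              # run SQL/DataFrame ops on GPU
    .config("spark.executor.resource.gpu.amount", "1")       # one GPU per executor (placeholder)
    .config("spark.task.resource.gpu.amount", "0.25")        # four tasks share a GPU (placeholder)
    .getOrCreate()
)

# A typical ETL step: the same DataFrame code, now scheduled onto GPUs.
df = spark.read.parquet("hdfs:///warehouse/transactions")    # hypothetical dataset
df.groupBy("store_id").sum("amount").write.parquet("hdfs:///warehouse/daily_totals")
```

The appeal of this design is that existing Spark jobs need no rewriting; the plugin decides, operator by operator, what can run on the GPU.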

Data scientists often develop and refine machine learning and deep learning models on workstations to augment data center resources or help minimize cloud-based compute costs. By using an NVIDIA-Certified workstation, they can transition their work to NVIDIA-Certified servers when it’s time for larger scale prototyping and eventually production, without having to port to a different tool or framework.

New White Paper Describes Value of Certification

When it comes to installing GPUs and SmartNICs in a system, choosing the right server or workstation model and correctly configuring the components and firmware are critical to getting the most out of the investment.

With NVIDIA-Certified Systems, NVIDIA and its partners have already done the work of validating that a particular system can run accelerated workloads well, and they've determined the optimal hardware configuration.

Misconfiguration can lead to poor performance, or even an inability to function properly or complete tasks. The certification process ensures that such issues are surfaced and resolved for each tested system. We've described this and more in a new white paper, Accelerate Compute-Intensive Workloads with NVIDIA-Certified Systems.

Our system partners run a suite of more than 25 tests designed by NVIDIA based on our vast experience with compute, graphics and network acceleration. Each test is chosen to exercise the system's hardware in a unique and thorough manner, exposing as many potential configuration issues as possible. Some tests focus on a single aspect of the hardware, while others stress multiple components, both simultaneously and in multi-step workflows.

With NVIDIA-Certified Systems, enterprises can confidently choose performance-optimized hardware to power their accelerated computing workloads — from the desktop to the data center to the edge.

Learn more about NVIDIA-Certified Systems.

Leading Lights: NVIDIA Researchers Showcase Groundbreaking Advancements for Real-Time Graphics

Computer graphics and AI are cornerstones of NVIDIA. Combined, they’re bringing creators closer to the goal of cinema-quality 3D imagery rendered in real time.

At a series of graphics conferences this summer, NVIDIA Research is sharing groundbreaking work in real-time path tracing and content creation, much of it based on cutting-edge AI techniques. These projects are tackling the hardest unsolved problems in graphics with new tools that advance the state of the art in real-time rendering.

One goal is improving the realism of rendered light as it passes through complex materials like fur or fog. Another is helping artists more easily turn their creative visions into lifelike models and scenes.

Presented at this week’s SIGGRAPH 2021 — as well as the recent High-Performance Graphics conference and the Eurographics Symposium on Rendering — these research advancements highlight how NVIDIA RTX GPUs make it possible to further the frontiers of photorealistic real-time graphics.

Rendering photorealistic images in real time requires accurate simulation of light, modeling the same laws that govern light in the physical world. The most effective approach known so far, path tracing, requires massive computational resources but can deliver spectacular imagery.
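
Concretely, the law being modeled is the rendering equation, which path tracing estimates by randomly sampling light paths:

$$
L_o(\mathbf{x}, \omega_o) = L_e(\mathbf{x}, \omega_o) + \int_{\Omega} f_r(\mathbf{x}, \omega_i, \omega_o)\, L_i(\mathbf{x}, \omega_i)\, (\omega_i \cdot \mathbf{n})\, \mathrm{d}\omega_i
$$

The outgoing radiance L_o at a point x is the surface's own emission L_e plus light arriving from every direction ω_i over the hemisphere Ω, weighted by the material's reflectance f_r. Each evaluation of the incoming radiance L_i spawns more rays, which is why brute-force path tracing demands such massive computational resources.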

The NVIDIA RTX platform, with dedicated ray-tracing hardware and high-performance Tensor Cores for efficient evaluation of AI models, is tailor-made for this task. Yet there are still situations where creating high-fidelity rendered images remains challenging.

Consider, for one, a tiger prowling through the woods.

Seeing the Light: Real-Time Path Tracing

To make a scene completely realistic, creators must render complex lighting effects such as reflections, shadows and visible haze.

In a forest scene, dappled sunlight filters through the leaves on the trees and grows hazy among the water molecules suspended in the foggy air. Rendering realistic real-time imagery of clouds, dusty surfaces or mist like this was once out of reach. But NVIDIA researchers have developed techniques that often compute the visual effect of these phenomena 10x more efficiently.

The tiger itself is both illuminated by sunlight and shadowed by trees. As it strides through the woods, its reflection is visible in the pond below. Lighting these kinds of rich visuals with both direct and indirect reflections can require calculating thousands of paths for every pixel in the scene.

It’s a task far too resource-hungry to solve in real time. So our research team created a path-sampling algorithm that prioritizes the light paths and reflections most likely to contribute to the final image, rendering images over 100x more quickly than before.

AI of the Tiger: Neural Radiance Caching

Another group of NVIDIA researchers achieved a breakthrough in global illumination with a new technique named neural radiance caching. This method uses both NVIDIA RT Cores for ray tracing and Tensor Cores for AI acceleration to train a tiny neural network live while rendering a dynamic scene.

The neural network learns how light is distributed throughout the scene. It evaluates over a billion global illumination queries per second when running on an NVIDIA GeForce RTX 3090 GPU, depicting the tiger’s dense fur with rich lighting detail previously unattainable at interactive frame rates.
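
To sketch the shape of that idea, though not NVIDIA's actual implementation: a radiance cache is a small regression network trained online, with each frame contributing a handful of expensively traced samples while the network answers everything else. Here's a toy NumPy version, where a simple analytic function stands in for traced radiance:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for path-traced radiance: a smooth function of a 5D query
# (3D position plus 2D direction). The real quantity comes from tracing rays.
TRUE_W = np.array([0.9, 0.5, 0.7, 0.3, 0.4])
def traced_radiance(q):
    return np.sin(q @ TRUE_W)[:, None]

# A tiny two-layer MLP acts as the cache.
W1 = rng.normal(0.0, 0.5, (5, 64)); b1 = np.zeros(64)
W2 = rng.normal(0.0, 0.5, (64, 1)); b2 = np.zeros(1)

def cache_query(q):
    h = np.maximum(q @ W1 + b1, 0.0)       # ReLU hidden layer
    return h @ W2 + b2, h

lr = 1e-2
for frame in range(2000):                  # the "rendering loop"
    q = rng.uniform(-1.0, 1.0, (128, 5))   # queries arising while rendering
    y = traced_radiance(q)                 # a few ground-truth traced samples
    pred, h = cache_query(q)
    err = pred - y                         # one online SGD step per frame
    gW2 = h.T @ err / len(q); gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (h > 0.0)
    gW1 = q.T @ dh / len(q); gb1 = dh.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

# Trained "live," the cache now answers radiance queries without tracing.
test = rng.uniform(-1.0, 1.0, (4, 5))
print(cache_query(test)[0].ravel())
print(traced_radiance(test).ravel())
```

The production system replaces this toy loop with Tensor Core kernels running inside the renderer, which is how it reaches over a billion queries per second.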

Seamless Creation of Tough Textures

As rendering algorithms have progressed, it's crucial that available 3D content keeps up with the complexity and richness those algorithms are capable of.

NVIDIA researchers are diving into this area by developing a variety of techniques that support content creators in their efforts to model rich and realistic 3D environments. One area of focus is materials with rich geometric complexity, which can be difficult to simulate using traditional techniques.

The weave of a polo shirt, the texture of a carpet or blades of grass often have features much smaller than a pixel, making them difficult to store and render efficiently. NVIDIA researchers are addressing this with NeRF-Tex, an approach that uses neural networks to represent these challenging materials and encode how they respond to lighting.

Seeing the Forest for the Trees

Complex geometric objects also vary in their appearance depending on how close they are to the viewer. A leafy tree is one example: Close up, there’s enormous detail in its branches, leaves and bark. From afar, it may appear to be little more than a green blob.

It would be a waste of time to render detailed bark and leaves on a tree at the far end of the forest in a scene. But when zooming in for a close-up, the model should be as realistic as possible.

This is a classic problem in computer graphics known as level of detail. Artists have often been burdened with this challenge, manually modeling multiple versions of each 3D object to enable efficient rendering.

NVIDIA researchers have developed a new approach that generates simplified models automatically based on an inverse rendering method. With it, creators can generate simplified models that are optimized to appear indistinguishable from the originals, but with drastic reductions in their geometric complexity.

NVIDIA at SIGGRAPH 2021 

More than 200 scientists around the globe make up the NVIDIA Research team, focusing on AI, computer graphics, computer vision, self-driving cars, robotics and more. At SIGGRAPH, which runs from Aug. 9-13, our researchers are presenting the following papers:

Don’t miss NVIDIA’s special address at SIGGRAPH on Aug. 10 at 8 a.m. Pacific, revealing our latest technology, demos and more. Catch our Real Time Live demo on Aug. 10 at 4:30 p.m. Pacific to see how NVIDIA Research creates AI-driven digital avatars.

We’re also discussing esports as a real-time graphics challenge in a panel on Aug. 11. An interactive esports demo is available on demand through the SIGGRAPH Emerging Technologies program.

For more, check out the full lineup of NVIDIA events at SIGGRAPH 2021.

Time to Embark: Autonomous Trucking Startup Develops Universal Platform on NVIDIA DRIVE

Autonomous trucking startup Embark is planning for universal autonomy of commercial semi-trucks, developing one AI platform that fits all.

The company announced today that it will use NVIDIA DRIVE to develop its Embark Universal Interface (EUI), a manufacturer-agnostic platform that includes the compute and multimodal sensors necessary for autonomous trucks. This flexible approach, combined with the high performance of NVIDIA DRIVE, leads to an easily scalable solution for safer, more efficient delivery and logistics.

The EUI is purpose-built to run Embark Driver autonomous driving software for a comprehensive self-driving trucking system.

Most trucking carriers don't use just one model of vehicle in their fleets. They often mix vehicles from different manufacturers to haul a wide range of cargo around the world.

The Embark platform will be capable of integrating into trucks from any of the four major truck manufacturers in the U.S. — PACCAR, Volvo, International and Freightliner. By developing a platform that can be retrofitted to such a wide range of vehicles, Embark is helping the trucking industry realize the benefits of AI-powered driving without having to wait for purpose-built vehicles.

And with NVIDIA DRIVE at its core, the platform leverages the best in high-performance AI compute for robust self-driving capabilities.

Scaling Safety

Autonomous vehicles are always learning, taking in vast amounts of data to navigate the unpredictability of the real world, from highways to crowded ports. This rapid processing requires centralized, high-performance AI compute.

The NVIDIA DRIVE platform is the first scalable AI hardware and software platform to enable the production of automated and self-driving vehicles. It combines deep learning, sensor fusion and surround vision for a safe driving experience.

This end-to-end open platform allows for one development investment across an entire fleet, from level 2+ systems all the way to level 5 fully autonomous vehicles. In addition to high-performance, scalable compute, the EUI will have all the necessary functional safety certification to operate without a driver on public roads.

“We need an enormous amount of compute horsepower in our trucks,” said Ajith Dasari, head of Hardware Platform at Embark. “NVIDIA DRIVE meets this need head-on, and allows us to outfit our partners and customers with the best self-driving hardware and software currently on the market.”

A Growing Ecosystem

Embark is already working with leading trucking companies and plans to continue to extend its software and hardware technology.

In April, the company unveiled partnerships with Werner Enterprises, Mesilla Valley Transportation and Bison Transport. It's also working with shippers including Anheuser-Busch InBev and HP, Inc.

Embark plans to list on the public market, having announced a SPAC, or special purpose acquisition company, agreement in June, along with a partnership with Knight-Swift Transportation. The autonomous trucking company will join the ranks of NVIDIA DRIVE ecosystem members that have collectively raised more than $8 billion via public listings.

And just like the trucks running on its Embark Universal Interface, the company is tapping the power of NVIDIA DRIVE to keep traveling further and more intelligently.

Cattle-ist for the Future: Plainsight Revolutionizes Livestock Management with AI

Computer vision and edge AI are looking beyond the pasture.

Plainsight, a San Francisco-based startup and NVIDIA Metropolis partner, is helping the meat processing industry improve its operations — from farms to forks. By pairing Plainsight’s vision AI platform and NVIDIA GPUs to develop video analytics applications, the company’s system performs precision livestock counting and health monitoring.

Because animals such as cows look so similar, and are frequently shoulder to shoulder and moving quickly, inaccuracies in livestock counts are common in the cattle industry, and often costly.

On average, the cost of a cow in the U.S. is between $980 and $1,200, and facilities process anywhere between 1,000 to 5,000 cows per day. At this scale, even a small percentage of inaccurate counts equates to hundreds of millions of dollars in financial losses, nationally.
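
A quick worked example: at a 1 percent miscount rate, a plant processing 5,000 cows a day misattributes roughly 50 animals, or about $49,000 to $60,000, every single day, before multiplying across hundreds of facilities and operating days.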

“By applying computer vision powered by edge AI and NVIDIA Metropolis, we’re able to automate what has traditionally been a very manual process and remove the uncertainty that comes with human counting,” said Carlos Anchia, CEO of Plainsight. “Organizations begin to optimize existing revenue streams when accuracy can be operationally efficient.”

Plainsight is working with JBS USA, one of the world’s largest food companies, to integrate vision AI into its operational processes. Vision AI-powered cattle counting was among the first solutions to be implemented.

At JBS, cows are counted by fixed-position cameras, connected via a secured private network to Plainsight’s vision AI edge application, which detects, tracks and counts the cows as they move past.

Plainsight’s models count livestock with over 99.5 percent accuracy — about two percentage points better than manual livestock counting by humans in the same conditions, according to the company.

For a vision AI solution to be widely adopted by an organization, its accuracy must be higher than that of humans performing the same activity. By monitoring and tracking each individual animal, the models simplify an otherwise complex process.

Highly robust and accurate computer vision models are only a portion of the cattle counting solution. Through continued collaboration with JBS's operations and innovation teams, Plainsight provided a path to the digital transformation required to account more accurately for livestock received at scale, and thus to ensure that payment for that livestock is as accurate as possible.

Higher Accuracy with GPMoos

For JBS, the initial proof of value involved building models and deploying on an NVIDIA Jetson AGX Xavier Developer Kit.

After quickly achieving nearly 100 percent accuracy levels with their models, the teams moved into a full pilot phase. To augment the model to handle new and often challenging environmental conditions, Plainsight’s AI platform was used to quickly and easily annotate, build and deploy model improvements in preparation for a nationwide rollout.

As a member partner of NVIDIA Metropolis, an application framework that brings visual data and AI together, Plainsight continues to develop and improve models and AI pipelines to enable a national rollout with the U.S. division of JBS.

There, Plainsight uses a technology stack built on the NVIDIA EGX platform, incorporating NVIDIA-Certified Systems with NVIDIA T4 GPUs. Plainsight’s application processes multiple video streams per GPU in real time to count and monitor livestock as part of managing the accounting of livestock when received.

“Innovation is fundamental to the JBS culture, and the application of AI technology to improve efficiencies for daily work routines is important,” said Frederico Scarin do Amaral, Senior Manager Business Solutions of JBS USA. “Our partnership with Plainsight enhances the work of our team members and ensures greater accuracy of livestock count, improving our operations and efficiency, as well as allowing for continual improvements of animal welfare.”

Milking It

Inaccurate counting is only part of the problem the industry faces, however. Visual inspection of livestock is arduous and error-prone, causing late detection of diseases and increasing health risks to other animals.

The same symptoms humans can identify by looking at an animal, such as abnormal gait and behavior, can also be recognized by computer vision models built, trained and managed through Plainsight's vision AI platform.

The models identify symptoms of particular diseases based on gait and behavioral anomalies as livestock exit transport vehicles, stand in pens or gather in feeding areas.

“The cameras are an unblinking source of truth that can be very useful in identifying and alerting to problems otherwise gone unnoticed,” said Anchia. “The combination of vision AI, cameras and Plainsight’s AI platform can help enhance the vigilance of all participants in the cattle supply chain so they can focus more on their business operations and animal welfare improvements as opposed to error-prone manual counting.”

Legen-Dairy

In addition to a variety of other smart agriculture applications, Plainsight is using its vision AI platform to monitor and track cattle on the blockchain as digital assets.

The company is engaged in a first-of-its-kind co-innovation project with CattlePass, a platform that generates a secure and unique digital record of individual livestock, also known as a non-fungible token, or NFT.

Plainsight is applying its advanced vision AI models and platform for monitoring cattle health. The suite of advanced technologies, including genomics, health and proof-of-life records, will provide 100 percent verifiable proof of ownership and transparency into a complete living record of the animal throughout its lifecycle.

Cattle ranchers will be able to store the NFTs in a private digital wallet while collecting and adding metadata: feed, heartbeat, health, etc. This data can then be shared with permissioned viewers such as inspectors, buyers, vets and processors.

The data will remain with each animal throughout its life, through harvest, and will be accessible via a unique QR code printed on the beef packaging. This will allow visibility into the proof of life and quality of each cow, giving consumers unprecedented knowledge about its origin.

Archaeologist Digs Into Photogrammetry, Creates 3D Models With NVIDIA Technology

Archaeologist Daria Dabal is bringing the past to life, with an assist from NVIDIA technology.

Dabal works on various archaeological sites in the U.K., conducting field and post-excavation work. Over the last five years, photogrammetry — the use of photographs to create fully textured 3D models — has become increasingly popular in archaeology. Dabal has been expanding her skills in this area to create and render high-quality models of artifacts and sites.

With the help of her partner, Simon Kotowicz, Dabal is taking photogrammetry scans to the next level with a pair of NVIDIA graphics technologies.

Using NVIDIA RTX GPUs, Dabal can accelerate workflows to create and interact with 3D models, from recreating missing elements to adding animations or dropping them into VR. And with the NVIDIA Omniverse real-time collaboration and simulation platform, she can build stunning scenes around her projects using the platform’s library of existing assets.

All for the (Photo)Gram

Dabal quickly learned that good photogrammetry requires a good dataset. It’s important to take photos in the right pattern, so that every area of the site, monument or artifact is covered.

Once she has all the images she needs, Dabal builds the 3D models using Agisoft Metashape, a tool for photogrammetry pipelines. Dabal loads all her photos into the application, and the software turns that data into a point cloud, which is a rough collection of dots that represent the 3D model.
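
For readers curious what that pipeline looks like in code, Metashape also exposes it through a Python scripting API. Below is a condensed sketch of the photos-to-model steps, assuming the Metashape 1.x API (buildDenseCloud was renamed buildPointCloud in later releases); the paths are placeholders:

```python
import glob
import Metashape  # Agisoft Metashape Professional's built-in Python module

doc = Metashape.Document()
chunk = doc.addChunk()

# Load the photo set; the coverage pattern matters more than the raw count.
chunk.addPhotos(glob.glob("/data/site_photos/*.JPG"))  # placeholder path

# Find shared features across photos and solve for camera positions.
chunk.matchPhotos(downscale=1)
chunk.alignCameras()          # produces the sparse point cloud

# Densify: estimate depth per photo, then fuse into a dense point cloud.
chunk.buildDepthMaps(downscale=2)
chunk.buildDenseCloud()

# Mesh and texture the model for rendering.
chunk.buildModel(source_data=Metashape.DenseCloudData)
chunk.buildUV()
chunk.buildTexture(texture_size=4096)

doc.save("/data/site_model.psx")  # placeholder path
```

The alignment and densification stages are the GPU-heavy steps, which is where a faster card pays off most.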

Recently, Dabal and Kotowicz were approached by The Flow State XR, a company that delivers immersive experiences and applications. They were tasked with a new photogrammetry project that rocked their world: creating a 3D model of The Crobar, an iconic, heavy-metal bar located in the heart of Soho in London.

The Flow State XR sent images of The Crobar to the duo, who used the photos to model the bar from scratch, then created texture maps using Adobe Photoshop, Illustrator and Substance Painter. Dabal and Kotowicz are currently finishing the 3D model, but once the project is complete, The Flow State XR plans to use it as an interactive mobile app and a VR hangout for music fans.

The Crobar VR scene. Image courtesy of Dabal.

RTX Shapes 3D Modeling Workflows

Dabal uses an NVIDIA Quadro RTX 4000 GPU to significantly speed up her 3D modeling and rendering workflows. Processing a sparse cloud model with her older generation GPU would take two days, she said. With the upgraded RTX card, a similar point cloud takes only 10 hours.

With NVIDIA RTX, the millions of points on a model can be rotated and zoomed much more easily. The team can also use their 4K monitor to view the models, which they couldn’t do previously because it was difficult to navigate around the point clouds.

Dabal and Kotowicz have also experienced faster performance in creative apps like Autodesk 3ds Max. They can iterate more quickly and preview textures without needing to render as often.

“The NVIDIA RTX card has helped us achieve the model we need much faster,” said Dabal. “We’re spending less time in front of the workstation, and we’re getting to the rendering stage a lot quicker now.”

Textured 3D model of the Neath Abbey Ironworks. Image courtesy of Dabal.

Omniverse Makes Space for Sharing Assets, Building Scenes 

Dabal got the chance to use the advanced features of NVIDIA Omniverse when she entered the first “Create With Marbles” design competition. After exploring the platform, Dabal sees great potential in how it will transform traditional workflows and images in archaeology.

Dabal’s submission for the NVIDIA Omniverse “Create With Marbles” design competition.

Currently, there isn’t a tool that enables archaeologists to quickly upload assets in one place, and share or collaborate with others on the same projects.

With an open platform like Omniverse, archaeologists have a virtual space where they can easily upload photogrammetry artifacts and share assets with one another, no matter what location they’re working from. Or they could place models in Omniverse and create a stunning scene by adding extra elements, like trees or farm fields.

“Right now, most archaeological 3D models look fake, floating in their black backgrounds. It would take too much time to add the extras, but it would be so easy in Omniverse,” said Dabal. “When I was in Omniverse, I really enjoyed taking premade objects and moving them around to create a scene. It was super easy.”

Dabal says that if archaeologists had access to a library of extra assets, as well as all their own photogrammetry scans, “it would be game-changing.”

With Omniverse, archaeologists can share their projects with others around the world as well as simulate fire or weather conditions to bring their 3D models and sites to life.

Explore more of Dabal’s work, and learn more about NVIDIA RTX and NVIDIA Omniverse.

And join NVIDIA at SIGGRAPH, where we’ll showcase the technologies driving the future of graphics. We’ll announce the winners of the latest “Create With Marbles: Marvelous Machines” contest, and premiere a documentary highlighting how Omniverse was used to create the NVIDIA GTC 2021 keynote.

Ready for Prime Time: Plus to Deliver Autonomous Truck Systems Powered by NVIDIA DRIVE to Amazon

Your Amazon Prime delivery just got smarter.

Autonomous trucking company Plus recently signed a deal with Amazon to provide at least 1,000 self-driving systems to retrofit on the e-commerce giant’s delivery fleet. These systems are powered by NVIDIA DRIVE Xavier for high-performance, energy-efficient and centralized AI compute.

The agreement follows Plus’ announcement of going public via SPAC, or special purpose acquisition company.

Amazon — which leads the U.S. e-tail market, counting $386 billion in net revenue in 2020 — has been investing heavily in autonomous and electric vehicle technology. Last year, it acquired robotaxi company and NVIDIA DRIVE ecosystem member Zoox for $1.3 billion.

These deals signal the transition to autonomous systems in both personal and commercial transportation at a massive scale.

An A-Plus Platform

The current generation PlusDrive autonomous trucking platform was developed for level 4 autonomous driving with a human driver still at the wheel, using the NVIDIA DRIVE Xavier system-on-a-chip at its core.

Xavier is the first-ever production, automotive-grade SoC for autonomous capabilities. It incorporates six different types of processors: CPU, GPU, deep learning accelerator, programmable vision accelerator, image signal processor and stereo/optical flow accelerator.

Architected for safety, Xavier incorporates the redundancy and diversity necessary for safe autonomous operation.

This high-performance compute enables the PlusDrive system to perform surround perception with an array of radar, lidar and camera sensors, running a variety of deep neural networks simultaneously and in real time.

Trucking Ahead

Plus’s deal with Amazon is just the beginning of the march toward widespread autonomous delivery.

The self-driving company has already announced plans to transition to the next generation of AI compute, NVIDIA DRIVE Orin, beginning in 2022. Plus has received more than 7,000 orders and pre-orders for this upcoming system.

Additionally, Amazon has been granted a warrant to buy a 20 percent stake in Plus after it spends more than $150 million, opening up the possibility for even deeper integration of the company’s technology with the e-retailer’s delivery fleet.

And with NVIDIA DRIVE at their core, these autonomous systems will be able to handle the AI processing necessary to deliver safe, efficient and continuously improving trucks at scale.

August Arrivals: GFN Thursday Brings 34 Games to GeForce NOW This Month

It’s a new month for GFN Thursday, which means a new month full of games on GeForce NOW.

August brings a wealth of great new PC game launches to the cloud gaming service, including King’s Bounty II, Humankind and NARAKA: BLADEPOINT.

In total, 13 titles are available to stream this week. They’re just a portion of the 34 new games coming to the service this month.

Members will also get to stream upcoming content updates for popular free-to-play titles like Apex Legends and Tom Clancy’s Rainbow Six Siege as soon as they release.

Fit for a King

It’s time to save a kingdom. Members will be able to stream King’s Bounty II (Steam) when it releases for PC later this month on GeForce NOW.

Be a savior to a kingdom overshadowed by conspiracies, sabotage and necromancy in King’s Bounty II.

Darkness has descended over the world of Nostria in this exciting RPG sequel. Gamers will be able to play as one of three heroes, rescuing and building a personal army in a journey of leadership, survival and sacrifice. Fight for the future and outsmart enemies in turn-based combat. Every action has profound and lasting consequences in the fight to bring peace and order to the land.

Be the kingdom’s last hope and live out an adventure on August 24. Preorder on Steam to get some exclusive bonuses.

More Fun for Free This Month

This month also comes with new content for some of the most popular free-to-play titles streaming on GeForce NOW. Members can look forward to experiencing the latest in Apex Legends and Tom Clancy’s Rainbow Six Siege.

Get ready, Legends. It’s a new season. “Apex Legends Emergence” has arrived.

“Apex Legends: Emergence,” the latest season of the wildly popular free-to-play game from Respawn and EA, launched on August 3 and brought in a new Legend, weapon and Battle Pass as well as some awesome map updates.

The newest Legend, Seer, is here and ready to spot opportunities that others may miss. Players can also enjoy a new midrange weapon, the Rampage LMG, a slower but more powerful variation of the Spitfire. To top it all off, the newest map updates reveal a familiar landscape torn at the seams: the decimated World’s Edge is available now in Apex Legends and streaming on GeForce NOW.

The latest event in Tom Clancy’s Rainbow Six Siege features a new time-limited gameplay mode, a challenge to unlock a free Nomad Hive Mind set, and more. Year 6 Season 2 kicked off with a special “Rainbow Six Siege: Containment” event that drops players into a Consulate map overrun by the Chimera parasite, in a new game mode called Nest Destruction. Members will be able to stream the Containment event from August 3 to August 24.

Here This Week

Follow the grim tale of two siblings through some of the darkest hours in history in A Plague Tale: Innocence.

Starting off the month, members can look for the following titles available to stream this GFN Thursday:

August’s Newest Additions

Experience a game where melee meets battle royale in NARAKA: BLADEPOINT.

This month is packed with more games coming to GeForce NOW over the course of August, including nine new titles:

More from July

Massive PVP sieges? Check. Raiding parties? Check. Playing as an epic Half-Giant Champion? Heck yes. Check out Crowfall, streaming on GeForce NOW.

On top of the 36 games that were announced and released in July, an extra 24 titles joined the GeForce NOW library over the month:

Finally, here’s a special question from our friends on the GeForce NOW Twitter feed:

Wanted: A strategic challenge.

Drop your favorite strategy games below. 👇

🌩 NVIDIA GeForce NOW (@NVIDIAGFN) August 4, 2021
