Sharpening Its Edge: U.S. Postal Service Opens AI Apps on Edge Network

In 2019, the U.S. Postal Service needed a way to identify and track items in its torrent of more than 100 million pieces of daily mail.

Ryan Simpson, a USPS AI architect, had an idea: expand an image analysis system a postal team was already developing into something much broader that could tackle this needle-in-a-haystack problem.

With edge AI servers strategically located at its processing centers, he believed USPS could analyze the billions of images each center generated. The resulting insights, expressed in a few key data points, could be shared quickly over the network.

Simpson, half a dozen architects at NVIDIA and others designed the deep-learning models needed in a three-week sprint that felt like one long hackathon. The work was the genesis of the Edge Computing Infrastructure Program (ECIP, pronounced EE-sip), a distributed edge AI system that’s up and running on the NVIDIA EGX platform at USPS today.

An AI Platform at the Edge

It turns out edge AI is a kind of stage for many great performances. ECIP is already running a second app that acts like automated eyes, tracking items for a variety of business needs.

Cameras mounted on the sorting machines capture addresses, barcodes and other data such as hazardous materials symbols. Courtesy of U.S. Postal Service.

“It used to take eight or 10 people several days to track down items; now it takes one or two people a couple of hours,” said Todd Schimmel, the manager who oversees USPS systems including ECIP, which uses NVIDIA-Certified edge servers from Hewlett Packard Enterprise.

Another analysis was even more telling: a computer vision task that would have required two weeks on a network of servers with 800 CPUs can now be done in 20 minutes on the four NVIDIA V100 Tensor Core GPUs in one of the HPE Apollo 6500 servers.

Today, each edge server processes 20 terabytes of images a day from more than 1,000 mail processing machines. Open source software from NVIDIA, the Triton Inference Server, acts as the digital mailperson, delivering the AI models each of the 195 systems needs, when and how it needs them.

Next App for the Edge

USPS put out a request for what could be the next app for ECIP, one that uses optical character recognition (OCR) to streamline its imaging workflow.

“In the past, we would have bought new hardware, software — a whole infrastructure for OCR; or if we used a public cloud service, we’d have to get images to the cloud, which takes a lot of bandwidth and has significant costs when you’re talking about approximately a billion images,” said Schimmel.

Today, the new OCR use case will live as a deep learning model in a container on ECIP managed by Kubernetes and served by Triton.
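
The specifics of the USPS models aren’t public, but here’s a minimal sketch of how an application on an ECIP node might query a Triton-served OCR model, using Triton’s standard Python HTTP client. The model name, tensor names and input shape are assumptions for illustration:

```python
import numpy as np
import tritonclient.http as httpclient  # pip install tritonclient[http]

# Connect to the Triton Inference Server running on the local edge node.
client = httpclient.InferenceServerClient(url="localhost:8000")

# Hypothetical name -- the real USPS model and its tensors are not public.
MODEL_NAME = "mail_ocr"

# A batch of one grayscale mail-piece image, resized to the model's input size.
image = np.random.rand(1, 1, 512, 512).astype(np.float32)

inputs = [httpclient.InferInput("IMAGE", list(image.shape), "FP32")]
inputs[0].set_data_from_numpy(image)
outputs = [httpclient.InferRequestedOutput("TEXT")]

# Triton routes the request to the right model version and framework backend.
result = client.infer(model_name=MODEL_NAME, inputs=inputs, outputs=outputs)
print(result.as_numpy("TEXT"))
```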

The same systems software smoothed the initial deployment of ECIP in the early weeks of the pandemic. Operators rolled out containers to get the first systems running as others were being delivered, updating them as the full network of nearly 200 nodes was installed.

“The deployment was very streamlined,” Schimmel said. “We awarded the contract in September 2019, started deploying systems in February 2020 and finished most of the hardware by August — the USPS was very happy with that,” he added.

Triton Expedites Model Deliveries

Part of the software magic dust under ECIP’s hood, Triton automates the delivery of different AI models to different systems that may have different versions of GPUs and CPUs supporting different deep-learning frameworks. That saves a lot of time for edge AI systems like the ECIP network of almost 200 distributed servers.
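
Triton can also load and unload models on the fly from a shared model repository, which suggests how a fleet like ECIP’s could be kept in sync. Here’s a hedged sketch using Triton’s model-control API; the server address and model names are illustrative, and the server would need to run with explicit model control enabled:

```python
import tritonclient.http as httpclient

# One edge node; a fleet-wide rollout would loop over every server.
client = httpclient.InferenceServerClient(url="edge-node-001:8000")

# With tritonserver --model-control-mode=explicit, models can be
# loaded and unloaded at runtime without restarting the server.
for model in ["barcode_detector", "hazmat_classifier"]:  # illustrative names
    client.load_model(model)  # pulls the latest version from the repository
    if not client.is_model_ready(model):
        raise RuntimeError(f"{model} failed to load")

# List every model in the repository and its current state.
print(client.get_model_repository_index())
```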

AI algorithms were developed on NVIDIA DGX servers at a U.S. Postal Service Engineering facility. Courtesy of NVIDIA.

The app that checks for mail items alone requires coordinating the work of more than a half dozen deep-learning models, each checking for specific features. And operators expect to enhance the app with more models enabling more features in the future.

“The models we have deployed so far help manage the mail and the Postal Service — it helps us maintain our mission,” Schimmel said.

A Pipeline of Edge AI Apps

So far, departments across USPS from enterprise analytics to finance and marketing have spawned ideas for as many as 30 applications for ECIP. Schimmel hopes to get a few of them up and running this year.

One would automatically check if a package carries the right postage for its size, weight and destination. Another one would automatically decipher a damaged barcode and could be online as soon as this summer.

“This has a benefit for us and our customers, letting us know where a specific parcel is at — it’s not a silver bullet, but it will fill a gap and boost our performance,” he said.

The work is part of a broader effort at USPS to explore its digital footprint and unlock the value of its data in ways that benefit customers.

“We’re at the very beginning of our journey with edge AI. Every day, people in our organization are thinking of new ways to apply machine learning to new facets of robotics, data processing and image handling,” he said.

Learn more about the benefits of edge computing and the NVIDIA EGX platform, as well as how NVIDIA’s edge AI solutions are transforming every industry.

Pictured at top: Postal Service employees perform spot checks to ensure packages are properly handled and sorted. Courtesy of U.S. Postal Service.

GFN Thursday: 61 Games Join GeForce NOW Library in May

May’s shaping up to be a big month for bringing fan favorites to GeForce NOW. And since it’s the first week of the month, this week’s GFN Thursday is all about the games members can look forward to this month.

In total, we’re adding 61 games to the GeForce NOW library in May, including 17 coming this week.

Joining This Week

This week’s additions include games from Remedy Entertainment, a classic Wild West FPS and a free title on Epic Games Store. Here are a few highlights:

Alan Wake (Steam)

A Dark Presence stalks the small town of Bright Falls, pushing Alan Wake to the brink of sanity in his fight to unravel the mystery and save his love.

Call of Juarez: Gunslinger (Steam)

From the dust of a gold mine to the dirt of a saloon, Call of Juarez Gunslinger is a real homage to Wild West tales. Live the epic and violent journey of a ruthless bounty hunter on the trail of the West’s most notorious outlaws.

Pine (Free on Epic Games Store until May 13)

An open-world action-adventure game set in a simulated world in which humans never reached the top of the food chain. Fight with or against a variety of species as you make your way to a new home for your human tribe.

Members can also look for the following titles later today:

Push your bike to the limit in MotoGP21, joining the GeForce NOW library this week.

More in May

This week is just the beginning. We have a giant list of titles joining GeForce NOW throughout the month.

In Case You Missed It

In April, we added 27 more titles beyond those announced on April 1. Check out these games, streaming straight from the cloud.

Time to start planning your month, members. What are you going to play? Let us know on Twitter or in the comments below.

Putting the AI in Retail: Walmart’s Grant Gelven on Prediction Analytics at Supercenter Scale

With only one U.S. state without a Walmart supercenter — and over 4,600 stores across the country — the retail giant’s prediction analytics work with data on an enormous scale.

Grant Gelven, a machine learning engineer at Walmart Global Tech, joined host Noah Kravitz for the latest episode of the NVIDIA AI Podcast.

Gelven spoke about the big data and machine learning methods making it possible to improve everything from the customer experience to stocking to item pricing.

Gelven’s most recent project has been a dynamic pricing system, which reduces food waste by pricing perishable goods so that they’re sure to sell. This improves suppliers’ ability to deliver the correct volume of items, helps customers buy what they need and lessens the company’s impact on the environment.
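
Walmart hasn’t published how its system works, but the idea Gelven describes can be sketched in toy form: mark a perishable item down just enough that predicted demand at the new price clears the remaining stock before it expires. All numbers and the demand model below are invented for illustration:

```python
def markdown_price(base_price, stock, days_left, demand_at_base, elasticity=1.5):
    """Toy dynamic-pricing rule for perishables (illustrative only).

    Lowers the price until the units predicted to sell before expiry
    cover the remaining stock, using a constant-elasticity demand curve.
    """
    price = base_price
    while price > 0.2 * base_price:  # cap the markdown at 80 percent off
        # Constant-elasticity demand: sales rise as the price falls.
        daily_demand = demand_at_base * (base_price / price) ** elasticity
        if daily_demand * days_left >= stock:
            break
        price -= 0.05 * base_price  # step the price down 5 percent at a time
    return round(price, 2)

# Example: 120 units, 2 days left, 40 units/day sell at the $3.00 base price.
print(markdown_price(3.00, stock=120, days_left=2, demand_at_base=40))  # 2.25
```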

The models that Gelven’s team works on are extremely large, with hundreds of millions of parameters. They’re impossible to run without GPUs, which help accelerate both dataset preparation and training.

The improvements machine learning has made to Walmart’s retail predictions reach far beyond streamlining business operations. Gelven points out that it’s ultimately helped customers worldwide get the essential goods they need, by allowing enterprises to react to crises and changing market conditions.

Key Points From This Episode:

  • Gelven’s goal for enterprise AI and machine learning models isn’t just to solve single use case problems, but to improve the entire customer experience through a complex system of thousands of models working simultaneously.
  • Five years ago, the time from concept to model to operations took roughly a year. Gelven explains that GPU acceleration, open-source software, and various other new tools have drastically reduced deployment times.

Tweetables:

“Solving these prediction problems really means we have to be able to make predictions about hundreds of millions of distinct units that are distributed all over the country.” — Grant Gelven [3:17]

“To give customers exactly what they need when they need it, I think is probably one of the most important things that a business or service provider can do.” — Grant Gelven [16:11]

You Might Also Like:

Focal Systems Brings AI to Grocery Stores

CEO Francois Chaubard explains how Focal Systems is applying deep learning and computer vision to automate portions of retail stores to streamline store operations and get customers in and out more efficiently.

Credit Check: Capital One’s Kyle Nicholson on Modern Machine Learning in Finance

Kyle Nicholson, a senior software engineer at Capital One, talks about how modern machine learning techniques have become a key tool for financial and credit analysis.

HP’s Jared Dame on How AI, Data Science Are Driving Demand for Powerful New Workstations

Jared Dame, Z by HP’s director of business development and strategy for AI, data science and edge technologies, speaks about the role HP’s workstations play in cutting-edge AI and data science.

Tune in to the AI Podcast

Get the AI Podcast through iTunes, Google Podcasts, Google Play, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn. If your favorite isn’t listed here, drop us a note.

Make the AI Podcast Better

Have a few minutes to spare? Fill out this listener survey. Your answers will help us make a better podcast.

AI Gone Global: Why 20,000+ Developers from Emerging Markets Signed Up for GTC

Major tech conferences are typically hosted in highly industrialized countries. But the appetite for AI and data science resources spans the globe — with an estimated 3 million developers in emerging markets.

Our recent GPU Technology Conference — virtual, free to register and offering 24/7 content — for the first time featured a dedicated track on AI in emerging markets. The conference attracted a record 20,000+ developers, industry leaders, policymakers and researchers in emerging markets across 95 countries.

These registrations accounted for around 10 percent of all signups for GTC. We saw a 6x jump from last spring’s GTC in registrations from Latin America, a 10x boost in registrations from the Middle East and a nearly 30x jump in registrations from African countries.

Nigeria alone accounted for more than 1,300 signups, and developers from 30 countries in Latin America and the Caribbean registered for the conference.

These attendees weren’t simply absorbing high-level content — they were leading conversations.

Dozens of startup founders from emerging markets shared their innovations. Community leaders, major tech companies and nonprofits discussed their work to build resources for developers in the Caribbean, Latin America and Africa. And hands-on labs, training and networking sessions offered opportunities for attendees to boost their skills and ask questions of AI experts.

We’re still growing our emerging markets initiatives to better connect with developers worldwide. As we do so, we’ll incorporate three key takeaways from this GTC:

  1. Remove Barriers to Access

While in-person AI conferences typically draw attendees from around the world, these opportunities aren’t equally accessible to developers from every region.

Though Africa has the world’s fastest-growing community of AI developers, visa challenges have in recent years prevented some African researchers from attending AI conferences in the U.S. and Canada. And the cost of conference registrations, flights and hotel accommodations in major tech hubs can be prohibitive for many, even at discounted rates.

By making GTC21 virtual and free to register, we were able to welcome thousands of attendees and presenters from countries including Kenya, Zimbabwe, Trinidad and Tobago, Ghana and Indonesia.

  2. Spotlight Region-Specific Challenges, Successes

Opening access is just the first step. A developer from Nigeria faces different challenges than one in Norway, so global representation in conference speakers can help provide a diversity of perspectives. Relevant content that’s localized by topic or language can help cater to the unique needs of a specific audience and market.

The Emerging Markets Pavilion at GTC, hosted by NVIDIA Inception, our acceleration platform for AI startups, featured companies developing augmented reality apps for cultural tourism in Tunisia, smart video analytics in Lebanon and data science tools in Mexico, to name a few examples.

Several panel discussions brought together public sector reps, United Nations leads, community leaders and developer advocates from NVIDIA, Google, Amazon Web Services and other companies for discussions on how to bolster AI ecosystems around the world. And a session on AI in Africa focused on ways to further AI and data science education for a community that mostly learns through non-traditional pathways.

  3. Foster Opportunities to Learn and Connect

Developer groups in emerging markets are growing rapidly, with many building skills through online courses or community forums, rather than relying on traditional degree programs. One way we’re supporting this is by sponsoring AI hackathons in Africa with Zindi, an online forum that brings together thousands of developers to solve challenges for companies and governments across the continent.

The NVIDIA Developer Program includes tens of thousands of members from emerging markets — but there are hundreds of thousands more developers in these regions poised to take advantage of AI and accelerated applications to power their work.

To learn more about GTC, watch the replay of NVIDIA CEO Jensen Huang’s keynote address. Join the NVIDIA Developer Program for access to a wide variety of tools and training to accelerate AI, HPC and advanced graphics applications.

Around the World in AI Ways: Video Explores Machine Learning’s Global Impact

You may have used AI in your smartphone or smart speaker, but have you seen how it comes alive in an artist’s brush stroke, how it animates artificial limbs or assists astronauts in Earth’s orbit?

The latest video in the “I Am AI” series — the annual scene setter for the keynote at NVIDIA’s GTC — invites viewers on a journey through more than a dozen ways this new and powerful form of computing is expanding horizons.

Perhaps your smart speaker woke you up this morning to music from a distant radio station. Maybe you used AI in your smartphone to translate a foreign phrase in a book you’re reading.

A View of What’s to Come

These everyday use cases are becoming almost commonplace. Meanwhile, the frontiers of AI are extending to advance more critical needs.

In healthcare, the Bionic Vision Lab at UC Santa Barbara uses deep learning and virtual prototyping on NVIDIA GPUs to develop models of artificial eyes. These models let researchers explore the potential and limits of a design by viewing it through a virtual-reality headset.

At Canada’s University of Waterloo, researchers are using AI to develop autonomous controls for exoskeleton legs that help users walk, climb stairs and avoid obstacles. Wearable cameras filter video through AI models trained on NVIDIA GPUs to recognize surrounding features such as stairs and doorways and then determine the best movements to take.

“Similar to autonomous cars that drive themselves, we’re designing autonomous exoskeletons that walk for themselves,” Brokoslaw Laschowski, a lead researcher on the ExoNet project, said in a recent blog.
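
The pipeline described here — classify what the wearable camera sees, then pick a locomotion mode — can be roughed out in a few lines of PyTorch. The backbone, class names and mode mapping below are stand-ins, not the ExoNet project’s actual code:

```python
import torch
import torchvision.transforms as T
from torchvision.models import mobilenet_v2

# Illustrative environment classes; the real ExoNet label set is larger.
CLASSES = ["level_ground", "incline_stairs", "decline_stairs", "doorway"]
MODE_FOR = {
    "level_ground": "walk",
    "incline_stairs": "stair_ascent",
    "decline_stairs": "stair_descent",
    "doorway": "slow_walk",
}

# Stand-in backbone; the published work trains its own CNN on NVIDIA GPUs.
model = mobilenet_v2(num_classes=len(CLASSES)).eval()
preprocess = T.Compose([T.Resize((224, 224)), T.ToTensor()])

def choose_mode(frame):
    """Map one camera frame (a PIL image) to an exoskeleton locomotion mode."""
    x = preprocess(frame).unsqueeze(0)  # add a batch dimension
    with torch.no_grad():
        env = CLASSES[model(x).argmax(dim=1).item()]
    return MODE_FOR[env]
```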

Watch New Worlds Come to Life

In “I Am AI,” we meet Sofia Crespo, who calls herself a generative artist. She blends and morphs images of jellyfish, corals and insects in videos that celebrate the diversity of life, using an emerging form of AI called generative adversarial networks (GANs) and neural network models like GPT-2.

A fanciful creature created by artist Sofia Crespo using GANs.

“Can we use these technologies to dream up new biodiversities that don’t exist? What would these creatures look like?” she asks in a separate video describing her work.

See How AI Guards Ocean Life

“I Am AI” travels to Hawaii, Morocco, the Seychelles and the U.K., where machine learning is on the job protecting marine life from very real threats.

In Africa, the ATLAN Space project uses a fleet of autonomous drones with AI-powered computer vision to detect illegal fishing and ships dumping oil into the sea.

On the other side of the planet, the Maui dolphin is on the brink of extinction, with only 63 adults in the latest count. A nonprofit called MAUI63 uses AI in drones to identify individuals by their fin markings, tracking their movements so policymakers can take steps such as creating marine sanctuaries to protect them.

Taking the Planet’s Temperature

AI is also at work developing the big picture in planet ecology.

The video spotlights the Plymouth Marine Laboratory in the U.K., where researchers use an NVIDIA DGX system to analyze data gathered on the state of our oceans. Their work contributes to the U.N. Sustainable Development Goals and other efforts to monitor the health of the seas.

A team of Stanford researchers is using AI to track wildfire risks. The video provides a snapshot of their work opening doors to deeper understandings of how ecosystems are affected by changes in water availability and climate.

Beam Me Up, NASA

The sky’s the limit with the Spaceborne Computer-2, a supercharged system made by Hewlett Packard Enterprise now installed in the International Space Station. It packs NVIDIA GPUs that astronauts use to monitor their health in real time and track objects in space and on Earth like a cosmic traffic copter.

Astronauts use Spaceborne Computer-2 to run AI experiments on the ISS.

One of the coolest things about Spaceborne Computer-2 is you can suggest an experiment to run on it. HPE and NASA extended an open invitation for proposals, so Earth-bound scientists can expand the use of AI in space.

If these examples don’t blow the roof off your image of where machine learning might go next, check out the full “I Am AI” video below. It includes several more examples of other AI projects in art, science and beyond.

Update Complete: GFN Thursday Brings New Features, Games and More

No Thursday is complete without GFN Thursday, our weekly celebration of the news, updates and great games GeForce NOW members can play — all streaming from the cloud across nearly all of your devices.

This week’s updates to the GeForce NOW app and experience include new features, faster session loading and a bunch of new games joining the GFN library.

… Better, Faster, Stronger

There’s a lot happening behind the scenes as our team continuously works to improve your cloud-gaming experience with each session.

Our cloud-gaming engineers optimize games, develop new features and continually refine the user experience in the GeForce NOW app, which is now rolling out version 2.0.29 with several improvements.

Game in the Fast Lane

From the Settings pane in the GeForce NOW app, you can link your Epic Games Store account to take advantage of some new features.

One feature we’re currently testing with our Founders and Priority members is preloading, which loads parts of your game before you arrive so your launch times will be faster. Members testing this feature should see sessions launch up to a minute faster from the moment they click play in the GeForce NOW app. Free members are not guaranteed preloaded sessions and may see slightly longer startup times.

To enable the benefits of preloading, we’re also testing a new account linking feature that lets you play games without having to log in to your game store account. Both the preloading and account linking features are currently enabled for Fortnite’s PC build on GeForce NOW. We plan to expand these features to more GeForce NOW games in the future.

PC, macOS and Chromebook users can enable the new account linking features from a new tile on the My Library row in-app. This takes you to the Settings pane, where you can turn on account linking for Fortnite under Connections. Once complete, you won’t need to log in to your Epic Account to play Fortnite’s PC build on any other supported GeForce NOW platform, and you’ll be eligible for preloaded sessions.

Find What You’re Looking For

We’re also improving search results in the app to make managing your library easier and get members into games faster. Searching for a game to add to your library now returns full-page results, providing easier access to the game’s details and a quicker way to add it.

The newest version of the GeForce NOW app includes improved search, account linking for Epic Games Store, and a whole lot more.

If you’re playing on GeForce NOW from a Chrome browser, we’ve recently added our in-game overlay. The overlay lets members configure many in-stream features, such as FreeStyle filters, network labels, microphone toggles and ending game sessions. To bring up the overlay, press Ctrl + G on PC and Chromebook, or Cmd + G on macOS.

And no GeForce NOW app update would be complete without squashed bugs. To get the full lowdown, check out the version 2.0.29 Release Highlights from the Settings pane in the app.

These updates are just a few of the improvements we’re working on. We have a ton more in store, and every update is designed to make sure that when GFN members play their favorite PC games from our cloud servers, they’re getting the best possible experience.

Rolling in the Deep (Silver)

We recently spoke with our friends at Deep Silver about new updates coming for KING Art Games’ Iron Harvest and 4A Games’ Metro Exodus: Enhanced Edition, both of which will be supported on GeForce NOW. Catch up on all the details here.

Get Your Game On

The latest in the classic R-Type series comes to GeForce NOW this week.

Finally, below are the games joining the GeForce NOW library this week. 

What do you think of the newest GeForce NOW updates? Let us know on Twitter or in the comments below.

GFN Thursday: Rolling in the Deep (Silver) with Major ‘Metro Exodus’ and ‘Iron Harvest’ Updates

GFN Thursday reaches a fever pitch this week as we take a deeper look at two major updates coming to GeForce NOW from Deep Silver in the weeks ahead.

Catching Even More Rays

Metro Exodus was one of the first RTX games added to GeForce NOW. It’s still one of the most-played RTX games on the service. Back in February, developer 4A Games shared news of an Enhanced Edition coming to PC that takes advantage of a new Fully Ray-Traced Lighting Pipeline.

Today, we can share that it’s coming to GeForce NOW, day-and-date, on May 6.

The PC Enhanced Edition features significant updates to Metro Exodus’ real-time ray tracing implementations. Players will see improvements to the groundbreaking Ray-Traced Global Illumination (RTGI) featured in the base game, as well as new updates for the Ray-Traced Emissive Lighting techniques pioneered in “The Two Colonels” expansion.

The PC Enhanced Edition also includes additional ray-tracing features, like Advanced Ray-Traced Reflections, and support for NVIDIA DLSS 2.0 on NVIDIA hardware — including GeForce NOW.

The list of RTX features coming to the PC Enhanced Edition is massive:

  • Fully ray-traced lighting throughout — every light source is now ray traced
  • Next-gen ray tracing and denoising
  • Next-gen temporal reconstruction technology
  • Per-pixel ray-traced global illumination
  • Ray-traced emissive surfaces with area shadows
  • Infinite number of ray-traced light bounces
  • Atmosphere and transparent surfaces receiving ray-traced bounced lighting
  • Full ray-traced lighting model support with color bleeding for every light source
  • Advanced ray-traced reflections
  • DX12 Ultimate support, including DXR 1.1 and variable rate shading
  • GPU FP16 support and thousands of optimized shaders
  • Support for DLSS 2.0
  • Addition of FOV (field of view) slider to main game options

In short, the game is going to look even more amazing. And, starting next week, members who own Metro Exodus will have access to the update on GeForce NOW. But remember, to access the enhanced visuals you’ll need to be a Founder or Priority member.

Don’t own Metro Exodus yet? Head to Steam or the Epic Games Store and get ahead of the game.

Metro Exodus PC Enhanced Edition includes updated real-time ray tracing that GeForce NOW Founders and Priority members can experience across nearly all of their devices.

A Strategic Move

Iron Harvest, the classic real-time strategy game with an epic single-player campaign, multiplayer and co-op modes, set in the alternate reality of 1920+, is getting new DLC on May 27. Dubbed “Operation Eagle,” the update brings a new faction, the USA, to the game’s alternate version of World War I.

You’ll guide this new faction through seven new single-player missions, while learning how to use the game’s new Aircraft units across all of the game’s playable factions, including Polania, Saxony and Rusviet.

“Operation Eagle” also adds new multiplayer maps that RTS fans will love, and the new USA campaign can be played cooperatively with friends.

Iron Harvest’s “Operation Eagle” DLC will be available on GeForce NOW day-and-date. You can learn more about the update here.

Don’t Take Our Word for It

The team at Deep Silver was gracious enough to answer a few questions we had about these great updates.

Q: We’re suckers for beautifully ray-traced PC games, on a scale from 1-to-OMG, how great does Metro Exodus PC Enhanced Edition look?

A: We’re quietly confident that the Metro Exodus PC Enhanced Edition will register at the OMG end of the scale, but you don’t need to take our word for it – Digital Foundry declared that, “Metro Exodus’ PC Enhanced Edition’s Global Illumination produces without a doubt, the best lighting I’ve ever witnessed in a video game.”

Q: What does it mean for the team to leverage GeForce NOW to bring these new real-time ray-tracing updates in Metro Exodus PC Enhanced Edition to gamers across their devices?

A: We believe hardware-accelerated ray-tracing GPUs are the future, but right now the number of players with ray-tracing-capable GPUs is a small, albeit growing, percentage of the total PC audience. GeForce NOW will give those players yet to upgrade their gaming hardware a glimpse into the future.

Q: How does “Operation Eagle” build on the story in Iron Harvest? We’re excited to try this new faction.

A: The American Union of Usonia stayed out of the Great War and became an economic and military powerhouse, unnoticed by Europe’s old elites. Relying heavily on mighty “Diesel Birds,” the Usonia faction brings more variety to the Iron Harvest battlefields. Additional new buildings and new units for all factions will enhance the Iron Harvest roster to give players even more options to find the perfect attack and defence strategy.

Q: How do you see GeForce NOW expanding the audience of gamers who can play Metro Exodus and Iron Harvest?

A: We’re committed to bringing the Metro Exodus experience to as many platforms as we can without compromising on the quality of the experience; GeForce NOW puts our state-of-the-art ray-traced version of Metro Exodus into the hands of gamers regardless of their own hardware setup.

Q: Is there anything else you’d want to share with your fans who are streaming Metro Exodus and Iron Harvest from the cloud?

A: Watch out for the jump scares. You have been warned.

There’s probably a jump scare coming here, right? GeForce NOW members can find out on May 6.

GFN Thursday

In addition to rolling with a pair of Deep Silver announcements this week, members get their regular dose of GFN Thursday goodness. Read more about that and other updates this week here.

Getting excited for more ray-traced goodness in Metro Exodus? Can’t wait to get your hands on “Operation Eagle”? Let us know on Twitter or in the comments below.

Perceiving with Confidence: How AI Improves Radar Perception for Autonomous Vehicles

Editor’s note: This is the latest post in our NVIDIA DRIVE Labs series, which takes an engineering-focused look at individual autonomous vehicle challenges and how NVIDIA DRIVE addresses them. Catch up on all of our automotive posts, here.

Autonomous vehicles don’t just need to detect the moving traffic that surrounds them — they must also be able to tell what isn’t in motion.

At first glance, camera-based perception may seem sufficient to make these determinations. However, low lighting, inclement weather or conditions where objects are heavily occluded can affect cameras’ vision. This means diverse and redundant sensors, such as radar, must also be capable of performing this task. However, additional radar sensors that leverage only traditional processing may not be enough.

In this DRIVE Labs video, we show how AI can address the shortcomings of traditional radar signal processing in distinguishing moving and stationary objects to bolster autonomous vehicle perception.

In traditional radar processing, the sensor bounces signals off objects in the environment, and software analyzes the strength and density of the reflections that come back. If a sufficiently strong and dense cluster of reflections is detected, classical radar processing can determine it’s likely some kind of large object. If that cluster also happens to be moving over time, the object is probably a car.

While this approach can work well for inferring a moving vehicle, the same may not be true for a stationary one. In this case, the object produces a dense cluster of reflections, but doesn’t move. According to classical radar processing, this means the object could be a railing, a broken-down car, a highway overpass or something else entirely. The approach often has no way of telling them apart.
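
That limitation is easy to see in code. Classical processing boils down to clustering strong reflections and checking whether each cluster moves; here’s a minimal sketch with DBSCAN, using invented thresholds and synthetic returns:

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Each return: x, y position (m), radial velocity (m/s), signal strength (dB).
# Synthetic points standing in for one radar sweep.
returns = np.array([
    [10.2, 1.1, -8.0, 22.0], [10.4, 1.3, -8.1, 25.0], [10.3, 0.9, -7.9, 24.0],
    [35.0, 4.0,  0.0, 30.0], [35.2, 4.1,  0.0, 28.0],  # strong but stationary
])

strong = returns[returns[:, 3] > 20.0]            # keep dense, strong returns
labels = DBSCAN(eps=1.0, min_samples=2).fit_predict(strong[:, :2])

for cluster_id in set(labels) - {-1}:             # -1 marks DBSCAN noise
    cluster = strong[labels == cluster_id]
    moving = abs(cluster[:, 2].mean()) > 0.5      # nonzero Doppler => moving
    # A moving cluster is probably a car. A stationary one is ambiguous:
    # a railing, a stalled car or an overpass all look alike to this logic.
    print(cluster_id, "moving object" if moving else "stationary: ambiguous")
```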

Introducing Radar DNN

One way to overcome the limitations of this approach is with AI in the form of a deep neural network (DNN).

Specifically, we trained a DNN to detect moving and stationary objects, as well as accurately distinguish between different types of stationary obstacles, using data from radar sensors.

Training the DNN first required overcoming radar data sparsity problems. Since radar reflections can be quite sparse, it’s practically infeasible for humans to visually identify and label vehicles from radar data alone.

Figure 1. Example of propagating bounding box labels for cars from the lidar data domain into the radar data domain.

Lidar, however, can create a 3D image of surrounding objects using laser pulses. Thus, ground truth data for the DNN was created by propagating bounding box labels from the corresponding lidar dataset onto the radar data as shown in Figure 1. In this way, the ability of a human labeler to visually identify and label cars from lidar data is effectively transferred into the radar domain.
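
A simplified version of that label transfer, assuming both sensors are calibrated into a common vehicle frame and time-synchronized: take each human-labeled 3D box from the lidar data and mark the radar points that fall inside it. Axis-aligned boxes are used here for brevity; production pipelines use oriented boxes:

```python
import numpy as np

def propagate_labels(radar_points, lidar_boxes):
    """Label radar points using lidar-derived 3D boxes (simplified sketch).

    radar_points: (N, 3) array of x, y, z in the common vehicle frame.
    lidar_boxes:  list of (center, size, label) with axis-aligned extents.
    """
    labels = np.full(len(radar_points), "background", dtype=object)
    for center, size, label in lidar_boxes:
        lo = np.asarray(center) - np.asarray(size) / 2
        hi = np.asarray(center) + np.asarray(size) / 2
        inside = np.all((radar_points >= lo) & (radar_points <= hi), axis=1)
        labels[inside] = label  # sparse radar hits inherit the lidar label
    return labels

# Toy example: two radar returns, one inside a human-labeled "car" box.
points = np.array([[12.0, 0.5, 0.4], [40.0, 5.0, 0.2]])
boxes = [((12.0, 0.5, 0.8), (4.5, 2.0, 1.6), "car")]
print(propagate_labels(points, boxes))  # ['car' 'background']
```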

Moreover, through this process, the radar DNN learns to detect not only cars, but also their 3D shape, dimensions and orientation, which classical methods cannot easily do.

With this additional information, the radar DNN is able to distinguish between different types of obstacles — even if they’re stationary — increase confidence of true positive detections, and reduce false positive detections.

The higher-confidence 3D perception results from the radar DNN in turn enable AV prediction, planning and control software to make better driving decisions, particularly in challenging scenarios. Classically difficult radar problems, like accurate shape and orientation estimation and detecting stationary vehicles or vehicles under highway overpasses, become feasible with far fewer failures.

The radar DNN output is integrated smoothly with classical radar processing. Together, these two components form the basis of our radar obstacle perception software stack.

This stack is designed to offer full redundancy with camera-based obstacle perception, to enable radar-only input to planning and control, and to support fusion with camera- or lidar-based perception software.

With such comprehensive radar perception capabilities, autonomous vehicles can perceive their surroundings with confidence.

To learn more about the software functionality we’re building, check out the rest of our DRIVE Labs series.

Making Movie Magic, NVIDIA Powers 13 Years of Oscar-Winning Visual Effects

For the 13th year running, NVIDIA professional GPUs have powered the dazzling visuals and cinematics behind every Academy Award nominee for Best Visual Effects.

The 93rd annual Academy Awards will take place on Sunday, April 25, with five VFX nominees in the running:

  • The Midnight Sky
  • Tenet
  • Mulan
  • The One and Only Ivan
  • Love and Monsters

NVIDIA professional GPUs have been behind award-winning graphics in films for over a decade. During that time, the most stunning visual effects shots have formed the backdrop for the Best Visual Effects Oscar.

Although some traditional nominees, namely tentpole summer blockbusters, weren’t released in 2020 because of the pandemic, this year’s lineup still brought innovative tools, new techniques and impressive visuals to the big screen.

For the visuals in The Midnight Sky, Framestore delivered the breathtaking VFX and deft keyframe animation for which they are renowned. Add in cutting-edge film tech like ILM Stagecraft and Anyma, and George Clooney supervising previsualization and face replacement sequences, and it’s no wonder that Framestore swept the Visual Effects Society Awards this year.

Christopher Nolan’s latest film, Tenet, is made up of 300 VFX shots that create a sense of time inversion. During action sequences, DNEG used new temporal techniques to show time moving forward and in reverse.

In Paramount’s Love and Monsters, a sci-fi comedy about giant creatures, Toronto-based visual effects company Mr. X delivered top-notch graphics that earned it its first Oscar nomination. From colossal snails to complex crustaceans, the film featured 13 unique, mutated creatures. The VFX and animation teams crafted the creatures’ movements based on how each would interact in a post-apocalyptic world.

And to create the impressive set extensions, scenic landscapes and massive crowds in Disney’s most recent live-action film, Mulan, Weta Digital tapped NVIDIA GPU-accelerated technology to immerse the audience in a world of epic scale.

While only one visual effects team will accept an award at Sunday’s ceremony, millions of artists are creating stunning visuals and cinematics with NVIDIA RTX. Whether it’s powering virtual production sets or accelerating AI tools, RTX technology is shaping the future of storytelling.

Learn more about NVIDIA technology in media and entertainment.

Featured image courtesy of Framestore. © NETFLIX

Cultivating AI: AgTech Industry Taps NVIDIA GPUs to Protect the Planet

What began as a budding academic movement into farm AI projects has now blossomed into a field of startups creating agriculture technology with a positive social impact for Earth.

Whether it’s the threat to honey bees worldwide from varroa mites, devastation to citrus markets from citrus greening, or contamination of groundwater caused by agrochemicals — AI startups are enlisting NVIDIA GPUs to help solve these problems.

With Earth Day today, here’s a look at some of the work of developers, researchers and entrepreneurs who are harnessing NVIDIA GPUs to protect the planet.

The Bee’s Knees: Parasite Prevention 

Bees are under siege from varroa parasites that destroy their colonies. And saving the world’s honeybee population is about a lot more than just honey: bees are now so scarce that all kinds of farmers must rent them to get their crops pollinated.

Beewise, a startup based in Israel, has developed robo hives with computer vision for infestation identification and treatment capabilities. In December, TIME magazine named the Beewise Beehome to its “Best Inventions of 2020” list. Others are using deep learning to understand hives better and look at improved hive designs.

Orange You Glad AI Helps

If it weren’t for AI, that glass of orange juice for breakfast might be a puckery one. A rampant “citrus greening” disease is decimating orchards and souring fruit worldwide. Thankfully, University of Florida researchers are developing computer vision for smart sprayers of agrochemicals, which are now being licensed and deployed in pilot tests by CCI, an agricultural equipment company.

The system can adjust in real time to turn off or on the application of crop protection products or fertilizers as well as adjust the amount sprayed based on the plant’s size.

SeeTree, based in Israel, is tackling citrus greening, too. It offers a GPU-driven tree analytics platform of image recognition algorithms, sensors, drones and a data collection app.

The startup uses the NVIDIA Jetson TX2 to process images and CUDA as the interface for cameras in orchards. The TX2 enables it to do fruit detection for orchards as well as provide farms with a yield-estimation tool.

AI Land of Sky Blue Water

Bilberry, located in Paris, develops weed recognition powered by the NVIDIA Jetson edge AI platform for precision application of herbicides. The startup has helped customers reduce the usage of chemicals by as much as 92 percent.

FarmWise, based in San Francisco, offers farmers an AI-driven robotic machine for pulling weeds rather than spraying them, reducing groundwater contamination.

Also, John Deere-owned Blue River offers precision spraying of crops to reduce the usage of agrochemicals harmful to land and water.

And two students from India last year developed Nindamani, an AI-driven, weed-removal robot prototype that took top honors at the AI at the Edge Challenge on Hackster.io.

Milking AI for Dairy Farmers 

AI is going to the cows, too. Advanced Animal Diagnostics, based in Morrisville, North Carolina, offers a portable testing device to predict animal performance and detect infections in cattle before they take hold. Its tests are processed on NVIDIA GPUs in the cloud. The machine can help reduce usage of antibiotics.

Similarly, SomaDetect aims to improve milk production with AI. The Halifax, Nova Scotia, company runs deep learning models on NVIDIA GPUs to analyze milk images.

Photo courtesy of Mark Kelly on Unsplash
