Have a Holly, Jolly Gaming Season on GeForce NOW

Happy holidays, members.

This GFN Thursday is packed with winter sales for several games streaming on GeForce NOW, as well as seasonal in-game events. Plus, for those needing a last-minute gift for the gamer in their life, we’ve got you covered with digital gift cards for Priority memberships.

To top it all off, six new games join the GeForce NOW library this week for some festive fun.

Jingle Bells, Holiday Sales, Events Are on the Way

Whether you made the naughty or nice list this year, plenty of the games on your wishlist are on sale.

The Witcher 3: Wild Hunt GOTY on GeForce NOW
Save big on some of PC gaming’s best, including The Witcher 3: Wild Hunt GOTY.

Snag some top titles from Square Enix like Guardians of the Galaxy and Life is Strange: True Colors. Dive into great games from Deep Silver like Metro Exodus and Kingdom Come: Deliverance. Experience hits from Ubisoft like Far Cry 6 and Assassin’s Creed Valhalla. Get your GOG game on playing Cyberpunk 2077 or The Witcher 3: Wild Hunt GOTY.

To catch even more of the games from the GeForce NOW library on sale this holiday season, check out the “Sales and Special Offers” row on the GeForce NOW app.

On top of these winter sales, for a limited time get a free copy of Crysis Remastered with the purchase of a six-month Priority membership or the new GeForce NOW RTX 3080 membership. Terms and conditions apply.

World of Tanks streaming on GeForce NOW
Yes, this is really happening.

Also, keep an eye out for holiday-themed in-game events in World of Tanks and more. Get to the tank in the Schwarzenegger Campaign, where you will receive missions from Arnold himself.

With over 1,100 titles streaming on the cloud, including nearly 100 free-to-play options and more coming every week, there’s a game for everyone to enjoy this holiday.

The Perfect Gift for a Gamer

GeForce NOW Digital Gift Cards
The perfect digital stocking stuffer for your favorite gamer.

The perfect last-minute present for a gamer in your life is the gift of PC gaming on any device.

Grab a GeForce NOW Priority membership digital gift card, available in two-, six- or 12-month options. Power up your gamer’s GeForce NOW-compatible devices with the kick of a full gaming rig: priority access to gaming servers, extended session lengths and RTX ON to take supported games to the next level of rendering quality.

Check out the GeForce NOW membership page for more information on priority benefits.

Gift cards can be redeemed on an existing GeForce NOW account or added to a new one. Existing Founders and Priority members will have the number of months added to their accounts.

Let it Stream, Let it Stream, Let it Stream

No matter if the weather outside is frightful, streaming games is delightful.

GFN Thursday is all about games, and about taking them to the next level. This week, Far Cry 6 and Bright Memory: Infinite go bigger and bolder with support for RTX ON.

Farming Simulator 22 on GeForce NOW
Create your own farm and let the good times grow in Farming Simulator 22.

Plus, this GFN Thursday, enjoy six new games ready to stream from the GeForce NOW library:

Note: members who purchased the Farming Simulator 22 DLC directly from the developer’s website will not be able to access it on GeForce NOW. DLC purchased from the respective supported game store — Steam or Epic Games Store — will be playable.

We make every effort to launch games on GeForce NOW as close to their release as possible, but, in some instances, games may not be available immediately.

Also, keep an eye out for the free Epic Games Store titles that are being given away over the holidays. Like last year, we’ll look to add as many of these as we can when we return next year.

Whether you’re celebrating the holidays or just looking forward to a weekend full of gaming, tell us what games are bringing you joy on Twitter or in the comments below.

The post Have a Holly, Jolly Gaming Season on GeForce NOW appeared first on The Official NVIDIA Blog.

3D Artist Turns Hobby Into Career, Using Omniverse to Turn Sketches Into Masterpieces

Yenifer Macias headshot
Yenifer Macias

It was memories of playing Pac-Man and Super Mario Bros while growing up in Colombia’s sprawling capital of Bogotá that inspired Yenifer Macias’s award-winning submission for the #CreateYourRetroverse contest, featured above.

The contest asked NVIDIA Omniverse users to share scenes that visualize where their love for graphics began. For Macias, that passion goes back to childhood. She loved video games — but was all the more wowed by their art.

Now, using Omniverse, a physically accurate 3D design collaboration platform that runs on NVIDIA Studio products and supports the industry’s leading 3D content creation tools, Macias accelerates her work as a 3D artist, making environments and props for video games, animation, films and advertisements.

She aimed in her #CreateYourRetroverse scene, she said, to “immerse viewers in the game world for a bit and remind them of childhood.”

The Artist’s Journey

Macias loved to doodle and always knew she’d study art.

A 3D animation course she took at a vocational institute in Bogotá confirmed her passion. She wanted to make 3D art all day, every day. Due to financial hardships, however, Macias didn’t have a home computer.

To practice her graphics skills, Macias took classes and completed internships — and with her first paycheck as a 3D artist, she bought a PC to continue her projects at home.

Over the past eight years, Macias has completed a wide range of work, including visual effects, architectural drawings and freelance animations. She uses design tools like Adobe Substance Painter, Autodesk Maya and ZBrush.

From Concept to Execution

For her #CreateYourRetroverse scene, Macias started with an initial sketch:

Based on references, she then created all of the props and objects for the environment from scratch in Autodesk Maya. Next, she brought her assets into the Omniverse Create app using an Omniverse Connector, where she fine-tuned lighting, textures and rendering.

“I found Omniverse’s powerful render engine to be incredible — you can make changes to the lighting and materials, seeing the results in real time,” Macias said.

Despite being a first-time user of Omniverse, Macias said the Omniverse Create app for massive 3D world building helped her finish the project on a tight timeline and “was very user friendly.”

Looking forward, she plans to use Omniverse’s real-time collaboration feature to team up on projects with artists across the world.

With Omniverse, NVIDIA Studio creators like Macias can supercharge their artistic workflows with optimized RTX-accelerated hardware and software drivers, and state-of-the-art AI and simulation features.

Explore the NVIDIA Omniverse gallery, forums and Medium channel. Check out Omniverse tutorials on Twitter and YouTube, and join our Discord server and Twitch channel to chat with the community.

NVIDIA BlueField Sets New World Record for DPU Performance

Data centers need extremely fast storage access, and no DPU is faster than NVIDIA’s BlueField-2.

Recent testing by NVIDIA shows that a single BlueField-2 data processing unit reaches 41.5 million input/output operations per second (IOPS) — more than 4x the IOPS of any other DPU.

The BlueField-2 DPU delivered record-breaking performance using standard networking protocols and open-source software. It reached more than 5 million 4KB IOPS and from 7 million to over 20 million 512B IOPS for NVMe over Fabrics (NVMe-oF), a common method of accessing storage media, with TCP networking, one of the primary internet protocols.

To accelerate AI, big data and high performance computing applications, BlueField provides even higher storage performance using the popular RoCE network transport option.

In testing, BlueField supercharged performance as both an initiator and target, using different types of storage software libraries and different workloads to simulate real-world storage configurations. BlueField also supports fast storage connectivity over InfiniBand, the preferred networking architecture for many HPC and AI applications.

Testing Methodology

The 41.5 million IOPS reached by BlueField is more than 4x the previous world record of 10 million IOPS, set using proprietary storage offerings. This performance was achieved by connecting two fast Hewlett Packard Enterprise ProLiant DL380 Gen 10 Plus servers, one as the application server (storage initiator) and one as the storage system (storage target).

Each server had two Intel “Ice Lake” Xeon Platinum 8380 CPUs clocked at 2.3GHz, giving 160 hyperthreaded cores per server, along with 512GB of DRAM, 120MB of L3 cache (60MB per socket) and a PCIe Gen4 bus.

To accelerate networking and NVMe-oF, each server was configured with two NVIDIA BlueField-2 P-series DPU cards, each with two 100Gb Ethernet network ports, resulting in four network ports and 400Gb/s wire bandwidth between initiator and target, connected back-to-back using NVIDIA LinkX 100GbE Direct-Attach Copper (DAC) passive cables. Both servers had Red Hat Enterprise Linux (RHEL) version 8.3.

For the storage system software, both SPDK and the standard upstream Linux kernel target were tested using both the default kernel 4.18 and one of the newest kernels, 5.15. Three different storage initiators were benchmarked: SPDK, the standard kernel storage initiator, and the FIO plugin for SPDK. Workload generation and measurements were run with FIO and SPDK. I/O sizes were tested using 4KB and 512B, which are common medium and small storage I/O sizes, respectively.
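
For readers who want a feel for the setup, a benchmark like this is typically described to FIO as a job file. The sketch below is a hypothetical illustration only: the device path, job count and queue depth are assumptions, not the parameters NVIDIA used.

```ini
; Hypothetical FIO job sketch for a 4KB random-read NVMe-oF run over TCP or RoCE.
; The SPDK runs instead use SPDK's FIO plugin as the ioengine.
[global]
ioengine=libaio        ; kernel asynchronous I/O
direct=1               ; bypass the page cache
bs=4k                  ; also tested at bs=512 for small-I/O runs
rw=randread            ; plus randwrite, and randrw with rwmixread=50
runtime=60
time_based=1
group_reporting=1

[nvmeof-namespace]
filename=/dev/nvme0n1  ; NVMe-oF namespace exposed by the storage target
numjobs=16             ; example values; tuned per CPU core count in practice
iodepth=32
```

Swapping `bs`, `rw` and the network transport (TCP vs. RoCE) covers the test matrix described above.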

The NVMe-oF storage protocol was tested with both TCP and RoCE at the network transport layer. Each configuration was tested with 100 percent read, 100 percent write and 50/50 read/write workloads with full bidirectional network utilization.

Our testing also revealed the following performance characteristics of the BlueField DPU:

  • Testing with smaller 512B I/O sizes resulted in higher IOPS but lower-than-line-rate throughput, while 4KB I/O sizes resulted in higher throughput but lower IOPS numbers.
  • 100 percent read and 100 percent write workloads provided similar IOPS and throughput, while 50/50 mixed read/write workloads produced higher performance by using both directions of the network connection simultaneously.
  • Using SPDK resulted in higher performance than kernel-space software, but at the cost of higher server CPU utilization, which is expected behavior, since SPDK runs in user space with constant polling.
  • The newer Linux 5.15 kernel performed better than the 4.18 kernel due to storage improvements added regularly by the Linux community.
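
The first bullet is simple arithmetic: payload throughput is IOPS times block size. A quick sanity check with the article’s round figures (ignoring protocol overhead):

```python
# Relate IOPS and block size to raw payload throughput.
def throughput_gbps(iops: float, block_bytes: int) -> float:
    """Payload throughput in gigabits per second, ignoring protocol overhead."""
    return iops * block_bytes * 8 / 1e9

# Round figures from the article: >20M IOPS at 512B, >5M IOPS at 4KB.
small_io = throughput_gbps(20e6, 512)    # ~82 Gb/s: many IOPS, modest data volume
large_io = throughput_gbps(5e6, 4096)    # ~164 Gb/s: fewer IOPS, more data moved
print(f"{small_io:.0f} Gb/s vs {large_io:.0f} Gb/s")
```

Both figures sit comfortably below the 400Gb/s of wire bandwidth between the servers, which is why the small-I/O runs are IOPS-bound rather than line-rate-bound.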

Record-Setting DPU Performance Enables Fast Storage With Security

In today’s storage landscape, the vast majority of cloud and enterprise deployments require fast, distributed and networked flash storage, accessed over Ethernet or InfiniBand. Faster servers, GPUs, networks and storage media all strain server CPUs trying to keep up, and the best way to relieve that load is to deploy storage-capable DPUs.

The incredible storage performance demonstrated by the BlueField-2 DPU enables higher performance and better efficiency across the data center for both application servers and storage appliances.

On top of fast storage access, BlueField also supports hardware-accelerated encryption and decryption of both Ethernet storage traffic and the storage media itself, helping protect against data theft or exfiltration.

It offloads IPsec at up to 100Gb/s (data on the wire) and 256-bit AES-XTS at up to 200Gb/s (data at rest), reducing the risk of data theft if an adversary has tapped the storage network or if the physical storage drives are stolen, sold or disposed of improperly.

Customers and leading security software vendors are using BlueField’s recently updated NVIDIA DOCA framework to run cybersecurity applications, such as distributed firewalls or security groups with micro-segmentation, on the DPU to further improve application and network security for compute servers. This reduces the risk of inappropriate access or data modification on the storage attached to those servers.

Learn More About NVIDIA Networking Acceleration 

Learn more about NVIDIA DPUs, DOCA, RoCE, and how DPUs and DOCA enable accelerated networking and Zero-Trust Security with these links:

How Omniverse Wove a Real CEO — and His Toy Counterpart — Together With Stunning Demos at GTC 

It could only happen in NVIDIA Omniverse — the company’s virtual world simulation and collaboration platform for 3D workflows.

And it happened during an interview with a virtual toy model of NVIDIA’s CEO, Jensen Huang.

“What are the greatest …” one of Toy Jensen’s creators asked, stumbling, then stopping before completing his scripted question.

Unfazed, the tiny Toy Jensen paused for a moment, considering the answer carefully.

“The greatest are those,” Toy Jensen replied, “who are kind to others.”

Leading-edge computer graphics, physics simulation, a live CEO, and a supporting cast of AI-powered avatars came together to make NVIDIA’s GTC keynote — delivered using Omniverse — possible.

Along the way, a little soul got into the mix, too.

The AI-driven comments, added to the keynote as a stinger, provided an unexpected peek at the depth of Omniverse’s technology.

“Omniverse is the hub in which all the various research domains converge and align and work in unison,” says Kevin Margo, a member of NVIDIA’s creative team who put the presentation together. “Omniverse facilitates the convergence of all of them.”

Toy Jensen’s ad-lib capped a presentation that seamlessly mixed a real CEO with virtual and real environments as Huang took viewers on a tour of how NVIDIA technologies are weaving AI, graphics and robotics together with humans in real and virtual worlds.

Real CEO, Digital Kitchen

While the CEO viewers saw was all real, the environment around him morphed as he spoke to support the story he was telling.

Viewers saw Huang deliver a keynote that seemed to begin, like so many during the global COVID pandemic, in Huang’s kitchen.

Then, with a flourish, Huang’s kitchen — modeled down to the screws holding its cabinets together — slid away from sight as Huang strolled toward a virtual recreation of Endeavor’s gleaming lobby.

“One of our goals is to find a way to elevate our keynote events,” Margo says. “We’re always looking for those special moments when we can do something novel and fantastical, and that showcase NVIDIA’s latest technological innovations.”

It was the start of a visual journey that would take Huang from that lobby to Shannon’s, a gathering spot inside Endeavor, through a holodeck, and a data center with stops inside a real robotics lab and the exterior of Endeavor.

Virtual environments such as Huang’s kitchen were created by a team using familiar tools supported by Omniverse, such as Autodesk Maya, 3ds Max and Adobe Substance Painter.

Omniverse served to connect them all in real-time — so each team member could see changes made by colleagues using different tools simultaneously, accelerating their work.

“That was critical,” Margo says.

The virtual and the real came together quickly once live filming began.

A small on-site video team recorded Huang’s speech in just four days, starting October 30, in a spare pair of conference rooms at NVIDIA’s Silicon Valley headquarters.

Omniverse allowed NVIDIA’s team to project the dynamic virtual environments their colleagues had created on a screen behind Huang.

As a result, the light spill onto Huang changed as the scene around him changed, better integrating him into the virtual environment.

And as Huang moved through the scene, or as the camera shifted, the environment changed around Huang.

“As the camera moves, the perspective and parallax of the world on the video wall respond accordingly,” Margo says.

And because Huang could see the environment projected on the screens around him, he was better able to navigate each scene.

At the Speed of Omniverse

All of this accelerated the work of NVIDIA’s production team, which had most of what they needed in-camera after each shot rather than adding elaborate digital sets in post-production.

As a result, the video team quickly created a presentation seamlessly blending a real CEO with virtual and real-world settings.

However, Omniverse was more than just a way to speed collaboration between creatives working with real and digital elements hustling to hit a deadline. It also served as the platform that knit the string of demos featured in the keynote together.

To help developers create intelligent, interactive agents with Omniverse that can see, speak, converse on a wide range of subjects and understand naturally spoken intent, Huang announced Omniverse Avatar.

Omniverse brings together a deep stack of technologies — from ray-tracing to recommender systems — that were mixed and matched throughout the keynote to drive a series of stunning demos.

In a demo that swiftly made headlines, Huang showed how “Project Tokkio” for Omniverse Avatar connects Metropolis computer vision, Riva speech AI, avatar animation and graphics into a real-time conversational AI robot — the Toy Jensen Omniverse Avatar.

The conversation between three of NVIDIA’s engineers and a tiny toy model of Huang was more than just a technological tour de force demonstrating expert, natural Q&A.

It showed how photorealistic modeling of Toy Jensen and his environment — right down to the glint on Toy Jensen’s glasses as he moved his head — and NVIDIA’s Riva speech synthesis technology powered by the Megatron 530B large language model could support natural, fluid conversations.

To create the demo, NVIDIA’s creative team built the digital model in Maya and Substance, and Omniverse did the rest.

“None of it was manual; you just load up the animation assets and talk to it,” Huang said.

Huang also showed a second demo of Project Tokkio, a customer-service avatar in a restaurant kiosk that was able to see, converse with and understand two customers.

Rather than relying on Megatron, however, this model relied on a model that integrated the restaurant’s menu, allowing the avatar to smoothly guide customers through their options.

That same technology stack can help humans talk to one another, too. Huang showed Project Maxine’s ability to add state-of-the-art video and audio features to virtual collaboration and video content creation applications.

A demo showed a woman speaking English on a video call in a noisy cafe, yet she could be heard clearly, without background noise. As she spoke, her words were transcribed and translated in real time into French, German and Spanish.

Thanks to Omniverse, the translations are spoken by an avatar able to engage in conversation in her same voice and intonation.

These demos were all possible because Omniverse, through Omniverse Avatar, unites advanced speech AI, computer vision, natural language understanding, recommendation engines, facial animation and graphics technologies.

Omniverse Avatar’s speech recognition is based on NVIDIA Riva, a software development kit that recognizes speech across multiple languages. Riva is also used to generate human-like speech responses using text-to-speech capabilities.

Omniverse Avatar’s natural language understanding is based on the Megatron 530B large language model that can recognize, understand and generate human language.

Megatron 530B is a pretrained model that can, with little or no additional training, complete sentences and answer questions spanning a large domain of subjects. It can summarize long, complex stories, translate into other languages, and handle many domains it was not specifically trained for.

Omniverse Avatar’s recommendation engine is provided by NVIDIA Merlin, a framework that allows businesses to build deep learning recommender systems capable of handling large amounts of data to make smarter suggestions.

Its perception capabilities are enabled by NVIDIA Metropolis, a computer vision framework for video analytics.

And its avatar animation is powered by NVIDIA Video2Face and Audio2Face, 2D and 3D AI-driven facial animation and rendering technologies.

All of these technologies are composed into an application and processed in real-time using the NVIDIA Unified Compute Framework.

Packaged as scalable, customizable microservices, the skills can be securely deployed, managed and orchestrated across multiple locations by NVIDIA Fleet Command.

Using them, Huang was able to tell a sweeping story about how NVIDIA Omniverse is changing multitrillion-dollar industries.

All of these demos were built on Omniverse. And thanks to Omniverse, everything came together — a real CEO, real and virtual environments, and a string of demos made within Omniverse as well.

Since its launch late last year, Omniverse has been downloaded over 70,000 times by designers at 500 companies. Omniverse Enterprise is now available starting at $9,000 a year.

Living in the Future: NIO ET5 Sedan Designed for the Autonomous Era With NVIDIA DRIVE Orin

Meet the electric vehicle that’s truly future-proof.

Electric automaker NIO took the wraps off its fifth mass-production model, the ET5, during NIO Day 2021 last week.

The mid-size sedan borrows from its luxury and performance predecessors for an intelligent vehicle that’s as agile as it is comfortable. Its AI features are powered by the NIO Adam supercomputer, built on four NVIDIA DRIVE Orin systems-on-a-chip (SoC).

In addition to centralized compute, the ET5 incorporates high-performance sensors into its sleek design, equipping it with the hardware necessary for advanced AI-assisted driving features.

The sedan also embodies the NIO concept of vehicles serving as a second living room, with a luxurious interior and immersive augmented reality digital cockpit.

These cutting-edge features are built to go the distance. The ET5 achieves more than 620 miles of range with the 150 kWh Ultralong Range Battery and lightning-fast acceleration from zero to 60 mph in about four seconds.

A Truly Intelligent Creation

The ET5 and its older sibling, the ET7 full-size sedan, rely on a centralized, high-performance compute architecture to power AI features and continuously receive upgrades over the air.

The NIO Adam supercomputer is built on four DRIVE Orin SoCs, making it one of the most powerful platforms to run in a vehicle, achieving a total of more than 1,000 TOPS of performance.

Orin is the world’s highest-performance, most-advanced AV and robotics processor. It delivers up to 254 TOPS to handle the large number of applications and deep neural networks that run simultaneously in autonomous vehicles and robots while achieving systematic safety standards such as ISO 26262 ASIL-D.
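
The platform-level figure quoted above follows directly from the per-chip number:

```python
# Back-of-envelope check of Adam's total compute budget.
ORIN_TOPS = 254   # peak performance of one DRIVE Orin SoC
NUM_SOCS = 4      # Adam integrates four Orin SoCs

total_tops = ORIN_TOPS * NUM_SOCS
print(total_tops)  # 1016 TOPS, i.e. "more than 1,000 TOPS"
```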

Adam integrates the redundancy and diversity necessary for safe autonomous operation by using multiple SoCs.

The first two SoCs process the eight gigabytes of data produced by the vehicle’s sensor set every second.

The third Orin serves as a backup to ensure the system can operate safely in any situation.

The fourth enables local training, improving the vehicle with fleet learning and personalizing the driving experience based on user preferences.

With high-performance computing at its core, Adam is a major achievement in the creation of automotive intelligence and autonomous driving.

Going Global

After beginning deliveries in Norway earlier this year, NIO will expand worldwide in 2022.

The ET7, the first vehicle built on the DRIVE Orin-powered Adam supercomputer, will become available in March, with the ET5 following in September.

Next year, NIO vehicles will begin deliveries in the Netherlands, Sweden and Denmark.

By 2025, NIO vehicles will be in 25 countries and regions worldwide, bringing one of the most advanced AI platforms to even more customers.

With the ET5, NIO is showing no signs of slowing as it charges into the future with sleek, intelligent EVs powered by NVIDIA DRIVE.

Detect That Defect: Mariner Speeds Up Manufacturing Workflows With AI-Based Visual Inspection

Imagine picking out a brand new car — only to find a chip in the paint, rip in the seat fabric or mark in the glass.

AI can help prevent such moments of disappointment for manufacturers and potential buyers.

Mariner, an NVIDIA Metropolis partner based in Charlotte, North Carolina, offers an AI-enabled video analytics system to help manufacturers improve surface defect detection. For over 20 years, the company has worked to provide its customers with deep learning-based insights to optimize their manufacturing processes.

The vision AI platform, called Spyglass Visual Inspection, or SVI, helps manufacturers detect the defects they couldn’t see before. It’s built on the NVIDIA Metropolis intelligent video analytics framework and powered by NVIDIA GPUs.

SVI is installed in factories and used by customers like Sage Automotive Interiors to enhance their defect detection in cases where traditional, rules-based machine vision systems often pinpoint false positives.

Reducing Waste with AI

According to David Dewhirst, vice president of marketing at Mariner, up to 40 percent of annual revenue for automotive manufacturers is consumed by producing defective products.

Traditional machine vision systems installed in factories have difficulty discerning between true defects — like a stain in fabric or a chip in glass — and false positives, like lint or a water droplet that can be easily wiped away.

SVI, however, uses AI software and NVIDIA hardware connected to camera systems that provide real-time inspection of pieces on production lines, identify potential issues and determine whether they are true material defects — in just a millisecond.

This speeds up factory lines, removing the need to slow or stop the workflow so a person can inspect each potential defect. SVI delivers a 20 percent increase in line speed and a 30x reduction in incorrect defect classifications over traditional machine vision systems.

The platform can be integrated with a factory’s existing machine vision system, giving it a boost with AI-based analysis and processing. It offers a factory an average annual savings of $2 million, Dewhirst said.

SVI uses a deep learning model that analyzes images, identifies a defect, and then labels the defects by type — which are all tasks that require powerful graphics processors.

“NVIDIA GPUs guarantee that SVI can handle almost any pixel combination and processing speed, which is why it was our choice of hardware on which to standardize our platform,” Dewhirst said.

Mariner is on track to revolutionize the defect detection process by expanding the use of its platform, which can identify defects in metal, plastic or virtually any other surface type.

Learn more about how the Spyglass system works:

Top 5 Edge AI Trends to Watch in 2022

2021 saw massive growth in the demand for edge computing — driven by the pandemic, the need for more efficient business processes, and key advances in the Internet of Things, 5G and AI.

In a study published by IBM in May, for example, 94 percent of surveyed executives said their organizations will implement edge computing in the next five years.

From smart hospitals and cities to cashierless shops to self-driving cars, edge AI — the combination of edge computing and AI — is needed more than ever.

Businesses have been slammed by logistical problems, worker shortages, inflation and uncertainty caused by the ongoing pandemic. Edge AI solutions can be used as a bridge between humans and machines, enabling improved forecasting, worker allocation, product design and logistics.

Here are the top five edge AI trends NVIDIA expects to see in 2022:

1. Edge Management Becomes an IT Focus

While edge computing is rapidly becoming a must-have for many businesses, deployments remain in the early stages.

To move to production, edge AI management will become the responsibility of IT departments. In a recent report, Gartner wrote, “Edge solutions have historically been managed by the line of business, but the responsibility is shifting to IT, and organizations are utilizing IT resources to optimize cost.”1

To address the edge computing challenges related to manageability, security and scale, IT departments will turn to cloud-native technology. Kubernetes, a platform for containerized microservices, has emerged as the leading tool for managing edge AI applications on a massive scale.

Customers with IT departments that already use Kubernetes in the cloud can transfer their experience to build their own cloud-native management solutions for the edge. More will look to purchase third-party offerings such as Red Hat OpenShift, VMware Tanzu, Wind River Cloud Platform and NVIDIA Fleet Command.
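
As a concrete sketch of what that looks like in practice, an IT team might describe an edge inference service to Kubernetes with a manifest along these lines. Every name here (the labels, the container image) is a hypothetical example, not a real NVIDIA artifact:

```yaml
# Hypothetical manifest: one GPU-backed AI inference service pinned to edge nodes.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: edge-vision-inference        # example name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: edge-vision-inference
  template:
    metadata:
      labels:
        app: edge-vision-inference
    spec:
      nodeSelector:
        node-role: edge              # example label marking edge nodes
      containers:
      - name: inference
        image: registry.example.com/vision-inference:1.0   # hypothetical image
        resources:
          limits:
            nvidia.com/gpu: 1        # one GPU, via the NVIDIA device plugin
```

The same manifest can then be rolled out across many sites by whichever management layer sits on top, which is the appeal of the cloud-native approach.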

2. Expansion of AI Use Cases at the Edge

Computer vision has dominated AI deployments at the edge. Image recognition led the way in AI training, resulting in a robust ecosystem of computer vision applications.

NVIDIA Metropolis, an application framework and set of developer tools that helps create computer vision AI applications, has grown its partner network 100-fold since 2017 to now include 1,000+ members.

Many companies are deploying or purchasing computer vision applications. Such companies at the forefront of computer vision will start to look to multimodal solutions.

Multimodal AI brings in different data sources to create more intelligent applications that can respond to what they see, hear and otherwise sense. These complex AI use cases employ skills like natural language understanding, conversational AI, pose estimation, inspection and visualization.

Combined with data storage, processing technologies, and input/output or sensor capabilities, multimodal AI can yield real-time performance at the edge for an expansion of use cases in robotics, healthcare, hyper-personalized advertising, cashierless shopping, concierge experiences and more.

Imagine shopping with a virtual assistant. With traditional AI, an avatar might see what you pick up off a shelf, and a speech assistant might hear what you order.

By combining both data sources, a multimodal AI-based avatar can hear your order, provide a response, see your reaction, and provide further responses based on it. This complementary information allows the AI to deliver a better, more interactive customer experience.

To see an example of this in action, check out Project Tokkio:

3. Convergence of AI and Industrial IoT Solutions

The intelligent factory is another space being driven by new edge AI applications. According to the same Gartner report, “By 2027, machine learning in the form of deep learning will be included in over 65 percent of edge use cases, up from less than 10 percent in 2021.”

Factories can add AI applications onto cameras and other sensors for inspection and predictive maintenance. However, detection is just step one. Once an issue is detected, action must be taken.

AI applications can detect an anomaly or defect, then alert a human to intervene. But for safety applications and other use cases where instant action is required, real-time responses are made possible by connecting the AI inference application with the IoT platforms that manage the assembly lines, robotic arms or pick-and-place machines.

Integration between such applications relies on custom development work. Hence, expect more partnerships between AI and traditional IoT management platforms that simplify the adoption of edge AI in industrial environments.

4. Growth in Enterprise Adoption of AI-on-5G 

AI-on-5G combines computing infrastructure with a high-performance, secure connectivity fabric that integrates sensors, computing platforms and AI applications, whether in the field, on premises or in the cloud.

Key benefits include ultra-low latency in non-wired environments, guaranteed quality-of-service and improved security.

AI-on-5G will unlock new edge AI use cases.

One of the world’s first full stack AI-on-5G platforms, Mavenir Edge AI, was introduced in November. Next year, expect to see additional full-stack solutions that provide the performance, management and scale of enterprise 5G environments.

5. AI Lifecycle Management From Cloud to Edge

For organizations deploying edge AI, MLOps will become key to helping drive the flow of data to and from the edge. Ingesting new, interesting data or insights from the edge, retraining models, testing applications and then redeploying those to the edge improves model accuracy and results.

With traditional software, updates may happen on a quarterly or annual basis, but AI gains significantly from a continuous cycle of updates.

MLOps is still in early development, with many large players and startups building solutions for the constant need to update AI models. While mostly focused on the data center today, such solutions will shift toward edge computing in the future.

Riding the Next Wave of AI Computing

Waves of AI Computing

The development of AI has consisted of several waves, as pictured above.

Democratization of AI is underway, with new tools and solutions making it a reality. Edge AI, powered by huge growth in IoT and availability of 5G, is the next wave to break.

In 2022, more enterprises will move their AI inference to the edge, bolstering ecosystem growth as the industry looks at how to extend from cloud to the edge.

Learn more about edge AI by watching the GTC session, The Rise of Intelligent Edge: From Enterprise to Device Edge, on demand.

Check out NVIDIA edge computing solutions.

1 Gartner, “Predicts 2022: The Distributed Enterprise Drives Computing to the Edge”, 20 October 2021. By analysts: Thomas Bittman, Bob Gill, Tim Zimmerman, Ted Friedman, Neil MacDonald, Karen Brown

The post Top 5 Edge AI Trends to Watch in 2022 appeared first on The Official NVIDIA Blog.


Omniverse Creator Uses AI to Make Scenes With Singing Digital Humans

The thing about inspiration is you never know where it might come from, or where it might lead.

Anderson Rohr, a 3D generalist and freelance video editor based in southern Brazil, has for more than a dozen years created content ranging from wedding videos to cinematic animation.

After seeing another creator animate a sci-fi character’s visage and voice using NVIDIA Omniverse and its AI-powered Audio2Face application, Rohr said he couldn’t help but play around with the technology.

The result is a grimly voiced, lip-synced cover of “Bad Moon Rising,” the 1960s anthem from Creedence Clearwater Revival, which Rohr created using his own voice.

To make the video, Rohr used an NVIDIA Studio system with a GeForce RTX 3090 GPU.

Rohr’s Artistic Workflow

For this personal project, Rohr first recorded himself singing and opened the file in Audio2Face.

The application, built on NVIDIA AI and Omniverse technology, instantly generates expressive facial animations for digital humans with only a voice-over track or any other audio source.

Rohr then manually animated the eyes, brows and neck of his character and tweaked the lighting for the scene — all of which was rendered in Omniverse via Epic Games’ Unreal Engine, using an Omniverse Connector and the NVIDIA RTX Global Illumination software development kit.

“NVIDIA Omniverse is helping me achieve more natural results for my digital humans and speeding up my workflow, so that I can spend more time on the creative process,” Rohr said.

Before using Omniverse, some of Rohr’s animations took as long as 300 hours to render. He also faced software incompatibilities, which he said further slowed his work.

Now, with Omniverse and its connectors for various software applications, Rohr’s renderings are achieved in real time.

“Omniverse goes beyond my expectations,” he said. “I see myself using it a lot, and I hope my artwork inspires people to seek real-time results for virtual productions, games, cinematic scenes or any other creative project.”

With Omniverse, NVIDIA Studio creators can supercharge their artistic workflows with optimized RTX-accelerated hardware and software drivers, and state-of-the-art AI and simulation features.

Watch Rohr talk more about his work with NVIDIA Omniverse.

The post Omniverse Creator Uses AI to Make Scenes With Singing Digital Humans appeared first on The Official NVIDIA Blog.


Get the Best of Cloud Gaming With GeForce NOW RTX 3080 Memberships Available Instantly

The future of cloud gaming is available NOW, for everyone, with preorders closing and GeForce NOW RTX 3080 memberships moving to instant access. Gamers can sign up for a six-month GeForce NOW RTX 3080 membership and instantly stream the next generation of cloud gaming, starting today.

Snag the NVIDIA SHIELD TV or SHIELD TV Pro for $20 off and stream PC games to the biggest screen in the home at up to 4K HDR resolution.

Participate in a unique cloud-based DAF Drive, powered by GeForce NOW and Euro Truck Simulator 2.

And check out the four new titles joining the ever-expanding GeForce NOW library this week.

RTX 3080 Memberships Available Instantly

The next generation of cloud gaming is ready and waiting.

Make the leap to the newest generation of cloud gaming instantly. GeForce NOW RTX 3080 memberships are available today for instant access. Preorders poof, be gone!

The new tier of service transforms nearly any device into a gaming rig capable of streaming at up to 1440p resolution and 120 frames per second on PCs, native 1440p or 1600p at 120 FPS on Macs, and 4K HDR at 60 FPS on SHIELD TV, with ultra-low latency that rivals many local gaming experiences. On top of this, the membership comes with the longest gaming session length — clocking in at eight hours — as well as full control to customize in-game graphics settings, and RTX ON rendering environments in cinematic quality in supported games.

Level up your gaming experience to enjoy the GeForce NOW library of over 1,100 games with the boost of a six-month RTX 3080 membership streaming across your devices for $99.99. Founders receive 10 percent off the subscription price and can upgrade with no risk to their “Founders for Life” benefits.

For more information, check out our membership FAQ.

The Deal With SHIELD

The GeForce NOW experience goes legendary, playing in 4K HDR exclusively on the NVIDIA SHIELD — which is available with a sweet deal this holiday season.

Save $20 on SHIELD TV this holiday.
Grab a controller and stream PC gaming at up to 4K with GeForce NOW on SHIELD TV.

Just in time for the holidays, give the gift of great entertainment at a discounted price. Starting Dec. 13 in select regions, get $20 ($30 CAD, €25, £20) off SHIELD TV and SHIELD TV Pro. But hurry, this offer ends soon! And in the U.S., get six months of Peacock Premium as an added bonus, to enrich the entertainment experience.

With the new GeForce NOW RTX 3080 membership, PC gamers everywhere can stream with 4K resolution and HDR on the SHIELD TV, bringing PC gaming to the biggest screen in the house. Connect to Steam, Epic Games Store and more to play from your library, find new games or check out the 100+ free-to-play titles included with a GeForce NOW membership.

Customize play even further with your preferred gaming controller: SHIELD TV works with Xbox One and Series X, PlayStation DualSense and DualShock 4, and SCUF controllers. Bring your gaming sessions to life with immersive 7.1 surround sound.

Roll On Into the Ride and Drive

Euro Truck Simulator 2 on GeForce NOW
Push the pedal to the metal driving the 2021 DAF XF, available in Euro Truck Simulator 2.

GeForce NOW is powering up new experiences with SCS Software by supporting a unique DAF Drive experience. It adds the New Generation DAF XF to the popular game Euro Truck Simulator 2 and gives everyone the opportunity to take a virtual test drive through a short and scenic route, streaming with GeForce NOW. Take the wheel of one of DAF Trucks’ vehicles, instantly, on the DAF virtual experience website.

Coming in tow is a free in-game content update to the full Euro Truck Simulator 2 game, which brings the 2021 DAF XF to players. Ride in style as you travel across Europe in the newest truck, test your skill and speed, deliver cargo and become king of the road, streaming on the cloud.

Moar Gamez Now & Later, Plz

GTFO on GeForce NOW
The only way to survive the Rundown is by working together.

Late last week, a pair of games got big GeForce NOW announcements: GTFO and ARC Raiders.

GTFO is now out of early access. Jump on into this extreme cooperative horror shooter that requires stealth, strategy and teamwork to survive a deadly, underground prison.

ARC Raiders, a free-to-play cooperative third-person shooter from Embark Studios, is coming to GeForce NOW in 2022. In the game, which will be available on Steam and Epic Games Store, you and your squad of Raiders will unite to resist the onslaught of ARC – a ruthless mechanized threat descending from space.

Plus, slide on into the weekend with a pack of four new titles ready to stream from the GeForce NOW library today.

We make every effort to launch games on GeForce NOW as close to their release as possible, but, in some instances, games may not be available immediately.

Grab a Gift for a Gamer

Looking to spoil a gamer or yourself this holiday season?

Digital gift cards for GeForce NOW Priority memberships are available in two-, six- or 12-month options. Make your favorite player happy by powering up their GeForce NOW compatible devices with the kick of a full gaming rig, priority access to gaming servers, extended session lengths and RTX ON for supported games.

Gift cards can be redeemed on an existing GeForce NOW account or added to a new one. Existing Founders and Priority members will have the number of months added to their accounts.

As your weekend gaming session kicks off, we’ve got a question for you.

Shout at us on Twitter or in the comments below.

The post Get the Best of Cloud Gaming With GeForce NOW RTX 3080 Memberships Available Instantly appeared first on The Official NVIDIA Blog.


‘AI 2041: Ten Visions for Our Future’: AI Pioneer Kai-Fu Lee Discusses His New Work of Fiction

One of AI’s greatest champions has turned to fiction to answer the question: how will technology shape our world in the next 20 years?

Kai-Fu Lee, CEO of Sinovation Ventures and a former president of Google China, spoke with NVIDIA AI Podcast host Noah Kravitz about AI 2041: Ten Visions for Our Future. The book, his fourth available in the U.S. and his first work of fiction, was written in collaboration with Chinese sci-fi writer Chen Qiufan, also known as Stanley Chan.

Lee and Chan blend their expertise in scientific forecasting and speculative fiction in this collection of short stories, which was published in September.

Among Lee’s books is the New York Times bestseller AI Superpowers: China, Silicon Valley, and the New World Order, which he spoke about on a 2018 episode of the AI Podcast.

Key Points From This Episode:

  • Each of AI 2041’s stories takes place in a different country and tackles various AI-related topics. For example, one story follows a teenage girl in Mumbai who rebels when AI gets in the way of her romantic endeavors. Another is about virtual teachers in Seoul who offer orphaned twins new ways to learn and connect.
  • Lee added a written commentary to accompany each story, covering the real technologies it uses, how they work, and their potential upsides and downsides.

Tweetables:

As AI still seems to be an intimidating topic for many, “I wanted to see if we could create a new, innovative piece of work that is not only accessible, but also engaging and entertaining for more people.” — Kai-Fu Lee [1:48]

“By the end of the 10th story, [readers] will have taken their first lesson in AI.” — Kai-Fu Lee [2:02]

You Might Also Like:

Investor, AI Pioneer Kai-Fu Lee on the Future of AI in the US, China

Kai-Fu Lee talks about his book, AI Superpowers: China, Silicon Valley, and the New World Order, which ranked No. 6 on the New York Times Business Books bestsellers list.

Real or Not Real? Attorney Steven Frank Uses Deep Learning to Authenticate Art

Steven Frank is a partner at the law firm Morgan Lewis, specializing in intellectual property and commercial technology law. He talks about working with his wife, Andrea Frank, a professional curator of art images, to authenticate artistic masterpieces with AI’s help.

Author Cade Metz Talks About His New Book “Genius Makers”

Call it Moneyball for AI. In his book, Genius Makers, the New York Times writer Cade Metz tells the funny, inspiring — and ultimately triumphant — tale of how a dogged group of AI researchers bet their careers on the long-dismissed technology of deep learning.

Subscribe to the AI Podcast

Get the AI Podcast through iTunes, Google Podcasts, Google Play, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn. If your favorite isn’t listed here, drop us a note.

Make the AI Podcast Better

Have a few minutes to spare? Fill out this listener survey. Your answers will help us make a better podcast.

The post ‘AI 2041: Ten Visions for Our Future’: AI Pioneer Kai-Fu Lee Discusses His New Work of Fiction appeared first on The Official NVIDIA Blog.
