Imagine walking through the bustling streets of London’s Piccadilly Circus, when suddenly you’re in a tropical rainforest, surrounded by vibrant flowers and dancing butterflies.
That’s what audiences will see in the virtual world of The Green Planet AR Experience, an interactive, augmented reality experience that blends physical and digital worlds to connect people with nature.
During the Green Planet AR Experience, powered by EE 5G, visitors are led through a living rainforest and six distinct biomes by a 3D hologram of Sir David Attenborough, familiar to many as the narrator of some of the world’s most-watched nature documentaries.
Audiences engage and interact with the plant life by using a mobile device, which acts as a window into the natural world.
To bring these virtual worlds to life in a sustainable way, award-winning studio Factory 42 combined captivating storytelling with cutting-edge technology. Using NVIDIA RTX and CloudXR, the creative team elevated the AR experience and delivered high-fidelity, photorealistic virtual environments over a 5G network.
Natural, Immersive AR Over 5G — It’s a Stream Come True
The Green Planet AR Experience’s mission is to inspire, educate and motivate visitors toward positive change by showcasing how plants are vital to all life on earth. Through the project, Factory 42 and the BBC help audiences gain a deeper understanding of ecosystems, the importance of biodiversity and what it means to protect our planet.
To create an immersive environment that captured the rich, vivid colors and details of natural worlds, the Factory 42 team needed high-quality imagery and graphics power. Using mobile edge computing allowed them to deliver the interactive experience to a large number of users over EE’s private 5G network.
The AR experience runs on a custom, on-premises GPU edge-rendering stack powered by NVIDIA RTX 8000 professional GPUs. Using NVIDIA RTX, Factory 42 created ultra-high-quality 3D digital assets, environments, interactions and visual effects that made the natural elements look as realistic as possible.
With the help of U.K.-based integrator The GRID Factory, the GPU edge-rendering stack is connected to EE’s private 5G network using the latest Ericsson Industry Connect solution for a dedicated wireless cellular network. Using NVIDIA RTX Virtual Workstation (RTX vWS) on VMware Horizon, and NVIDIA’s advanced CloudXR streaming solution, Factory 42 can stream all the content from the edge of the private 5G network to the Samsung S21 mobile handsets used by each visitor.
“NVIDIA RTX vWS and CloudXR were a step ahead of the competitive products — their robustness, ability to fractionalize the GPU, and high-quality delivery of streamed XR content were key features that allowed us to create our Green Planet AR Experience as a group experience to thousands of users,” said Stephen Stewart, CTO at Factory 42.
The creative team at Factory 42 designed the content in the AR environment, which is rendered in real time with the Unity game engine. The 3D hologram of Sir David was created using volumetric capture technology provided by Dimension Studios. Spatial audio provides a surround-sound setup, which guides people through the virtual environment as digital plants and animals react to the presence of visitors in the space.
Combining these technologies, Factory 42 created a new level of immersive experience — one only made possible through 5G networks.
“NVIDIA RTX and CloudXR are fundamental to our ability to deliver this 5G mobile edge compute experience,” said Stewart. “The RTX 8000 GPU provided the graphics power and the NVENC support required to deploy into an edge rendering cluster. And with CloudXR, we could create robust connections to mobile handsets.”
Sustainability was considered at every level of construction and operation. The materials used in building The Green Planet AR Experience will be reused or recycled after the event to promote circularity. And combining NVIDIA RTX and CloudXR with 5G, Factory 42 can give audiences interactive experiences with hundreds of different trees, plants and creatures inside an eco-friendly, virtual space.
Experience the Future of Streaming at GTC
Learn more about how NVIDIA is helping companies create unforgettable immersive experiences at GTC, which runs from March 21-24.
Registration is free. Sign up to hear from leading companies and professionals across industries, including Factory 42, as they share insights about the future of AR, VR and other extended reality applications.
The very thing that makes the internet so useful to so many people — the vast quantity of information that’s out there — can also make going online frustrating.
There’s so much available that the sheer volume of choices can be overwhelming. That’s where recommender systems come in, explains NVIDIA AI Podcast host Noah Kravitz.
To dig into how recommender systems work — and why these systems are being harnessed by companies in industries around the globe — Kravitz spoke to Even Oldridge, senior manager for the Merlin team at NVIDIA.
Some highlights, below. For the full conversation we, um, recommend you tune in to the podcast.
Question: So what’s a recommender system and why are they important?
Oldridge: Recommender systems are ubiquitous. They’re a huge part of the internet, of most mobile apps and, really, of most places where a person interacts with a computer. A recommender system, at its heart, is a system for taking the vast number of options available in the world and boiling them down to something that’s relevant to the user in that moment or in that context.
That’s a really significant challenge, both from the engineering side and from the systems and models that need to be built. Recommender systems, in my mind, are one of the most complex and significant machine learning challenges of our day. You’re trying to represent what a user, a real live human person, is interested in at any given moment. And that’s not an easy thing to do, especially when that person may or may not know what they want.
Question: So broadly speaking, how would you define a recommender system?
Oldridge: A recommender system is a sort of machine learning algorithm that filters content. You can query the recommender system to narrow down the possible options within a particular context. The classic view that most people have of recommender systems is online shopping, where you’re browsing for a particular item and you’re shown other items that are potentially useful in that same context, or similar content. And with sites like Netflix, Spotify and other content distributors, you’re shown content based on what you’ve viewed in the past. The recommender system’s role is to try to build a summary of your interests and come up with the next relevant thing to show you.
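To make that filtering concrete, here's a minimal sketch (a toy item-to-item recommender, not any production system) that scores unseen items by their cosine similarity to the items a user has already interacted with:

```python
import numpy as np

# Toy user-item interaction matrix (rows = users, columns = items).
# A 1 means the user clicked, watched or bought the item.
interactions = np.array([
    [1, 1, 0, 0, 1],
    [0, 1, 1, 0, 0],
    [1, 0, 0, 1, 1],
    [0, 1, 1, 1, 0],
], dtype=float)

def item_similarity(matrix: np.ndarray) -> np.ndarray:
    """Cosine similarity between item columns."""
    norms = np.linalg.norm(matrix, axis=0, keepdims=True)
    normalized = matrix / np.clip(norms, 1e-9, None)
    return normalized.T @ normalized

def recommend(user_idx: int, k: int = 2) -> list:
    """Score unseen items by similarity to the items the user already interacted with."""
    sims = item_similarity(interactions)
    user_history = interactions[user_idx]
    scores = sims @ user_history          # aggregate similarity to the user's history
    scores[user_history > 0] = -np.inf    # don't re-recommend items already seen
    return list(np.argsort(scores)[::-1][:k])

print(recommend(user_idx=0))  # the items most similar to what user 0 has already seen
```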
Question: In these different examples that you talked about, do they generally operate the same way across different shopping sites or content sites that users might go to? Or are there different ways of approaching the problem?
Oldridge: There are patterns to the problem, but it’s one of the more fragmented industries, I think. If you look at things like computer vision or natural language processing, there are open-source datasets that have allowed for a lot of significant advancements in the field and allowed for standardization and benchmarking, and those fields have become pretty standardized because of that. In the recommender system space, the interaction data that your users are generating is part of the core value of your company, so most companies are reticent to reveal that data. So there aren’t a lot of great public recommender system datasets out there.
Question: What are you doing at NVIDIA? How did NVIDIA get into the business of recommender systems? What role does your team play?
Oldridge: Why NVIDIA is interested in recommender systems is, to quote Jensen [Huang, NVIDIA’s CEO], that recommender systems are the most important algorithm on the internet, and they drive a lot of the financial and compute decisions being made. For NVIDIA, it’s a very interesting machine learning workload, one that has previously been done mostly on the CPU.
We’ve gotten to a place where recommender systems on GPUs make a ton of sense. There aren’t many people trying to run large natural language processing models or large computer vision models on a CPU, but for recsys, a lot of people are still focused on CPU-based solutions. There’s a strong motivation for us to get this right because we have a vested interest in selling GPUs, but beyond that, there’s a degree of acceleration possible that’s similar to what led to the revolutions in computer vision and NLP. When things can happen 10 times faster, you’re able to do much more exploration and much more diving into the problem space. The field begins to take off in a way that it hasn’t before, and that’s something our team is really focused on: how we can enable teams to develop recommender systems much more quickly and efficiently, both from a compute-time perspective and by making sure you can develop features, train models and deploy to production really quickly and easily.
Question: How do you judge the effectiveness of a recommender system?
Oldridge: There’s a wide variety of factors used to determine and compare effectiveness, both offline, when you’re developing the model and trying to evaluate its performance, and online, when you’re running the model in production and it’s serving customers. There are a lot of metrics in that space. I don’t think there’s any one clear answer to what you need to measure. A lot of different companies are, at the heart of it, trying to measure longer-term user engagement, but that’s a very lagging signal. So you need to tie it to metrics that are much more immediate, like interactions and clicks on items.
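Offline, a common starting point is a top-k ranking metric such as recall@k. The sketch below uses made-up data purely to illustrate the calculation:

```python
def recall_at_k(recommended: list, relevant: set, k: int) -> float:
    """Fraction of the items the user actually engaged with that appear in the top-k list."""
    if not relevant:
        return 0.0
    hits = sum(1 for item in recommended[:k] if item in relevant)
    return hits / len(relevant)

# Hypothetical example: the model ranked five items; the user later clicked two of them.
ranked = ["shoes", "socks", "hat", "belt", "scarf"]
clicked = {"hat", "scarf"}
print(recall_at_k(ranked, clicked, k=3))  # 0.5 -> one of the two clicked items is in the top 3
```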
Question: Could one way to judge it potentially be something like the number of Amazon purchases a user ends up returning?
Oldridge: I think many companies will track those and other forms of user engagement, which can be both positive and negative. Those returns cost the company a lot, so they’re probably being weighed. It’s very difficult to trace them all the way back to an individual recommendation right at the start. It becomes one of the most interesting and complex challenges — at its heart, a recommender system is trying to model your preference. The preference of a human being in the world is based on a myriad of contexts, and that context can change in ways the model has no idea about. For example, on this podcast, you could tell me about an interesting book you’ve read, and that will lead me to look it up on Amazon and potentially order it. That recommendation from you as a human uses the context you have about me, about our conversation and about a bunch of other factors. The system doesn’t know that conversation has happened, so it doesn’t know that that’s something of particular relevance to me.
Question: Tell us about your background?
Oldridge: I did a Ph.D. in computer vision at the University of British Columbia here in Vancouver, where I live, and that was pre-deep learning. Everything I did in my Ph.D. could probably be summarized in five lines of TensorFlow at this point.
I went from that role to a job at Plenty of Fish, an online dating site. There were about 30 people when I joined, and the founder had written all of the algorithms that were doing the recommendations. I was the first data science hire and built that up. It was recommending humans to humans for online dating. It’s a very interesting space, and a funny one in the sense that the users are the items, and it’s reciprocal. It was a very interesting place to be — you’d leave work on a Friday night and head home, and there were probably 50,000 or 100,000 people out on a date because of an algorithm. It’s very strange to think about the number of potential new humans in the world — the marriages and whatever else that just happened — because of these algorithms.
It was interesting, and the data was driving it all. It was my first foray into recommender systems. After Plenty of Fish was sold to Match Group, I took six months off and spent the time really getting into deep learning, which I hadn’t spent any time on before, largely through the fast.ai course.
Question: You mentioned that NVIDIA has some tools available to make it easier for smaller organizations. For anybody who wants to build a recommender system, do you want to speak to any of the specific things that are out there? Or maybe tools that you’re working on with the Merlin team that folks can use?
Oldridge: The Merlin team consists largely of people like myself, who’ve built recommender systems in production in the past and understand the pain of it. It’s really hard to build a recommender system.
We’re working on three main premises:
Make it work: We want to have a framework that provides end-to-end recommendation pipelines, so you can complete all the different stages and everything you need.
Make it easy: It should be straightforward to do the things that are commonly done in the space. We’re really thinking about issues such as, “Where was a pain point in our past where it was a real challenge to use the existing tooling? And how can we smooth that pain point over?”
Make it fast: At NVIDIA, we want to make sure this is performant at scale; how these systems scale is an incredibly important part of the problem space.
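For a flavor of the "make it easy" goal, here is a minimal feature-engineering sketch using NVTabular, Merlin's GPU-accelerated preprocessing library. The column names and file paths are placeholders, and the exact ops to chain depend on the dataset:

```python
import nvtabular as nvt
from nvtabular import ops

# Placeholder columns for a toy interactions table.
cat_features = ["user_id", "item_id"] >> ops.Categorify()   # map IDs to contiguous integers
cont_features = ["price", "age"] >> ops.Normalize()         # standardize continuous columns

workflow = nvt.Workflow(cat_features + cont_features)

train = nvt.Dataset("interactions.parquet")                 # hypothetical input file
workflow.fit(train)                                         # compute mappings and statistics on the GPU
workflow.transform(train).to_parquet(output_path="processed/")
```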
Question: Where do you see the space headed over the next couple of years?
Oldridge: What we’re hoping to do with Merlin is provide a standard set of tools and a standard framework that everyone can use to think about and build accelerated recommender systems. Especially as we accelerate things by a factor of 10x or more, it changes the pattern of how people work.
One of my favorite diagrams I’ve seen since joining NVIDIA shows a developer who, at the start of their day, gets a coffee, kicks off [a data science job], then goes to get another coffee because the job takes so long, and just keeps up that back and forth, drinking six or 10 cups, until they’ve done basically three things in a day.
It hits home because that was me a couple of years ago; I was personally facing that. It was so frustrating because I wanted to get stuff done, but because of that lagging signal, you’re not nearly as effective when you get back to it and try to dig in. Seeing the difference it makes when you get to that 1-to-3-minute cycle, where you’re running something and getting the results, running something and getting the results, you get into that flow pattern and you’re really able to explore things quickly. Then you get to the point where you’re iterating so quickly that you can start leveraging the parallelization that happens on the GPU and begin to scale things up.
Question: For the folks who want to find out more about the work you’re doing and what’s going on with Merlin, where can they go online to learn more or dig a little deeper into some white papers and some of the more technical aspects of the work?
Oldridge: A great starting point is our GitHub, where we link out to a bunch of our papers. I’ve also given a bunch of talks about Merlin in various places. If you search for my name on YouTube, or for Merlin recommender systems, there’s a lot of information you can find out there.
GauGAN, an AI demo for photorealistic image generation, allows anyone to create stunning landscapes using generative adversarial networks. Named after post-Impressionist painter Paul Gauguin, it was created by NVIDIA Research and can be experienced free through NVIDIA AI Demos.
How to Create With GauGAN
The latest version of the demo, GauGAN2, turns any combination of words and drawings into a lifelike image. Users can simply type a phrase like “lake in front of mountain” and press a button to generate a scene in real time. By tweaking the text to a “lake in front of snowy mountain” or “forest in front of mountain,” the AI model instantly modifies the image.
Artists who prefer to draw a scene themselves can use the demo’s smart paintbrush to modify these text-prompted scenes or start from scratch, drawing in boulders, trees or fluffy clouds. Clicking on a filter (or uploading a custom image) allows users to experiment with different lighting or apply a specific painting style to their creations.
AI Behind the GauGAN2 Demo
At the heart of GauGAN2 are generative adversarial networks, or GANs — a kind of deep learning model that involves a pair of neural networks: a generator and a discriminator. The generator creates synthetic images. The discriminator, trained on millions of real landscape images, gives the generator network pixel-by-pixel feedback on how to make the synthetic images more realistic.
Over time, the GAN model learns to create convincing imitations of the real world, with mountains reflected in AI-generated lakes and trees losing their leaves when a scene is modified with the word “winter.”
When users draw their own doodle or modify an existing scene in the GauGAN2 demo, they’re working with segmentation maps — high-level outlines that record the location of objects in a scene. Individual areas are labeled with features like sand, river, grass or flower, giving the AI model instructions on how to fill in the scene.
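As a toy illustration (not GauGAN2's actual code), a labeled doodle can be thought of as a small grid of class IDs that gets one-hot encoded into the conditioning input a semantic image synthesis model consumes:

```python
import numpy as np

# Toy 4x4 "doodle": each cell holds the semantic label the user painted there.
LABELS = {"sky": 0, "mountain": 1, "lake": 2, "grass": 3}
doodle = np.array([
    [0, 0, 0, 0],
    [1, 1, 0, 0],
    [2, 2, 1, 3],
    [2, 2, 3, 3],
])

# One-hot encode into shape (num_labels, height, width), the usual conditioning
# input for semantic image synthesis models like the one behind GauGAN.
one_hot = np.eye(len(LABELS))[doodle].transpose(2, 0, 1)
print(one_hot.shape)  # (4, 4, 4)
```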
GauGAN has been wildly popular since it debuted at NVIDIA GTC in 2019 — it’s been used by art teachers in schools, in museums as an interactive art exhibit and by millions online.
Art directors and concept artists from top film studios and video game companies have been among the creative professionals interested in GauGAN as a tool to prototype ideas for their work. So NVIDIA Studio, a platform to assist creators, came out with a desktop application: NVIDIA Canvas.
NVIDIA Canvas brings the technology behind GauGAN to professionals in a format compatible with existing tools like Adobe Photoshop, and lets artists use NVIDIA RTX GPUs for a more fluid, interactive experience.
Editor’s note: This post is a part of our Meet the Omnivore series, which features individual creators and developers who use NVIDIA Omniverse to accelerate their 3D workflows and create virtual worlds.
Within the Mogao Caves, a cultural crossroads along what was the Silk Road in northwestern China, lies a vast reserve of tens of thousands of historical documents, paintings and statues of the Buddha.
And nearly 2,000 miles away, in eastern China, 3D artist Ting Song has brought one of these statues to life — with the help of NVIDIA Omniverse, a physically accurate 3D design collaboration platform available with RTX-powered GPUs and part of the NVIDIA Studio suite for creators.
The Forbes 30 under 30 artist explores the concept of fine art in the digital era, blending AI with traditional art, poetry and drama.
Song, who divides her time between Beijing and Shanghai, created the first digital art piece that was auctioned by traditional art houses across China — a work called “Peony Dream,” inspired by the classic Chinese play The Peony Pavilion.
She uses Adobe After Effects and Photoshop, Blender, and Unity software with Omniverse to vivify her work.
Accelerating Art-ificial Intelligence
An avid hackathon-goer growing up, Song has shared her love of cutting-edge, open-source technology by hosting hackathons in more than a dozen countries.
She saw a multitude of groundbreaking uses for technology at these events — and was particularly spurred to use AI as a tool to foster art and creativity.
Her recent works of AI-based, immersive, multidimensional art focus on portraying philosophical and aesthetic themes from traditional Chinese culture.
For her piece that reimagines the Buddha statue, Song used Adobe software to create its layers and NVIDIA StyleGAN2 to synthesize the colors of the murals in the Mogao Caves — before bringing it into Omniverse to “let it dance,” she said.
“My work aims to give traditional art forms new life, as many existing cultural creations don’t yet exist in a 3D world, only 2D,” Song said. “NVIDIA Omniverse apps like Kaolin and Audio2Face, and NVIDIA DIB-R models support artists who are switching from traditional creations to owning new experiences in virtual worlds.”
Song uses Kaolin — her favorite Omniverse app — to inspect 3D datasets, visualize 3D outputs of a model and render synthetic datasets. Song imported models and animations from Blender and Unity into Omniverse.
And with Omniverse Audio2Face, an app that quickly generates expressive facial animation from just an audio source, Song animated a virtual poet character that she plans to integrate with her “Peony Dream” piece.
In one of Song’s demos, a digital human recites a Chinese poem written by AI: “Spring is still lingering when swallows come / Strings of rain and slanting wind / Which trees are kissed upon / Stringed instruments flourish in the bloom of youth / The sun shines, and the lyric flows.”
“Digging into our true humanistic power by designing an artistic concept based on a play or poem — and then productizing it using the proper technological tools — is all enabled by Omniverse,” Song said.
In addition to revitalizing traditional works, Song often writes her own poems or scripts, on which she bases stunning visual representations made in Omniverse.
The rapid iteration and collaboration capabilities of the open Omniverse ecosystem and the power of NVIDIA RTX technology — which save her months’ worth of model training time — provide Song with “inspiration and technical confidence” for her artistic endeavors, she said.
“I hope my work inspires people to dive deeper into their traditional cultural heritage — and encourages them to use AI as a tool to help reveal the unique creative talents they have as human beings,” Song said.
Learn More at GTC
Song’s work will go on display in the AI Art Gallery and AI Playground at GTC, which runs March 21-24. The virtual conference is free to attend and will have dozens of sessions and special events featuring visionaries from the Omniverse team, Adobe, Autodesk, Epic Games, Pixar, Unity, Walt Disney Studios and more.
Creatives will also have the opportunity to connect with one another and get a behind-the-scenes look at the Omniverse roadmap in the NVIDIA Omniverse User Group and Developer Days.
With more than 11,000 stores across Thailand serving millions of customers, CP All, the country’s sole licensed operator of 7-Eleven convenience stores, recently turned to AI to dial up its call centers’ service capabilities.
Built on the NVIDIA conversational AI platform, the Bangkok-based company’s customer service bots help call-center agents answer frequently asked questions and track customer orders. The bots understand and speak Thai with 97 percent accuracy, according to Areoll Wu, deputy general manager of CP All.
This kind of innovation is a key value for CP All, which partners with several industry groups and national agencies on an annual awards program to encourage research and entrepreneurship in Thailand.
CP All’s 7-Eleven call centers manage customer inquiries in many business domains — including e-commerce, finance and retail — each of which has area-specific expert representatives. The centers typically get nearly 250,000 calls a day, according to Kritima Klomnoi, project manager at Gosoft, a subsidiary of CP All.
“Reducing hold time for customers is a key measure of our service performance,” Klomnoi said. “NVIDIA technologies offer us a 60 percent reduction in the call load that human agents must handle, allowing employees to efficiently tackle more unique and complex problems raised by customers.”
Using AI-driven automatic speech recognition services, CP All’s customer phone calls are transcribed in real time. When a customer service bot recognizes a question based on the NVIDIA-powered intelligent FAQ system, it immediately provides an answer using text-to-speech technologies.
Otherwise, the AI quickly analyzes and routes calls to the appropriate employee who can assist in resolving the query in its specific business domain. CP All has also automated all e-commerce order-tracking inquiries using AI.
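The flow described above (transcribe the call, answer from the FAQ when the match is confident, otherwise route to a domain expert) boils down to logic like the following sketch. Every function, threshold and answer here is a hypothetical placeholder, not CP All's or NVIDIA's actual API:

```python
from typing import Optional, Tuple

# Hypothetical stand-ins for the services described above. A real deployment would
# call ASR, NLU-based FAQ matching and TTS services instead of these toy functions.
FAQ = {
    "track my order": "You can track your order in the app.",
    "opening hours": "Most stores are open 24 hours.",
}
CONFIDENCE_THRESHOLD = 0.8  # assumed cutoff for trusting an FAQ match

def match_faq(text: str) -> Tuple[Optional[str], float]:
    """Toy FAQ matcher: real systems use a trained intent model, not substring checks."""
    for question, answer in FAQ.items():
        if question in text.lower():
            return answer, 0.95
    return None, 0.0

def handle_transcript(text: str) -> str:
    """Answer from the FAQ when confident; otherwise route to a human agent."""
    answer, confidence = match_faq(text)
    if answer and confidence >= CONFIDENCE_THRESHOLD:
        return f"[TTS] {answer}"          # a real bot would synthesize speech here
    return "[ROUTE] forwarded to a domain-expert agent"

print(handle_transcript("Hi, I want to track my order from yesterday"))
```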
Adapting to the Thai Language
When first exploring conversational AI, the CP All team faced the challenge of getting the model to recognize the nuances of the Thai language, Wu said.
Standard Thai uses 21 consonants, 18 pure vowel sounds, three diphthongs and five tones — making it a complex language. NVIDIA NeMo — a framework for building, training and fine-tuning GPU-accelerated speech and natural language understanding models — helped CP All work through the intricacies.
“The toolkit’s pretrained models and tools made the process of deploying our service much less daunting,” said Wu. “With the help of NeMo, we were able to quickly build and improve our AI language models, which are now optimized to understand and speak the unique Thai language.”
According to Wu, the NeMo framework enabled a 97 percent accuracy in CP All’s Thai language models, more than tenfold the accuracy achieved previously.
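For a general sense of how NeMo's pretrained models are used, loading and running a speech recognition checkpoint looks roughly like the sketch below. It uses a public English model purely for illustration, since CP All's Thai models aren't publicly available:

```python
# Illustrative only: loads a public English NeMo checkpoint. Requires the
# nemo_toolkit[asr] package; the audio file path is a hypothetical placeholder.
import nemo.collections.asr as nemo_asr

# Download a pretrained CTC speech recognition model.
asr_model = nemo_asr.models.EncDecCTCModel.from_pretrained(model_name="QuartzNet15x5Base-En")

# Transcribe a local audio file.
transcripts = asr_model.transcribe(["sample_call.wav"])
print(transcripts[0])
```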
Looking forward, CP All plans to expand its AI services to more business domains and scale to millions of concurrent sessions on NVIDIA GPU inference architecture.
This is, without a doubt, the best time to jump into cloud gaming.
GeForce NOW RTX 3080 memberships deliver up to 1440p resolution at 120 frames per second on PC, 1600p and 120 FPS on Mac, and 4K HDR at 60 FPS on NVIDIA SHIELD TV, with ultra-low latency that rivals many local gaming experiences.
All RTX 3080 members will experience benefits of the new service level by default — reduced latency, longer session lengths, smoother streams, dedicated access to a high-performance cloud gaming rig — and there are additional ways to make the most of your membership.
Understanding Resolution and FPS
Today’s PC gaming visuals are nothing short of extraordinary. Advancements in ray tracing simulate lighting and shadows to create stunning, photographic scenes, resulting in realism and deeper gaming immersion.
Resolution is the size of the image, measured in pixels. A pixel is the smallest physical point on a display, the building block of any onscreen visual. A larger number of pixels, or “higher resolution,” delivers sharper details and visuals that can accommodate a wider variety of colors, leading to stunning graphics.
Standard HD monitors are 1080p resolution, 1920 pixels wide by 1080 pixels tall. Displays with 1440p resolution, often called 2K or QHD, are 2560 x 1440 and contain roughly 1.8x the pixels of HD for incredible graphical fidelity. Some newer MacBooks have 1600p resolution displays — a 2560 x 1600 pixel count.
FPS measures the number of times an image is rendered or redrawn per second onscreen by the graphics card.
Refreshes must be extremely quick to represent fluid, smooth movement. Key frame rates include 30, 60 and 120 FPS. These thresholds are leaps in performance that have matched new generations of displays.
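The numbers above are easy to verify: pixel counts come straight from width times height, and each frame-rate threshold corresponds to a fixed per-frame time budget.

```python
# Pixel counts and per-frame time budgets for the resolutions and frame rates above.
resolutions = {"1080p": (1920, 1080), "1440p": (2560, 1440),
               "1600p": (2560, 1600), "4K": (3840, 2160)}

for name, (w, h) in resolutions.items():
    print(f"{name}: {w * h:,} pixels")
# 1080p: 2,073,600 pixels; 1440p: 3,686,400 pixels (about 1.8x 1080p)

for fps in (30, 60, 120):
    print(f"{fps} FPS -> {1000 / fps:.1f} ms to render each frame")
# 120 FPS leaves just 8.3 ms per frame, which is why higher settings involve trade-offs.
```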
Why can’t every game run at 4K resolution and 120 FPS? Simply put, there are trade-offs.
GPUs and CPUs working in tandem perform a variety of tasks, such as rendering graphics, particle effects like explosions, and visual effects, all of which become exponentially harder when graphical settings are maximized.
Most game developers prioritize delivering buttery-smooth graphics by perfecting frame rate. From there, they increase resolution and FPS for the best possible visual experience.
With a handful of hardware checks, you can unlock the maximum resolution and FPS with a GeForce NOW RTX 3080 membership, beginning with the display.
Set Up the Display to Unlock 1440p (or 1600p) at 120 FPS
Start by maximizing the display resolution.
On most Windows PCs, click the Start button, Control Panel, and then, under Appearance and Personalization, select Adjust screen resolution. Then, click the drop-down list next to Resolution, move the slider to the highest resolution and click Apply. On Mac, choose the Apple menu, then System Preferences > Displays > Scaled, then choose the highest resolution.
Next, unlock a maximum of 120 FPS.
While some games are capable of 120 FPS, the display needs the capability to refresh just as fast. This is measured in Hertz. The higher the display Hertz, or Hz, the smoother and more responsive gameplay feels.
Some displays have a refresh rate higher than 120Hz. On these displays, members will still stream at up to 120 FPS. But the higher refresh rate helps lower click-to-pixel latency, which refers to the amount of time it takes from a physical action, like pressing a controller button in a soccer game, to when it’s reflected on the screen, like the player attempting a shot.
Lower click-to-pixel latency adds responsiveness to fast-paced games and is especially critical for competitive gamers, where milliseconds can be the fine margin separating victory and defeat.
Members have the option to play at 120 FPS on a laptop with a 120Hz display, such as the newly announced MacBook Pros, or connect to a compatible 120Hz+ display, such as NVIDIA G-SYNC monitors.
To change the refresh rate on PC, click the Start button, then Settings > System > Display > Advanced display settings, then select Refresh rate and the desired Hertz. On Mac, select the Apple menu, access System Preferences, click Displays, navigate to the Refresh Rate pop-up menu and choose the requisite Hertz. For further details, visit Windows Central or Apple Support.
To connect to a 120Hz+ display, check the laptop for a compatible video port.
One of the following will work for PC: USB-C (DisplayPort or Thunderbolt), HDMI (1.4 for 1080p, or HDMI 2.0 or later for 1440p), DisplayPort 1.2 or Mini DisplayPort 1.2.
On Mac, look for a USB-C, Thunderbolt (1/2/3), HDMI (1.4 for 1080p, or HDMI 2.0 or later for 1440p), Mini DisplayPort 1.2 or USB 4 port.
Next, look for a compatible port on the monitor. Any of the above will do.
Identifying ports can be tricky, but Digital Trends and Apple have useful articles for PC and Mac, respectively.
To finish, simply acquire the necessary cable and plug the devices in.
Combinations of cables and connections will work, like HDMI to HDMI or USB-C to DisplayPort. However, performance may vary slightly.
According to testing, the optimal connection to maximize graphics performance on laptops is USB-C to DisplayPort. With Mac, a USB-C (Thunderbolt) adapter to Thunderbolt to DisplayPort connection works best.
Some laptops can connect with a simple docking station or a hub, but double-check, as not all can output maximum resolution and FPS.
For a complete guide of compatible connectors, read our support articles for Windows and macOS.
Maximize Streaming Settings on PC and Mac
With hardware sorted out, adjust streaming settings.
GeForce NOW has a convenient built-in network test that automatically detects the best streaming settings, but only up to 1080p at 60 FPS. This happens in the cloud, so gamers get a great experience and can jump in to play immediately. If the test determines an internet connection can’t maintain resolution, GeForce NOW may select a lower resolution.
For maximum resolution, open the GeForce NOW app, go to Settings, Streaming Quality and select Custom. This will open the Details drop-down menu, where Resolution can be adjusted to match the maximum size — 1440p for PCs and iMacs, and 1600p on select MacBooks.
Cranking FPS up to 120 requires slight modifications.
Open GeForce NOW, go to Settings, Streaming Quality and select Custom mode.
Change Max bit rate to Auto (our recommendation) or select the desired value.
Set Frame Rate to 120 FPS or higher. If the display can output 144Hz, set to 144 FPS.
Select VSync and choose Adaptive for good latency and smooth streaming. Turning VSync off may lower latency even further, but may cause video tearing during gameplay.
Game settings in GeForce NOW are automatically optimized for 1440p and 120 FPS in our most popular games. Changes shouldn’t be needed to get the best in-game experience.
Feel free to change settings as needed. To save custom in-game graphics, open Settings and turn on IN-GAME GRAPHICS SETTINGS. Otherwise, graphics will revert to recommended settings.
Maximizing display settings will differ depending on the platform.
Streaming on SHIELD TV With 4K HDR
GeForce NOW members can play PC games with 4K resolution and HDR at 60 FPS in exceptional 5.1 or 7.1 surround sound on SHIELD TV.
Setup is quick and easy. On a TV that is 4K-HDR-compatible, access the menu, click Settings > Stream Quality and run a network test.
GeForce NOW optimizes graphics settings for the most popular games to stream 4K HDR at 60 FPS on SHIELD. No changes to settings are needed. Similar to PC and Mac, settings can be saved with the Save my changes feature.
To play HDR-compatible games, look for the HDR badge. The GeForce NOW team configures HDR games individually, and will continue to onboard new titles in the coming months.
GeForce NOW provides gamers the freedom to save games in the cloud, then pick up and play on the go, retaining a true PC experience.
GeForce NOW at 120 FPS on Android Mobile Devices
GeForce NOW RTX 3080 memberships stream at up to 120 FPS on select Android devices.
Supported phones include the Google Pixel 6 Pro and Samsung S20 FE EG, S21, S21+, S21 Ultra and Note20 Ultra 5G, with plans to add new phones and tablets over time.
Not all phones enable 120Hz by default, since running the display at its maximum refresh rate isn’t something every user needs and can reduce battery life.
With game settings adjusted, it’s game time!
With Great Internet Comes Ultra-Low Latency
As a cloud gaming service, GeForce NOW benefits from a strong internet connection.
PC and Mac need at least 35 Mbps for streaming up to 1440p or 1600p at 120 FPS. SHIELD TV requires 40 Mbps for 4K HDR at 60 FPS. And Android requires 15 Mbps for 720p at 120 FPS, or 25 Mbps for 1080p at 120 FPS.
We strongly recommend a hardwired Ethernet connection. A 5GHz WiFi connection also provides a great gaming experience.
Next, run the GeForce NOW in-app network test to measure latency by opening GeForce NOW, navigating to Server Location and selecting Test Network.
Internet service provider and proximity to the closest server are two variables that can impact the overall experience — and are likely out of most members’ control. But there are ways to improve the home network setup for a top-tier cloud gaming experience.
Interested in RTX 3080 in the cloud? Be sure to check the regional availability website to confirm memberships are available in your country and determine if the server closest to you has been upgraded to the new GeForce NOW RTX 3080 cloud gaming rigs.
If not, run a network test to the closest RTX 3080-enabled server to determine if the ping time is acceptable.
Now that you’ve mastered the basics, it’s time to give the cloud everything you’ve got. Ready? Begin.
Follow GeForce NOW on Facebook and Twitter, and check out the GeForce NOW blog every GFN Thursday, to stay up to date on the latest features and game launches.
Guinness World Records this week presented a Stanford University-led research team with the first record for fastest DNA sequencing technique — a benchmark set using a workflow sped up by AI and accelerated computing.
Achieved in five hours and two minutes, the DNA sequencing record can allow clinicians to take a blood draw from a critical-care patient and reach a genetic disorder diagnosis the same day. The recognition was awarded by a Guinness World Records adjudicator Wednesday at Stanford University’s Jen-Hsun Huang Engineering Center, named for NVIDIA’s founder and CEO, a Stanford alumnus.
The landmark study behind the world record was led by Dr. Euan Ashley, professor of medicine, of genetics and of biomedical data science at the Stanford School of Medicine. Collaborators include researchers from Stanford, NVIDIA, Oxford Nanopore Technologies, Google, Baylor College of Medicine and the University of California at Santa Cruz.
“I think we are in unanimous agreement that this is nothing short of a miracle,” said Kimberly Powell, vice president of healthcare at NVIDIA, at the event. “This is an achievement that did go down in the history books, and will inspire another five and 10 years of fantastic work in the digital biology revolution, in which genomics is driving at the forefront.”
Diagnosing With a Genome in Record Time
The researchers achieved the record speed by optimizing every stage of the sequencing workflow. They used high-throughput nanopore sequencing on Oxford Nanopore’s PromethION Flow Cells to generate more than 100 gigabases of data per hour, and accelerated base calling and variant calling using NVIDIA GPUs on Google Cloud. A gigabase is one billion nucleotides.
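To put that throughput in perspective, here's a quick back-of-the-envelope calculation; the genome size is a rough approximation, not a figure from the study:

```python
# Rough coverage math for the sequencing throughput quoted above.
GIGABASE = 1e9                   # one billion nucleotides
throughput = 100 * GIGABASE      # more than 100 gigabases per hour (from the article)
human_genome = 3.1e9             # approximate haploid human genome size in bases (assumption)

print(f"~{throughput / human_genome:.0f}x genome coverage generated per hour of sequencing")
# -> roughly 32x coverage per hour
```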
“These innovations don’t come from one individual, or even one team,” said Greg Corrado, distinguished scientist at Google Research, at the event. “It really takes this group of people coming together to solve these problems.”
To accelerate every step — from Oxford Nanopore’s AI base calling to variant calling, where scientists identify the millions of variants in a genome — the researchers relied on the NVIDIA Clara Parabricks computational genomics application framework. They used a GPU-accelerated version of PEPPER-Margin-DeepVariant, a pipeline developed by Google and UC Santa Cruz’s Computational Genomics Laboratory.
“I believe that the innovations that we’ll see in biology and medicine in the coming century are going to depend on this kind of collaboration much more than the siloed R&D centers of the past,” Corrado said.
New Possibilities for Patient Care
Ultra-rapid genome sequencing isn’t just about setting world records. Cutting the turnaround for a genetic diagnosis from a couple of weeks to just a few hours can give doctors the rapid answers needed to treat critical-care patients, where every second counts.
And, as the technology becomes more accessible, more hospitals and research centers will be able to use whole genome sequencing as a critical tool for patient care.
“Genomics is still at the beginning — it’s not the standard of care,” said Powell. “I believe we can help make it part of the standard by reducing the cost and the complexity and democratizing it.”
Not content with the five-hour record, the team is already exploring ways to decrease the DNA sequencing time even further.
“There’s one promise we will make. We will smash this record very quickly in collaboration with Euan and his team, and NVIDIA and Google,” said Gordon Sanghera, CEO of Oxford Nanopore Technologies.
Don’t miss the chance to experience the breakthroughs that are driving the future of autonomy.
NVIDIA GTC will bring together the leaders, researchers and developers who are ushering in the era of autonomous vehicles. The virtual conference, running March 21-24, also features experts from industries transformed by AI, such as healthcare, robotics and finance.
And it’s all free to attend.
The conference features a brilliant display of the latest in AI development with the opening keynote on March 22, delivered by NVIDIA CEO and founder Jensen Huang.
The whole week is packed with more than 900 sessions covering autonomous vehicles, AI, supercomputing and more. Conference-goers also have the opportunity to network and learn from in-house experts on the latest in AI and self-driving development.
Here’s a sneak peek of what to expect at GTC next month:
Learn From Luminaries
It seems that nearly every week there’s a new development in the field of AI, but how do these breakthroughs translate to autonomous vehicles?
Hear how industry leaders are harnessing the latest AI innovations to accelerate intelligent transportation — from global automakers and suppliers to startups and researchers.
Automotive session highlights include:
Stefan Sicklinger, head of Big Loop and advanced systems division at CARIAD/VW Group, covers the process of leveraging fleet data to develop and improve autonomous driving software at scale.
Magnus Östberg, chief software officer of Mercedes-Benz, discusses how the premium automaker is creating software-defined features for the next generation of luxury.
Xiaodi Hou, co-founder and CTO of TuSimple, walks through the autonomous trucking startup’s approach to achieving level 4 autonomy.
Raquel Urtasun, CEO of Waabi, details how the startup, which recently emerged from stealth, takes an AI-first approach to autonomous driving development.
Michael Keckeisen, director of ProAI at ZF Group, outlines the role of supercomputers in developing and deploying safer, more efficient transportation.
Attendees also have the opportunity to get the inside scoop on the latest NVIDIA DRIVE solutions directly from the minds behind them.
NVIDIA DRIVE Developer Days is a series of deep-dive sessions on building safe and robust autonomous vehicles. These sessions are led by the NVIDIA engineering team, which will highlight the newest DRIVE features and discuss how to apply them to autonomous vehicle development.
This virtual content is available to all GTC attendees — register for free today and seize the opportunity to get a firsthand look at the autonomous future.
Real-time rendering and photorealistic graphics used to be tall tales, but NVIDIA Omniverse has turned them from fiction into fact.
NVIDIA’s own artists are writing new chapters in Omniverse, an accelerated 3D design platform that connects and enhances 3D apps and creative workflows, to showcase these stories.
Combined with the NVIDIA Studio platform, Omniverse and Studio-validated hardware enable creatives to push the limits of their imagination and design rich, captivating virtual worlds like never before.
One of the latest projects in Omniverse, The Storyteller, showcases a stunning retro-style writer’s room filled with leather-bound books, metallic typewriters and wooden furniture. Artists from NVIDIA used Omniverse, Autodesk 3ds Max and Substance 3D Painter to capture the essence of the room, creating detailed 3D models with realistic lighting.
Just for Reference — It Begins With Images
To kick off the project, lead environment artist Andrew Averkin looked at various interior images to use as references for the scene. From retro furniture and toys to vintage record players and sturdy bookshelves, these images were used as guidance and inspiration throughout the creative process.
The team of artists also collected various 3D models to create the assets that would populate and bring mood and atmosphere to the scene.
For one key element, the writer’s table, the team added extra details, such as texturing done in Substance 3D Painter, to create more layers of realism.
3D Assets, Assemble!
Once the 3D assets were completed, Averkin assembled the scene in Autodesk 3ds Max connected to Omniverse Create, a scene composition app that can handle complex 3D scenes and objects.
With Autodesk 3ds Max connected to Create, Averkin had a much more iterative workflow — he was able to place 3D models in the scene, make changes to them on the spot, and continue assembling the scene until he achieved the look and feel he wanted.
“The best part was that I used all the tools in Autodesk 3ds Max to quickly assemble the scene. And with Omniverse Create, I used path-traced render mode to get high-quality, photorealistic renders of the scene in real time,” said Averkin. “I also used Assembly Tool, which is a set of tools that allowed me to work with the 3D models in a more efficient way — from scattering objects to painting surfaces.”
Averkin used the Autodesk 3ds Max Omniverse Connector — a plug-in that enables users to quickly and easily convert 3D content to Universal Scene Description, or USD — to export the scene from Autodesk 3ds Max to Omniverse Create. This made it easier to sync his work from one app to another, and continue working on the project inside Omniverse.
A Story Rendered Complete
To put the final touches on the Storyteller project, the artists worked with the simple-to-use tools in Omniverse Create to add realistic, ambient lighting and shadows.
“I wanted the lighting to look like the kind you see after the rain, or on a cloudy day,” said Averkin. “I also used rectangular light behind the window, so it could brighten the indoor part of the room and provide some nice shadows.”
To stage the composition, the team placed 30 or so cameras around the room to capture its different angles and perspectives, so viewers could be immersed in the scene.
For the final render of The Storyteller, the artists used Omniverse RTX Renderer in path-traced mode to get the most realistic result.
Some shots were rendered on an NVIDIA Studio system powered by two NVIDIA RTX A6000 GPUs. The team also used Omniverse Farm — a system layer that lets users create their own render farm — to accelerate the rendering process and achieve the final design significantly faster.
Watch the final cut of The Storyteller, and learn more about Omniverse at GTC, taking place on March 21-24.
Countless analysts and businesses are talking about and implementing edge computing, which traces its origins to the 1990s, when content delivery networks were created to serve web and video content from edge servers deployed close to users.
Today, almost every business has job functions that can benefit from the adoption of edge AI. In fact, edge applications are driving the next wave of AI in ways that improve our lives at home, at work, in school and in transit.
Learn more about what edge AI is, its benefits and how it works, examples of edge AI use cases, and the relationship between edge computing and cloud computing.
What Is Edge AI?
Edge AI is the deployment of AI applications in devices throughout the physical world. It’s called “edge AI” because the AI computation is done near the user at the edge of the network, close to where the data is located, rather than centrally in a cloud computing facility or private data center.
Since the internet has global reach, the edge of the network can mean any location: a retail store, factory or hospital, or the devices all around us, like traffic lights, autonomous machines and phones.
Edge AI: Why Now?
Organizations from every industry are looking to increase automation to improve processes, efficiency and safety.
To help them, computer programs need to recognize patterns and execute tasks repeatedly and safely. But the world is unstructured and the range of tasks that humans perform covers infinite circumstances that are impossible to fully describe in programs and rules.
Advances in edge AI have opened opportunities for machines and devices, wherever they may be, to operate with the “intelligence” of human cognition. AI-enabled smart applications learn to perform similar tasks under different circumstances, much like real life.
The efficacy of deploying AI models at the edge arises from three recent innovations.
Maturation of neural networks: Neural networks and related AI infrastructure have finally developed to the point of allowing for generalized machine learning. Organizations are learning how to successfully train AI models and deploy them in production at the edge.
Advances in compute infrastructure: Powerful distributed computational power is required to run AI at the edge. Recent advances in highly parallel GPUs have been adapted to execute neural networks.
Adoption of IoT devices: The widespread adoption of the Internet of Things has fueled the explosion of big data. With the sudden ability to collect data in every aspect of a business — from industrial sensors, smart cameras, robots and more — we now have the data and devices necessary to deploy AI models at the edge. Moreover, 5G is providing IoT a boost with faster, more stable and secure connectivity.
Why Deploy AI at the Edge? What Are the Benefits of Edge AI?
Since AI algorithms are capable of understanding language, sights, sounds, smells, temperature, faces and other analog forms of unstructured information, they’re particularly useful in places occupied by end users with real-world problems. These AI applications would be impractical or even impossible to deploy in a centralized cloud or enterprise data center due to issues related to latency, bandwidth and privacy.
The benefits of edge AI include:
Intelligence: AI applications are more powerful and flexible than conventional applications that can respond only to inputs that the programmer had anticipated. In contrast, an AI neural network is not trained how to answer a specific question, but rather how to answer a particular type of question, even if the question itself is new. Without AI, applications couldn’t possibly process infinitely diverse inputs like texts, spoken words or video.
Real-time insights: Since edge technology analyzes data locally rather than in a faraway cloud delayed by long-distance communications, it responds to users’ needs in real time.
Reduced cost: By bringing processing power closer to the edge, applications need less internet bandwidth, greatly reducing networking costs.
Increased privacy: AI can analyze real-world information without ever exposing it to a human being, greatly increasing privacy for anyone whose appearance, voice, medical image or any other personal information needs to be analyzed. Edge AI further enhances privacy by containing that data locally, uploading only the analysis and insights to the cloud. Even if some of the data is uploaded for training purposes, it can be anonymized to protect user identities. By preserving privacy, edge AI simplifies the challenges associated with data regulatory compliance.
High availability: Decentralization and offline capabilities make edge AI more robust since internet access is not required for processing data. This results in higher availability and reliability for mission-critical, production-grade AI applications.
Persistent improvement: AI models grow increasingly accurate as they train on more data. When an edge AI application confronts data that it cannot accurately or confidently process, it typically uploads it so that the AI can retrain and learn from it. So the longer a model is in production at the edge, the more accurate the model will be.
How Does Edge AI Technology Work?
For machines to see, perform object detection, drive cars, understand speech, speak, walk or otherwise emulate human skills, they need to functionally replicate human intelligence.
AI employs a data structure called a deep neural network to replicate human cognition. These DNNs are trained to answer specific types of questions by being shown many examples of that type of question along with correct answers.
This training process, known as “deep learning,” often runs in a data center or the cloud due to the vast amount of data required to train an accurate model, and the need for data scientists to collaborate on configuring the model. After training, the model graduates to become an “inference engine” that can answer real-world questions.
In edge AI deployments, the inference engine runs on some kind of computer or device in far-flung locations such as factories, hospitals, cars, satellites and homes. When the AI stumbles on a problem, the troublesome data is commonly uploaded to the cloud for further training of the original AI model, which at some point replaces the inference engine at the edge. This feedback loop plays a significant role in boosting model performance; once edge AI models are deployed, they only get smarter and smarter.
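A stripped-down version of that loop (run the inference engine locally, keep confident results on the device and flag low-confidence samples for cloud retraining) might look like the following sketch, where the model, threshold and upload step are all placeholders:

```python
import numpy as np

CONFIDENCE_THRESHOLD = 0.75   # assumed cutoff; tuned per application in practice

def run_inference(sample: np.ndarray):
    """Placeholder for the trained model (the 'inference engine') running on the edge device."""
    scores = np.array([0.1, 0.6, 0.3]) if sample.mean() > 0 else np.array([0.8, 0.1, 0.1])
    labels = ["ok", "defect", "unknown"]
    idx = int(scores.argmax())
    return labels[idx], float(scores[idx])

def process_on_edge(sample: np.ndarray, upload_queue: list) -> str:
    """Classify locally; queue hard examples so the cloud can retrain the model later."""
    label, confidence = run_inference(sample)
    if confidence < CONFIDENCE_THRESHOLD:
        upload_queue.append(sample)   # send the troublesome data back for retraining
    return label

queue = []
print(process_on_edge(np.ones((4, 4)), queue), "| queued for retraining:", len(queue))
```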
What Are Examples of Edge AI Use Cases?
AI is the most powerful technology force of our time, and we’re now at a point where it’s revolutionizing the world’s largest industries.
Across manufacturing, healthcare, financial services, transportation, energy and more, edge AI is driving new business outcomes in every sector, including:
Intelligent forecasting in energy: For critical industries such as energy, in which discontinuous supply can threaten the health and welfare of the general population, intelligent forecasting is key. Edge AI models help to combine historical data, weather patterns, grid health and other information to create complex simulations that inform more efficient generation, distribution and management of energy resources to customers.
Predictive maintenance in manufacturing: Sensor data can be used to detect anomalies early and predict when a machine will fail. Sensors on equipment scan for flaws and alert management if a machine needs a repair so the issue can be addressed early, avoiding costly downtime.
AI-powered instruments in healthcare: Modern medical instruments at the edge are becoming AI-enabled with devices that use ultra-low-latency streaming of surgical video to allow for minimally invasive surgeries and insights on demand.
Smart virtual assistants in retail: Retailers are looking to improve the digital customer experience by introducing voice ordering to replace text-based searches with voice commands. With voice ordering, shoppers can easily search for items, ask for product information and place online orders using smart speakers or other intelligent mobile devices.
What Role Does Cloud Computing Play in Edge Computing?
AI applications can run in a data center like those in public clouds, or out in the field at the network’s edge, near the user. Cloud computing and edge computing each offer benefits that can be combined when deploying edge AI.
The cloud offers benefits related to infrastructure cost, scalability, high utilization, resilience from server failure, and collaboration. Edge computing offers faster response times, lower bandwidth costs and resilience from network failure.
There are several ways in which cloud computing can support an edge AI deployment:
The cloud can run the model during its training period.
The cloud continues to run the model as it is retrained with data that comes from the edge.
The cloud can run AI inference engines that supplement the models in the field when high compute power is more important than response time. For example, a voice assistant might respond to its name but send complex requests back to the cloud for parsing, a split sketched in the example after this list.
The cloud serves up the latest versions of the AI model and application.
The same edge AI application often runs across a fleet of devices in the field, with supporting software in the cloud.
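The voice-assistant pattern above (handle the simple, latency-sensitive part on the device and send heavier requests to the cloud) reduces to a routing decision like this purely illustrative sketch:

```python
WAKE_WORDS = {"hey assistant", "ok assistant"}   # hypothetical wake phrases

def handle_utterance(text: str) -> str:
    """Illustrative split: trivial requests stay on the edge, complex ones go to the cloud."""
    normalized = text.lower().strip()
    if normalized in WAKE_WORDS:
        return "edge: wake word detected locally, no network round trip"
    if len(normalized.split()) <= 3:
        return f"edge: handled short command '{normalized}' on-device"
    return f"cloud: sent '{normalized}' to a larger model for parsing"

print(handle_utterance("hey assistant"))
print(handle_utterance("what will the weather be like in Bangkok tomorrow afternoon"))
```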
Thanks to the commercial maturation of neural networks, proliferation of IoT devices, advances in parallel computation and 5G, there is now robust infrastructure for generalized machine learning. This is allowing enterprises to capitalize on the colossal opportunity to bring AI into their places of business and act upon real-time insights, all while decreasing costs and increasing privacy.
We are only in the early innings of edge AI, and still the possible applications seem endless.