GeForce NOW RTX 3080 One-Month Memberships Now Available

The GeForce NOW RTX 3080 membership gives gamers unrivaled performance from the cloud – with latency so low that it feels just like playing on a local PC.

Today, gamers can experience RTX 3080-class streaming at only $19.99 a month, thanks to GeForce NOW’s new monthly membership plans*.

It’s a great chance to experience powerful cloud gaming with the six games joining the GeForce NOW library this week.

More Power at a Lower Price

Starting today, members have an easier way to try out the next generation of cloud gaming – one month at a time. GeForce NOW RTX 3080 memberships are now available in one-month increments, alongside our existing six-month subscriptions.

With great power comes great gaming. Play with an RTX 3080 membership.

GeForce NOW RTX 3080 memberships turn nearly any device into a powerful gaming rig. The service’s highest tier is capable of streaming at up to 1440p resolution and 120 frames per second on PCs, native 1440p or 1600p at 120 FPS on Macs, and 4K HDR at 60 FPS on SHIELD TV.

Members get the most out of cloud gaming with ultra-low latency that goes head-to-head with many local gaming experiences, eight-hour session lengths and full control over in-game graphics settings, including cinematic-quality RTX ON rendering in supported games.

It’s never been easier to discover why Digital Foundry declared GeForce NOW’s RTX 3080 membership “the best streaming system we’ve played.” Level up your gaming experience with all of the perks that come with the one-month RTX 3080 membership for just $19.99.

Or, sign up for a six-month RTX 3080 membership and get one month free, streaming across devices for $99.99. Founders members who have been with us since the beginning will also receive 10 percent off the subscription price and can upgrade with no risk to their “Founders for Life” benefits.

Check out our membership FAQ for more information.

Did Somebody Say ‘More Video Games?’

Looking for the perfect game to pair with your monthly RTX 3080 membership? Get ready for six more games streaming from the GeForce NOW library.

The universe is yours to explore in Distant Worlds 2.

Catch the full list of games ready to stream this week:

  • Buccaneers! (New release on Steam)
  • Distant Worlds 2 (New release on Steam)
  • Ironsmith Medieval Simulator (New release on Steam)
  • Bus Driver Simulator (Steam)
  • Martha is Dead (Steam)
  • Survival Quiz CITY (Steam)

Plus, a world comes to an end in Assassin’s Creed Valhalla: Dawn of Ragnarök, the new downloadable content arriving in the game today. Unleash new divine powers and embark on a quest through a breathtaking world to save your son and complete a legendary Viking saga, all streaming from the cloud.

Finally, the release timing for The Settlers has shifted, and the game will join GeForce NOW at a later date.

Play all of the titles arriving this week, and more, with the power of RTX 3080 and just a one-month commitment. This week also comes with a question for you; let us know your answer on Twitter.

*Prices may vary depending on local region. Check GeForceNOW.com for more info.

Storage Specialist Excelero Joins NVIDIA

Excelero, a Tel Aviv-based provider of high-performance software-defined storage, is now a part of NVIDIA.

The company’s team of engineers — including its seasoned co-founders with decades of experience in HPC, storage and networking — brings deep expertise in the block storage that large businesses use in storage-area networks.

Now their mission is to help expand support for block storage in our enterprise software stack, such as in clusters for high-performance computing. Block storage also has an important role to play inside the DOCA software framework that runs on our DPUs.

“The Excelero team is joining NVIDIA as demand is surging for high-performance computing and AI,” said Yaniv Romem, CEO and co-founder of Excelero. “We’ll be working with NVIDIA to ensure our existing customers are supported, and going forward we’re thrilled to apply our expertise in block storage to NVIDIA’s world-class AI and HPC platforms,” he added.

Founded in 2014, Excelero developed NVMesh, software that manages and secures virtual arrays of NVMe flash drives as block storage available across public and private clouds.

Excelero’s software has won praise from users for its high throughput, low latency and support for Kubernetes containers. It’s also attracted collaborations with major cloud service providers.

The company has been an NVIDIA partner since its early days, attracting the former Mellanox, now part of NVIDIA, as an investor. We collaborated on accelerating storage with RDMA, a key technology at the heart of both InfiniBand and RoCE (Ethernet) networks.

NVIDIA will continue to support Excelero’s customers by honoring its contracts. Looking ahead, Excelero’s technology will be integrated into NVIDIA’s enterprise software stack.

We welcome this world-class engineering team to NVIDIA.

GFN Thursday Marches Forward With 21 Games Coming to GeForce NOW This Month

A new month means a whole new set of games coming to GeForce NOW.

Members can look forward to 21 titles joining the GeForce NOW library in March, including day-and-date releases like Shadow Warrior 3, with support for NVIDIA DLSS.

Bring a Katana to a Gunfight

Shoot, slash and slide into Shadow Warrior 3, new this week for GeForce NOW members.

The latest entry from Devolver Digital and Flying Wild Hog is a seamless blend of fast-paced gunplay, razor-sharp melee combat and a new, free-running movement system. Embark on an improbable mission as fallen corporate shogun Lo Wang across nearly all of your devices, including Chromebooks and Macs.

With support for NVIDIA DLSS, RTX 3080 members enjoy the game’s fast-paced action at even faster speeds and cutting-edge performance. Stream every air dash, wall run and katana slice at up to 1440p resolution and 120 frames per second on PCs and native 1440p or 1600p at 120 FPS on Macs for eight awesome hours at a time.

March Madness, But Make It Video Games

Reach for the stars and explore a new planet by streaming ELEX II.

Another month packed full of great gaming kicks off with eight games ready to stream this week, part of the 21 total titles coming to the cloud in March:

  • ELEX II (New release on Steam)
  • FAR: Changing Tides (New release on Steam)
  • Shadow Warrior 3 (New release on Steam)
  • AWAY: The Survival Series (Epic Games Store)
  • Labyrinthine Dreams (Steam)
  • Sins of a Solar Empire: Rebellion (Steam)
  • TROUBLESHOOTER: Abandoned Children (Steam)
  • The Vanishing of Ethan Carter (Epic Games Store)

Also coming in March:

  • Buccaneers! (New release on Steam, March 7)
  • Ironsmith Medieval Simulator (New release on Steam, March 9)
  • Distant Worlds 2 (New release on Steam, March 10)
  • Monster Energy Supercross – The Official Videogame 5 (New release on Steam, March 17)
  • The Settlers (New release on Ubisoft Connect, March 17)
  • Syberia: The World Before (New release on Steam and Epic Games Store, March 18)
  • Lumote: The Mastermote Chronicles (New release on Steam, March 24)
  • Turbo Sloths (New release on Steam, March 30)
  • Blood West (Steam)
  • Bus Driver Simulator (Steam)
  • Conan Chop Chop (Steam)
  • Dread Hunger (Steam)
  • Fury Unleashed (Steam)
  • Hundred Days – Winemaking Simulator (Steam)
  • The Legend of Heroes: Trails of Cold Steel II (Steam)
  • Martha is Dead (Steam and Epic Games Store)
  • Power to the People (Steam)
  • Project Zomboid (Steam)
  • Rugby 22 (Steam)

Get Your Fill of Games From February

On top of the 30 titles announced in February, a few extra found their way to the GeForce NOW library. Catch the additional games that joined last month:

We also announced that Two Worlds Epic Edition would be coming to GeForce NOW. At this time, the title is no longer coming to the service.

What are you planning to play this weekend? Could it be something new, or is your back catalog calling? Let us know on Twitter.

Beyond Be-leaf: Immersive 3D Experience Transports Audiences to Natural Worlds With Augmented Reality

Imagine walking through the bustling streets of London’s Piccadilly Circus when suddenly you’re in a tropical rainforest, surrounded by vibrant flowers and dancing butterflies.

That’s what audiences see in the virtual world of The Green Planet AR Experience, an interactive augmented reality attraction that blends physical and digital worlds to connect people with nature.

During the Green Planet AR Experience, powered by EE 5G, visitors are led through a living rainforest and six distinct biomes by a 3D hologram of Sir David Attenborough, familiar to many as the narrator of some of the world’s most-watched nature documentaries.

All images courtesy of Factory 42.

Audiences engage and interact with the plant life by using a mobile device, which acts as a window into the natural world.

To bring these virtual worlds to life in a sustainable way, award-winning studio Factory 42 combined captivating storytelling with cutting-edge technology. Using NVIDIA RTX and CloudXR, the creative team elevated the AR experience and delivered high-fidelity, photorealistic virtual environments over a 5G network.


Natural, Immersive AR Over 5G — It’s a Stream Come True

The Green Planet AR Experience’s mission is to inspire, educate and motivate visitors toward positive change by showcasing how plants are vital to all life on Earth. Through the project, Factory 42 and the BBC help audiences gain a deeper understanding of ecosystems, the importance of biodiversity and what it means to protect our planet.

To create an immersive environment that captured the rich, vivid colors and details of natural worlds, the Factory 42 team needed high-quality imagery and graphics power. Using mobile edge computing allowed them to deliver the interactive experience to a large number of users over EE’s private 5G network.

The AR experience runs on a custom, on-premises GPU edge-rendering stack powered by NVIDIA RTX 8000 professional GPUs. Using NVIDIA RTX, Factory 42 created ultra-high-quality 3D digital assets, environments, interactions and visual effects that made the natural elements look as realistic as possible.

With the help of U.K.-based integrator The GRID Factory, the GPU edge-rendering stack is connected to EE’s private 5G network using the latest Ericsson Industry Connect solution for a dedicated wireless cellular network. Using NVIDIA RTX Virtual Workstation (RTX vWS) on VMware Horizon, and NVIDIA’s advanced CloudXR streaming solution, Factory 42 can stream all the content from the edge of the private 5G network to the Samsung S21 mobile handsets used by each visitor.

“NVIDIA RTX vWS and CloudXR were a step ahead of the competitive products — their robustness, ability to fractionalize the GPU, and high-quality delivery of streamed XR content were key features that allowed us to create our Green Planet AR Experience as a group experience to thousands of users,” said Stephen Stewart, CTO at Factory 42.

The creative team at Factory 42 designed the content in the AR environment, which is rendered in real time with the Unity game engine. The 3D hologram of Sir David was created using volumetric capture technology provided by Dimension Studios. Spatial audio provides a surround-sound setup, which guides people through the virtual environment as digital plants and animals react to the presence of visitors in the space.

Combining these technologies, Factory 42 created a new level of immersive experience — one only made possible through 5G networks.

“NVIDIA RTX and CloudXR are fundamental to our ability to deliver this 5G mobile edge compute experience,” said Stewart. “The RTX 8000 GPU provided the graphics power and the NVENC support required to deploy into an edge rendering cluster. And with CloudXR, we could create robust connections to mobile handsets.”

Sustainability was considered at every level of construction and operation. The materials used in building The Green Planet AR Experience will be reused or recycled after the event to promote circularity. And combining NVIDIA RTX and CloudXR with 5G, Factory 42 can give audiences interactive experiences with hundreds of different trees, plants and creatures inside an eco-friendly, virtual space.

Experience the Future of Streaming at GTC

Learn more about how NVIDIA is helping companies create unforgettable immersive experiences at GTC, which runs from March 21-24.

Registration is free. Sign up to hear from leading companies and professionals across industries, including Factory 42, as they share insights about the future of AR, VR and other extended reality applications.

And watch the keynote address by NVIDIA CEO Jensen Huang, on March 22 at 8 a.m. Pacific, to hear the latest news on NVIDIA technologies.

Podsplainer: What’s a Recommender System? NVIDIA’s Even Oldridge Breaks It Down

The very thing that makes the internet so useful to so many people — the vast quantity of information that’s out there — can also make going online frustrating.

There’s so much available that the sheer volume of choices can be overwhelming. That’s where recommender systems come in, explains NVIDIA AI Podcast host Noah Kravitz.

To dig into how recommender systems work — and why these systems are being harnessed by companies in industries around the globe — Kravitz spoke to Even Oldridge, senior manager for the Merlin team at NVIDIA.

Some highlights, below. For the full conversation we, um, recommend you tune in to the podcast.

Question: So what’s a recommender system and why are they important?

Oldridge: Recommender systems are ubiquitous. They’re a huge part of the internet, of most mobile apps and, really, of most places where a person interacts with a computer. A recommender system, at its heart, is a system for taking the vast number of options available in the world and boiling them down to something that’s relevant to the user in that moment or context.

That’s a really significant challenge, both on the engineering side and in the systems and models that need to be built. Recommender systems, in my mind, are one of the most complex and significant machine learning challenges of our day. You’re trying to represent what a user, a real live human person, is interested in at any given moment. And that’s not an easy thing to do, especially when that person may or may not know what they want.

Question: So broadly speaking, how would you define a recommender system?

Oldridge: A recommender system is a sort of machine learning algorithm that filters content, so you can query it to narrow down the possible options within a particular context. The classic view most people have of recommender systems is online shopping, where you’re browsing for a particular item and seeing other items that are potentially useful in the same context, or similar content. And with sites like Netflix and Spotify and other content distributors, you’re seeing content based on what you’ve viewed in the past. The recommender system’s role is to build a summary of your interests and come up with the next relevant thing to show you.
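
To make that filtering idea concrete, here’s a minimal sketch of item-to-item collaborative filtering, one classic way of narrowing the options down. The toy data and helper function are purely illustrative; production systems like Merlin involve far larger models and pipelines.

```python
# Item-item collaborative filtering over a toy interaction matrix.
import numpy as np

# Rows = users, columns = items; 1.0 means the user interacted with the item.
interactions = np.array([
    [1, 1, 0, 0],
    [0, 1, 1, 0],
    [1, 0, 1, 1],
], dtype=float)

# Cosine similarity between item columns.
norms = np.linalg.norm(interactions, axis=0, keepdims=True)
item_sim = (interactions.T @ interactions) / (norms.T @ norms + 1e-9)

def recommend(user_idx, k=2):
    """Score unseen items by their similarity to the user's history."""
    seen = interactions[user_idx]
    scores = item_sim @ seen      # aggregate similarity to the user's items
    scores[seen > 0] = -np.inf    # never re-recommend items already seen
    return [int(i) for i in np.argsort(scores)[::-1][:k]]

print(recommend(0))  # -> [2, 3]: items most similar to user 0's history
```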

Question: In these different examples that you talked about, do they generally operate the same way across different shopping sites or content sites that users might go to? Or are there different ways of approaching the problem?

Oldridge: There are patterns to the problem, but it’s one of the more fragmented industries, I think. If you look at things like computer vision or natural language processing, there are open-source datasets that have allowed for a lot of significant advancements in the field and allowed for standardization and benchmarking, and those fields have become pretty standardized because of that. In the recommender system space, the interaction data that your users generate is part of the core value of your company, so most companies are reluctant to reveal that data. As a result, there aren’t a lot of great public recommender system datasets out there.

Question: What are you doing at NVIDIA? How did NVIDIA get into the business of recommender systems? What role does your team play?

Oldridge: Why NVIDIA is interested in the business of recommender systems is, to quote Jensen [Huang, NVIDIA’s CEO], that recommender systems are the most important algorithm on the internet, and that they drive a lot of the financial and compute decisions being made. For NVIDIA, it’s a very interesting machine learning workload, one that has previously been handled mostly on the CPU.

We’ve gotten to a place where recommender systems on GPUs make a ton of sense. There aren’t many people trying to run large natural language processing models or large computer vision models on a CPU, but for recsys, a lot of people are still focused on CPU-based solutions. There’s a strong motivation for us to get this right because we have a vested interest in selling GPUs, but beyond that, a similar degree of acceleration is possible to the one that led to the revolutions in computer vision and NLP. When things can happen 10 times faster, you’re able to do much more exploration and much more diving into the problem space. The field begins to take off in a way that it hasn’t before, and that’s something our team is really focused on: how we can enable teams to develop recommender systems much more quickly and efficiently, both from a compute-time perspective and by making sure you can develop features, train models and deploy to production quickly and easily.

Question: How do you judge the effectiveness of a recommender system?

Oldridge: There’s a wide variety of factors used to determine and compare effectiveness, both offline, when you’re developing the model and trying to evaluate its performance, and online, when the model is running in production and serving customers. There are a lot of metrics in that space, and I don’t think there’s any one clear answer for what you need to measure. At the heart of it, a lot of companies are trying to measure longer-term user engagement, but that’s a very lagging signal. So you need to tie it to metrics that are much more immediate, like interactions and clicks on items.
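
On the offline side, one widely used metric is precision@k: the fraction of the top-k recommended items the user actually went on to interact with. A minimal sketch, with purely illustrative data:

```python
def precision_at_k(recommended, relevant, k):
    """Fraction of the top-k recommendations that the user interacted with."""
    top_k = recommended[:k]
    hits = sum(1 for item in top_k if item in relevant)
    return hits / k

recs = ["a", "b", "c", "d"]   # ranked output of a recommender
clicked = {"b", "d"}          # items the user actually clicked
print(precision_at_k(recs, clicked, k=3))  # 1 hit in the top 3 -> 0.333...
```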

Question: Could one way to judge it potentially be something like the number of Amazon purchases a user ends up returning?

Oldridge: I think many companies will track returns and other forms of user engagement, which can be both positive and negative. Returns cost the company a lot, so they’re probably being weighed. But it’s very difficult to trace them all the way back to an individual recommendation right at the start. It becomes one of the most interesting and complex challenges — at its heart, a recommender system is trying to model your preference, and the preference of a human being is based on a myriad of contexts. That context can change in ways the model has no idea about. For example, on this podcast you could tell me about an interesting book you’ve read, and that will lead me to look it up on Amazon and potentially order it. That recommendation from you as a human draws on the context you have about me, about our conversation and about a bunch of other factors. The system doesn’t know that conversation happened, so it doesn’t know the book is of particular relevance to me.

Question: Tell us about your background?

Oldridge: I did a Ph.D. in computer vision at the University of British Columbia here in Vancouver, where I live, and that was pre-deep learning. Everything I did in my Ph.D. could be summarized in probably five lines of TensorFlow at this point.

I went from that role to a job at Plenty of Fish, an online dating site. There were about 30 people when I joined, and the founder had written all of the algorithms doing the recommendations. I was the first data science hire and built that up. It was recommending humans to humans for online dating. It’s a very interesting space, and a funny one in the sense that the users are the items, and it’s reciprocal. It was a fascinating place to be — you’d leave work on a Friday night and head home, and there were probably 50,000 or 100,000 people out on a date because of an algorithm. It’s very strange to think about the number of potential new humans in the world — marriages, whatever else — that happened because of these algorithms.

It was interesting, and the data was driving it all. It was my first foray into recommender systems. After Plenty of Fish was sold to Match Group, I took six months off and spent time really getting into deep learning, which I hadn’t spent any time on before, through the fast.ai course.

Question: You mentioned that NVIDIA has some tools available to make it easier for smaller organizations. For anybody who wants to build a recommender system, do you want to speak to any of the specific things that are out there? Or maybe tools that you’re working on with the Merlin team that folks can use?

Oldridge: The Merlin team consists largely of people like myself, who’ve built recommender systems in production in the past and understand the pain of it. It’s really hard to build a recommender system.

We’re working on three main premises:

  • Make it work: We want to have a framework that provides end-to-end recommendations, so you can complete all the different stages and all the things you need.
  • Make it easy: It should be straightforward to be able to do things that are commonly done within the space. We’re really thinking about issues such as, “Where was a pain point in our past where it was a real challenge to use the existing tooling? And how can we smooth that pain point over?”
  • Make it fast: At NVIDIA, we want to make sure this is performant at scale, and how these things scale is an incredibly important part of the problem space.

Question: Where do you see the space headed over the next couple of years?

Oldridge: What we’re hoping to do with Merlin is provide a standard set of tools and a standard framework that everyone can use and think about to do accelerated recommender systems. Especially as we accelerate things by a factor of 10x or more, it changes the pattern.

One of my favorite diagrams I’ve seen since joining NVIDIA shows a developer who, at the start of the day, gets a coffee, kicks off [a data science job], then goes to get another coffee because the job takes so long, back and forth like that, drinking six or 10 cups and getting through basically three things in a day.

It hits home because that was me a couple of years ago. It was so frustrating because I wanted to get stuff done, but with that lagging signal you’re not nearly as effective when you get back to the problem and try to dig in. When you get to that one-to-three-minute cycle where you’re running something and getting the results, running something and getting the results, you get into that flow pattern and you’re really able to explore things quickly. Then you’re iterating so quickly that you can start leveraging the parallelization that happens on the GPU and begin to scale things up.

Question: For the folks who want to find out more about the work you’re doing and what’s going on with Merlin, where can they go online to learn more or dig a little deeper into some white papers and some of the more technical aspects of the work?

Oldridge: A great starting point is our GitHub, where we link to a bunch of our papers. I’ve also given a number of talks about Merlin; if you search for my name on YouTube, or for Merlin recommender systems, there’s a lot of information out there.

Subscribe to the AI Podcast: Now Available on Amazon Music

You can now listen to the AI Podcast through Amazon Music.

You can also get the AI Podcast through iTunes, Google Podcasts, Google Play, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn.

Make the AI Podcast better: Have a few minutes to spare? Fill out our listener survey.

What Is GauGAN? How AI Turns Your Words and Pictures Into Stunning Art

GauGAN, an AI demo for photorealistic image generation, allows anyone to create stunning landscapes using generative adversarial networks. Named after post-Impressionist painter Paul Gauguin, it was created by NVIDIA Research and can be experienced free through NVIDIA AI Demos.

How to Create With GauGAN

The latest version of the demo, GauGAN2, turns any combination of words and drawings into a lifelike image. Users can simply type a phrase like “lake in front of mountain” and press a button to generate a scene in real time. By tweaking the text to “lake in front of snowy mountain” or “forest in front of mountain,” the AI model instantly modifies the image.

Artists who prefer to draw a scene themselves can use the demo’s smart paintbrush to modify these text-prompted scenes or start from scratch, drawing in boulders, trees or fluffy clouds. Clicking on a filter (or uploading a custom image) allows users to experiment with different lighting or apply a specific painting style to their creations.

AI Behind the GauGAN2 Demo

At the heart of GauGAN2 are generative adversarial networks, or GANs — a kind of deep learning model that involves a pair of neural networks: a generator and a discriminator. The generator creates synthetic images. The discriminator, trained on millions of real landscape images, gives the generator network pixel-by-pixel feedback on how to make the synthetic images more realistic.
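
For readers who want to see that dynamic in code, below is a minimal, generic GAN training loop on toy data. It’s a sketch of the general technique, not GauGAN’s actual architecture (GauGAN adds spatially adaptive normalization and, as noted above, trains its discriminator on millions of real landscape images).

```python
# A toy GAN: G learns to mimic a simple "real" 2-D distribution.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))  # generator
D = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1))  # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(64, 2) + 3.0       # stand-in for real training data
    fake = G(torch.randn(64, 8))          # generator's synthetic samples

    # Discriminator: push real samples toward label 1, fakes toward 0.
    d_loss = loss_fn(D(real), torch.ones(64, 1)) + \
             loss_fn(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: try to make the discriminator label its samples as real.
    g_loss = loss_fn(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```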

Over time, the GAN model learns to create convincing imitations of the real world, with mountains reflected in AI-generated lakes and trees losing their leaves when a scene is modified with the word “winter.”

Landscape generated by GauGAN2

When users draw their own doodle or modify an existing scene in the GauGAN2 demo, they’re working with segmentation maps — high-level outlines that record the location of objects in a scene. Individual areas are labeled with features like sand, river, grass or flower, giving the AI model instructions on how to fill in the scene.
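
A segmentation map is straightforward to picture in code: a 2-D array of integer class labels, one per pixel. The labels below are hypothetical stand-ins for the demo’s feature set:

```python
import numpy as np

# Hypothetical class labels for a landscape scene.
SKY, MOUNTAIN, LAKE, GRASS = 0, 1, 2, 3

seg_map = np.full((256, 256), SKY, dtype=np.uint8)  # start with all sky
seg_map[120:180, :] = MOUNTAIN                      # a band of mountains
seg_map[180:, :] = LAKE                             # water below them
seg_map[200:, :60] = GRASS                          # a grassy bank

# An image-to-image model conditioned on seg_map then "fills in"
# photorealistic texture for each labeled region.
```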

GauGAN has been wildly popular since it debuted at NVIDIA GTC in 2019 — it’s been used by art teachers in schools, in museums as an interactive art exhibit and by millions online.

Art directors and concept artists from top film studios and video game companies had been among the creative professionals interested in GauGAN as a tool to prototype ideas for their work. So NVIDIA Studio, a platform to assist creators, came out with a desktop application: NVIDIA Canvas.

NVIDIA Canvas brings the technology behind GauGAN to professionals in a format compatible with existing tools like Adobe Photoshop, and lets artists use NVIDIA RTX GPUs for a more fluid, interactive experience.

To learn more about the AI behind GauGAN, register free for NVIDIA GTC and tune in to the session “Expressing Your Imagination with GauGAN2,” Thursday, March 24, at 10 a.m. Pacific.

NVIDIA GTC runs online March 21-24. To hear the latest in AI research, tune in to the keynote address by NVIDIA CEO Jensen Huang on March 22 at 8 a.m. Pacific.

Meet the Omnivore: 3D Creator Makes Fine Art for Digital Era Inspired by Silk Road Masterpieces

Editor’s note: This post is a part of our Meet the Omnivore series, which features individual creators and developers who use NVIDIA Omniverse to accelerate their 3D workflows and create virtual worlds.

Within the Mogao Caves, a cultural crossroads along what was the Silk Road in northwestern China, lies a natural reserve of tens of thousands of historical documents, paintings and statues of the Buddha.

And nearly 2,000 miles away, in eastern China, 3D artist Ting Song has brought one of these statues to life — with the help of NVIDIA Omniverse, a physically accurate 3D design collaboration platform available with RTX-powered GPUs and part of the NVIDIA Studio suite for creators.

Ting Song

The Forbes 30 Under 30 artist explores the concept of fine art in the digital era, blending AI with traditional art, poetry and drama.

Song, who divides her time between Beijing and Shanghai, created the first digital art piece that was auctioned by traditional art houses across China — a work called “Peony Dream,” inspired by the classic Chinese play The Peony Pavilion.

She uses Adobe After Effects and Photoshop, Blender, and Unity software with Omniverse to vivify her work.

Song’s ‘Peony Dream’ digital art piece

Accelerating Art-ificial Intelligence

An avid hackathon-goer growing up, Song has shared her love of cutting-edge, open-source technology by hosting hackathons in more than a dozen countries.

She saw a multitude of groundbreaking uses for technology at these events — and was particularly spurred to use AI as a tool to foster art and creativity.

Her recent works of AI-based, immersive, multidimensional art focus on portraying philosophical and aesthetic themes from traditional Chinese culture.

For her piece that reimagines the Buddha statue, Song used Adobe software to create its layers and NVIDIA StyleGAN2 to synthesize the colors of the murals in the Mogao Caves — before bringing it into Omniverse to “let it dance,” she said.

“My work aims to give traditional art forms new life, as many existing cultural creations don’t yet exist in a 3D world, only 2D,” Song said. “NVIDIA Omniverse apps like Kaolin and Audio2Face, and NVIDIA DIB-R models support artists who are switching from traditional creations to owning new experiences in virtual worlds.”

Song uses Kaolin — her favorite Omniverse app — to inspect 3D datasets, visualize 3D outputs of a model and render synthetic datasets. She also imports models and animations from Blender and Unity into Omniverse.

And with Omniverse Audio2Face, an app that quickly generates expressive facial animation from just an audio source, Song animated a virtual poet character that she plans to integrate with her “Peony Dream” piece.

In the following demo, a digital human recites a Chinese poem written by AI: “Spring is still lingering when swallows come / Strings of rain and slanting wind / Which trees are kissed upon / Stringed instruments flourish in the bloom of youth / The sun shines, and the lyric flows.”

“Digging into our true humanistic power by designing an artistic concept based on a play or poem — and then productizing it using the proper technological tools — is all enabled by Omniverse,” Song said.

In addition to revitalizing traditional works, Song often writes her own poems or scripts, on which she bases stunning visual representations made in Omniverse.

The rapid iteration and collaboration capabilities of the open-source Omniverse ecosystem and the power of NVIDIA RTX technology — which save her months’ worth of model training time — provide Song with “inspiration and technical confidence” for her artistic endeavors, she said.

“I hope my work inspires people to dive deeper into their traditional cultural heritage — and encourages them to use AI as a tool to help reveal the unique creative talents they have as human beings,” Song said.

Learn More at GTC

Song’s work will go on display in the AI Art Gallery and AI Playground at GTC, which runs March 21-24. The virtual conference is free to attend and will have dozens of sessions and special events featuring visionaries from the Omniverse team, Adobe, Autodesk, Epic Games, Pixar, Unity, Walt Disney Studios and more.

Creatives will also have the opportunity to connect with one another and get a behind-the-scenes look at the Omniverse roadmap in the NVIDIA Omniverse User Group and Developer Days.

Creators and developers can download NVIDIA Omniverse for free and get started with step-by-step tutorials on the Omniverse YouTube channel. Follow Omniverse on Instagram, Twitter and Medium for additional resources and inspiration. Check out the Omniverse forums and join our Discord Server to chat with the community.

Talking the Talk: Retailer Uses Conversational AI to Help Call Center Agents Increase Customer Satisfaction

With more than 11,000 stores across Thailand serving millions of customers, CP All, the country’s sole licensed operator of 7-Eleven convenience stores, recently turned to AI to dial up its call centers’ service capabilities.

Built on the NVIDIA conversational AI platform, the Bangkok-based company’s customer service bots help call-center agents answer frequently asked questions and track customer orders. The bots understand and speak Thai with 97 percent accuracy, according to Areoll Wu, deputy general manager of CP All.

This kind of innovation is a key value for CP All, which partners with several industry groups and national agencies on an annual awards program to encourage research and entrepreneurship in Thailand.

CP All’s system uses NVIDIA DGX systems and the NVIDIA NeMo framework for natural language processing training, and the NVIDIA Triton Inference Server for AI inference and model deployment.

Keeping Up With the Calls

CP All’s 7-Eleven call centers manage customer inquiries in many business domains — including e-commerce, finance and retail — which each have area-specific expert representatives. The centers typically get nearly 250,000 calls a day, according to Kritima Klomnoi, project manager at Gosoft, a subsidiary of CP All.

“Reducing hold time for customers is a key measure of our service performance,” Klomnoi said. “NVIDIA technologies offer us a 60 percent reduction in the call load that human agents must handle, allowing employees to efficiently tackle more unique and complex problems raised by customers.”

Using AI-driven automatic speech recognition services, CP All’s customer phone calls are transcribed in real time. When a customer service bot recognizes a question based on the NVIDIA-powered intelligent FAQ system, it immediately provides an answer using text-to-speech technologies.

Otherwise, the AI quickly analyzes and routes calls to the appropriate employee who can assist in resolving the query in its specific business domain. CP All has also automated all e-commerce order-tracking inquiries using AI.
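
The post doesn’t detail how the FAQ matching works; as a rough illustration, one common approach is to score the transcribed question against the known FAQ list and fall back to a human agent below a confidence threshold. The questions, answers and threshold in this sketch are hypothetical placeholders, not CP All’s system:

```python
# TF-IDF similarity between a transcribed question and a FAQ list.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

faq = {
    "Where is my order?": "Use the tracking number sent to your phone.",
    "How do I return an item?": "Bring the item to any store within 7 days.",
}

vectorizer = TfidfVectorizer()
faq_vectors = vectorizer.fit_transform(list(faq.keys()))

def answer(transcribed_question, threshold=0.3):
    query = vectorizer.transform([transcribed_question])
    scores = cosine_similarity(query, faq_vectors)[0]
    best = scores.argmax()
    if scores[best] < threshold:
        return "ROUTE_TO_HUMAN_AGENT"  # hand off to a live representative
    return list(faq.values())[best]

print(answer("where's my order right now"))
```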

Adapting to the Thai Language

When first exploring conversational AI, the CP All team faced the challenge of getting the model to recognize the nuances of the Thai language, Wu said.

Standard Thai uses 21 consonants, 18 pure vowel sounds, three diphthongs and five tones — making it a complex language. NVIDIA NeMo — a framework for building, training and fine-tuning GPU-accelerated speech and natural language understanding models — helped CP All work through the intricacies.

“The toolkit’s pretrained models and tools made the process of deploying our service much less daunting,” said Wu. “With the help of NeMo, we were able to quickly build and improve our AI language models, which are now optimized to understand and speak the unique Thai language.”

According to Wu, the NeMo framework enabled a 97 percent accuracy in CP All’s Thai language models, more than tenfold the accuracy achieved previously.

Looking forward, CP All plans to expand its AI services to more business domains and scale to millions of concurrent sessions on NVIDIA GPU inference architecture.

Learn more at CP All’s panel at GTC, running March 21-24.

How to Make the Most of GeForce NOW RTX 3080 Cloud Gaming Memberships

This is, without a doubt, the best time to jump into cloud gaming.

GeForce NOW RTX 3080 memberships deliver up to 1440p resolution at 120 frames per second on PC, 1600p and 120 FPS on Mac, and 4K HDR at 60 FPS on NVIDIA SHIELD TV, with ultra-low latency that rivals many local gaming experiences.

All RTX 3080 members will experience benefits of the new service level by default — reduced latency, longer session lengths, smoother streams, dedicated access to a high-performance cloud gaming rig — and there are additional ways to make the most of your membership.

Understanding Resolution and FPS

Today’s PC gaming visuals are nothing short of extraordinary. Advancements in ray tracing simulate lighting and shadows to create stunning, photographic scenes, resulting in realism and deeper gaming immersion.

Resolution is the size of the image, measured in pixels. A pixel is the smallest physical point on a display, the building block of any onscreen visual. A larger number of pixels, or “higher resolution,” delivers sharper details and visuals that can accommodate a wider variety of colors, leading to stunning graphics.

Standard HD monitors are 1080p resolution, 1920 pixels wide by 1080 pixels tall. Displays with 1440p, aka 2K screens, are 2560 x 1440 and contain roughly 1.8x the pixels of HD for incredible graphical fidelity. Some newer MacBooks have 1600p resolution displays — a 2560 × 1600 pixel count.
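
The arithmetic behind those pixel counts, for the curious:

```python
# Pixel counts for common display resolutions, relative to 1080p HD.
resolutions = {
    "1080p (HD)": (1920, 1080),
    "1440p (2K)": (2560, 1440),
    "1600p":      (2560, 1600),
    "4K (UHD)":   (3840, 2160),
}

hd_pixels = 1920 * 1080
for name, (w, h) in resolutions.items():
    pixels = w * h
    print(f"{name}: {pixels:,} pixels ({pixels / hd_pixels:.2f}x HD)")
# 1440p works out to ~1.78x HD, while 4K is exactly 4x HD.
```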

FPS measures the number of times an image is rendered or redrawn per second onscreen by the graphics card.

Refreshes must be extremely quick to represent fluid, smooth movement. Key frame rates include 30, 60 and 120 FPS. These thresholds are leaps in performance that have matched new generations of displays.
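
Each step up in frame rate shrinks the time budget for rendering a single frame, which is why higher FPS demands so much more from the hardware:

```python
# Per-frame rendering budget at the key frame-rate thresholds.
for fps in (30, 60, 120):
    print(f"{fps} FPS -> {1000 / fps:.1f} ms per frame")
# 30 FPS -> 33.3 ms, 60 FPS -> 16.7 ms, 120 FPS -> 8.3 ms
```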

Why can’t every game run at 4K resolution and 120 FPS? Simply put, there are trade-offs.

GPUs and CPUs working in tandem perform a variety of tasks, such as rendering graphics, simulating particle effects like explosions and applying visual effects, all of which become exponentially harder when graphical settings are maximized.

Most game developers prioritize delivering buttery-smooth graphics by perfecting frame rate. From there, they increase resolution and FPS for the best possible visual experience.

With a handful of hardware checks, you can unlock the maximum resolution and FPS with a GeForce NOW RTX 3080 membership, beginning with the display.

Set Up the Display to Unlock 1440p (or 1600p) at 120 FPS

Start by maximizing the display resolution.

On most Windows PCs, click the Start button, then Control Panel, and under Appearance and Personalization, select Adjust screen resolution. Click the drop-down list next to Resolution, move the slider to the highest resolution and click Apply. On Mac, choose the Apple menu, then System Preferences > Displays > Scaled, and choose the highest resolution.

Next, unlock a maximum of 120 FPS.

While some games are capable of 120 FPS, the display needs the capability to refresh just as fast. This is measured in Hertz. The higher the display Hertz, or Hz, the smoother and more responsive gameplay feels.

60Hz is great for everyday computing tasks and the standard for gaming. The 120Hz threshold offers extraordinary visual quality that all gamers should experience.

Some displays have a refresh rate higher than 120Hz. On these displays, members will still stream at up to 120 FPS. But the higher refresh rate helps lower click-to-pixel latency, which refers to the amount of time it takes from a physical action, like pressing a controller button in a soccer game, to when it’s reflected on the screen, like the player attempting a shot.

Lower click-to-pixel latency adds responsiveness to fast-paced games and is especially critical for competitive gamers, where milliseconds can be the fine margin separating victory and defeat.

Members have the option to play at 120 FPS on a laptop with a 120Hz display, such as the newly announced MacBook Pros, or connect to a compatible 120Hz+ display, such as NVIDIA G-SYNC monitors.

To change the refresh rate on PC, click the Start button, then Settings > System > Display > Advanced display settings, and select Refresh rate and the desired hertz. On Mac, select the Apple menu, access System Preferences, click Displays, navigate to the Refresh Rate pop-up menu and choose the requisite hertz. For further details, visit Windows Central or Apple Support.

To connect to a 120Hz+ display, check the laptop for a compatible video port.

One of the following will work for PC: USB-C (DisplayPort or Thunderbolt), HDMI (1.4 for 1080p, or HDMI 2.0 or later for 1440p), DisplayPort 1.2 or Mini DisplayPort 1.2.

On Mac, look for a USB-C, Thunderbolt (1/2/3), HDMI (1.4 for 1080p, or HDMI 2.0 or later for 1440p), Mini DisplayPort 1.2 or USB 4 port.

Next, look for a compatible port on the monitor. Any of the above will do.

Identifying ports can be tricky, but Digital Trends and Apple have useful articles for PC and Mac, respectively.

To finish, simply acquire the necessary cable and connect the devices.

Ready for GeForce RTX 3080 graphical goodness.

Combinations of cables and connections will work, like HDMI to HDMI or USB-C to DisplayPort. However, performance may vary slightly.

According to testing, the optimal connection for maximizing graphics performance on laptops is USB-C to DisplayPort. With Mac, a USB-C (Thunderbolt) to DisplayPort adapter connection works best.

Some laptops can connect with a simple docking station or hub, but double-check, as not all can output maximum resolution and FPS.

For a complete guide of compatible connectors, read our support articles for Windows and macOS.

Maximize Streaming Settings on PC and Mac

With hardware sorted out, adjust streaming settings.

GeForce NOW has a convenient built-in network test that automatically detects the best streaming settings, but only up to 1080p at 60 FPS. This happens in the cloud, so gamers get a great experience and can jump in to play immediately. If the test determines an internet connection can’t maintain resolution, GeForce NOW may select a lower resolution.

For maximum resolution, open the GeForce NOW app, go to Settings, then Streaming Quality, and select Custom. This opens the Details drop-down menu, where Resolution can be adjusted to the maximum size — 1440p for PCs and iMacs, and 1600p on select MacBooks.

Cranking FPS up to 120 requires slight modifications.

  • Open GeForce NOW, go to Settings, Streaming Quality and select Custom mode.
  • Change Max bit rate to Auto (our recommendation) or select the desired value.
  • Set Frame Rate to 120 FPS or higher. If the display can output 144Hz, set to 144 FPS.
  • Select VSync and choose Adaptive for good latency and smooth streaming. Turning VSync off may lower latency even further, but may cause video tearing during gameplay.

Users may be tempted to set frame rates to 120 FPS on displays of 60Hz or less, which will reduce latency but cause tearing on screen. Choose the experience you prefer; it may vary by game.

Game settings in GeForce NOW are automatically optimized for 1440p and 120 FPS in our most popular games. Changes shouldn’t be needed to get the best in-game experience.

Feel free to change settings as needed. To save custom in-game graphics, open Settings and turn on IN-GAME GRAPHICS SETTINGS. Otherwise, graphics will revert to the recommended settings.

Maximizing display settings will differ depending on the platform.

Streaming on SHIELD TV With 4K HDR

GeForce NOW members can play PC games with 4K resolution and HDR at 60 FPS in exceptional 5.1 or 7.1 surround sound on SHIELD TV.

Cyberpunk 2077. Image courtesy of CD PROJEKT RED.

Setup is quick and easy. On a TV that is 4K-HDR-compatible, access the menu, click Settings > Stream Quality and run a network test.

GeForce NOW optimizes graphics settings for the most popular games to stream 4K HDR at 60 FPS on SHIELD. No changes to settings are needed. Similar to PC and Mac, settings can be saved with the Save my changes feature.

To play HDR-compatible games, look for the HDR badge. The GeForce NOW team configures HDR games individually, and will continue to onboard new titles in the coming months.

Look for the HDR and RTX badges.

GeForce NOW provides gamers the freedom to save games in the cloud, then pick up and play on the go, retaining a true PC experience.

GeForce NOW at 120 FPS on Android Mobile Devices 

GeForce NOW RTX 3080 memberships stream at up to 120 FPS on select Android devices.

Supported phones include the Google Pixel 6 Pro and Samsung S20 FE 5G, S21, S21+, S21 Ultra and Note20 Ultra 5G, with plans to add new phones and tablets over time.

Gaming on the go never looked this good.

Not all phones enable 120Hz by default: it’s the maximum performance, which some users don’t need, and it can reduce battery life.

Enable 120Hz for Samsung devices by following these instructions — or check out the phone’s user guide.

With game settings adjusted, it’s game time!

With Great Internet Comes Ultra-Low Latency

As a cloud gaming service, GeForce NOW benefits from a strong internet connection.

PC and Mac need at least 35 Mbps for streaming at up to 1440p or 1600p at 120 FPS. SHIELD TV requires 40 Mbps for 4K HDR at 60 FPS. And Android requires 15 Mbps for 720p at 120 FPS, or 25 Mbps for 1080p at 120 FPS.
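
Those minimums are easy to check against a measured connection speed. The table below mirrors the figures above; the helper function is illustrative, not part of any GeForce NOW tool:

```python
# Minimum bandwidth (Mbps) for each streaming configuration.
MIN_MBPS = {
    ("PC/Mac", "1440p/1600p @ 120 FPS"): 35,
    ("SHIELD TV", "4K HDR @ 60 FPS"): 40,
    ("Android", "720p @ 120 FPS"): 15,
    ("Android", "1080p @ 120 FPS"): 25,
}

def can_stream(measured_mbps, device, mode):
    """True if the measured speed meets the minimum for this configuration."""
    return measured_mbps >= MIN_MBPS[(device, mode)]

print(can_stream(50, "PC/Mac", "1440p/1600p @ 120 FPS"))  # True
```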

We strongly recommend a hardwired Ethernet connection. A 5GHz WiFi connection also provides a great gaming experience.

Check out the recommended routers automatically optimized for GeForce NOW.

Next, run the GeForce NOW in-app network test to measure latency by opening GeForce NOW, navigating to Server Location and selecting Test Network.

Internet service provider and proximity to the closest server are two variables that can impact the overall experience — and are likely out of most members’ control. But there are ways to improve the home network setup for a top-tier cloud gaming experience.

Read more about reducing latency in cloud gaming.

Get Gaming in GeForce RTX 3080

Interested in RTX 3080 in the cloud? Be sure to check the regional availability website to confirm memberships are available in your country and determine if the server closest to you has been upgraded to the new GeForce NOW RTX 3080 cloud gaming rigs.

If not, run a network test to the closest RTX 3080-enabled server to determine if the ping time is acceptable.

Now that you’ve mastered the basics, it’s time to give the cloud everything you’ve got. Ready? Begin.

Follow GeForce NOW on Facebook and Twitter, and check out the GeForce NOW blog every GFN Thursday, to stay up to date on the latest features and game launches.

Guinness World Record Awarded for Fastest DNA Sequencing — Just 5 Hours

Guinness World Records this week presented a Stanford University-led research team with the first record for fastest DNA sequencing technique — a benchmark set using a workflow sped up by AI and accelerated computing.

Achieved in five hours and two minutes, the DNA sequencing record can allow clinicians to take a blood draw from a critical-care patient and reach a genetic disorder diagnosis the same day. The recognition was awarded by a Guinness World Records adjudicator Wednesday at Stanford University’s Jen-Hsun Huang Engineering Center, named for NVIDIA’s founder and CEO, a Stanford alumnus.

The landmark study behind the world record was led by Dr. Euan Ashley, professor of medicine, of genetics and of biomedical data science at the Stanford School of Medicine. Collaborators include researchers from Stanford, NVIDIA, Oxford Nanopore Technologies, Google, Baylor College of Medicine and the University of California at Santa Cruz.

An adjudicator from Guinness World Records presented the record to the project’s collaborators this week. Image credit: Steve Fisch, courtesy of Stanford University.

“I think we are in unanimous agreement that this is nothing short of a miracle,” said Kimberly Powell, vice president of healthcare at NVIDIA, at the event. “This is an achievement that did go down in the history books, and will inspire another five and 10 years of fantastic work in the digital biology revolution, in which genomics is driving at the forefront.”

Diagnosing With a Genome in Record Time

The researchers achieved the record speed by optimizing every stage of the sequencing workflow. They used high-throughput nanopore sequencing on Oxford Nanopore’s PromethION Flow Cells to generate more than 100 gigabases of data per hour, and accelerated base calling and variant calling using NVIDIA GPUs on Google Cloud. A gigabase is one billion nucleotides.
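
Some back-of-envelope math shows why that throughput matters. The genome size (~3.1 gigabases) and the 30x coverage depth used below are commonly cited figures for human whole-genome sequencing, not numbers taken from the study itself:

```python
# Rough estimate of raw sequencing time at the reported throughput.
genome_gb = 3.1             # human genome size, in gigabases
coverage = 30               # typical whole-genome sequencing depth
throughput_gb_per_hr = 100  # reported PromethION output, gigabases/hour

data_needed_gb = genome_gb * coverage  # ~93 gigabases of reads
hours = data_needed_gb / throughput_gb_per_hr
print(f"~{data_needed_gb:.0f} Gb of reads, ~{hours:.1f} h of raw sequencing")
```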

“These innovations don’t come from one individual, or even one team,” said Greg Corrado, distinguished scientist at Google Research, at the event. “It really takes this group of people coming together to solve these problems.”

To accelerate every step — from Oxford Nanopore’s AI base calling to variant calling, where scientists identify the millions of variants in a genome — the researchers relied on the NVIDIA Clara Parabricks computational genomics application framework. They used a GPU-accelerated version of PEPPER-Margin-DeepVariant, a pipeline developed by Google and UC Santa Cruz’s Computational Genomics Laboratory.

“I believe that the innovations that we’ll see in biology and medicine in the coming century are going to depend on this kind of collaboration much more than the siloed R&D centers of the past,” Corrado said.

New Possibilities for Patient Care

Ultra-rapid genome sequencing isn’t just about setting world records. Cutting the turnaround for a genetic diagnosis from a couple of weeks to just a few hours can give doctors the rapid answers needed to treat critical-care patients, where every second counts.

And, as the technology becomes more accessible, more hospitals and research centers will be able to use whole genome sequencing as a critical tool for patient care.

“Genomics is still at the beginning — it’s not the standard of care,” said Powell. “I believe we can help make it part of the standard by reducing the cost and the complexity and democratizing it.”

Not content with the five-hour record, the team is already exploring ways to decrease the DNA sequencing time even further.

“There’s one promise we will make. We will smash this record very quickly in collaboration with Euan and his team, and NVIDIA and Google,” said Gordon Sanghera, CEO of Oxford Nanopore Technologies.

Hear more about this research from Dr. Euan Ashley by registering free for NVIDIA GTC, where he’ll present a talk titled “When Every Second Counts: Accelerated Genome Sequencing for Critical Care” on Tuesday, March 22 at 2 p.m. Pacific.

Subscribe to NVIDIA healthcare news here.
