GFN Thursday: All My Friends Know the Outriders

Spring. Rejuvenation. Everything’s blooming. And GeForce NOW adds even more games.

Nature is amazing.

For today’s GFN Thursday, we’re taking a look at all the exciting games coming to GeForce NOW in April.

Starting with today’s launches:

Outriders on GeForce NOW

OUTRIDERS (day-and-date release on Epic Games Store and Steam)

As mankind bleeds out in the trenches of Enoch, you’ll create your own Outrider and embark on a journey across the hostile planet. Check out this highly anticipated 1-3 player co-op RPG shooter set in an original, dark and desperate sci-fi universe.

Members can also look for the following titles later today:

  • Narita Boy (day-and-date release on Steam, March 30)
  • Tales of the Neon Sea (Free on Epic Games Store, April 1-8)
  • A-Train PC Classic (Steam)
  • Endzone – A World Apart (Steam)
  • Forts (Steam)
  • Might & Magic X Legacy (Ubisoft Connect)
  • Mr. Prepper (Steam)
  • Nine Parchments (Steam)
  • Re:ZERO -Starting Life in Another World- The Prophecy of the Throne (Steam)
  • Rhythm Doctor (Steam)
  • Shadowrun: Hong Kong – Extended Edition (Steam)
  • Styx: Master of Shadows (Steam)

April Anticipation

There’s a wide and exciting variety of games coming soon to GeForce NOW:

The Legend of Heroes: Trails of Cold Steel IV (Steam)

The epic engulfing a continent comes to a head in this long-awaited final chapter of the Trails of Cold Steel saga!

R-Type Final 2 (Steam)

The legendary side-scroller is back with beautiful 3D graphics, exhilarating shoot-’em-up gameplay, and a multitude of stages, ships and weapons that will allow you to conduct a symphony of destruction upon your foes.

Turnip Boy Commits Tax Evasion (Steam)

Play as an adorable yet trouble-making turnip. Avoid paying taxes, solve plantastic puzzles, harvest crops and battle massive beasts all in a journey to tear down a corrupt vegetable government!

And that’s not all — check out even more games you’ll be able to stream from the cloud in April:

Oops, We Did It Again

Thronebreaker: The Witcher Tales on GeForce NOW
In March, we added support for the GOG.COM versions of The Witcher series, including Thronebreaker: The Witcher Tales.

In March we said 21 titles were coming to GeForce NOW.

Turns out we added 14 additional games for a grand total of 35. That’s more than one a day.

  • Do Not Feed the Monkeys (Steam)
  • Evoland Legendary Edition (Steam)
  • GoNNER (Steam)
  • Iron Conflict (Steam)
  • Paradise Lost (Steam)
  • Railroad Corporation (Steam)
  • Snooker 19 (Steam)
  • System Shock: Enhanced Edition (Steam)
  • Stronghold: Warlords (Steam)
  • Sword and Fairy 7 Trial (Steam)
  • The Witcher 2: Assassins of Kings Enhanced Edition (GOG.COM)
  • The Witcher 3: Wild Hunt – Game of the Year Edition (GOG.COM)
  • The Witcher Adventure Game (GOG.COM)
  • Thronebreaker: The Witcher Tales (GOG.COM)
  • Wanba Warriors (Steam)

It should be an exciting month, members. What are you going to play? Let us know on Twitter or in the comments below.


All for the ‘Gram: The Next Big Thing on Social Media Could Be a Smarter Camera

Thanks to smartphones you can now point, click, and share your way to superstardom.

Smartphones have made creating—and consuming—digital photos and video on social media a pastime for billions. But the content those phones create can’t match the quality of what a good camera produces.

British entrepreneur Vishal Kumar’s startup Photogram wants to combine the benefits of mirrorless cameras — replete with interchangeable lenses, big sensors, and often complex, fiddly controls — with the smarts and connectivity of your smartphone.

The Alice Camera, due for release in October for around $760 to early backers on crowdfunding platform Indiegogo, is, first of all, a compact, mirrorless camera.

The Alice camera focuses on what cameras do well…

But its machined aluminum body is sleeker than other mirrorless cameras. There’s no onboard screen or viewfinder of any kind — just a shutter button, a control wheel, and a cold-shoe adapter.

Instead, photographers mount their smartphone to the Alice camera. Alice focuses on what smartphones can’t do: it houses a big, light-soaking sensor, and it accepts a wide array of lenses.

Alice links to your smartphone via a 5GHz Wi-Fi connection — so your smartphone can do what traditional cameras can’t. An app on your phone provides not just an easily updatable software interface on the smartphone’s big, bright screen, but the connectivity to easily share images and stream video.

The secret ingredient: an AI built using NVIDIA GPU-accelerated deep learning to help photographers wring the most out of Alice’s hardware.

A Star Is Born

Kumar, whose startup is part of NVIDIA Inception, an acceleration program that offers go-to-market support, expertise and technology for AI, data science and HPC startups, sees the opportunity for this device literally staring everyone in the face.

There are more than 1 billion active Instagram users, more than 1 billion active TikTok users, and more than 2.3 billion YouTube users. Social media influencers who can deliver great content to all these users become instantaneous global superstars, Kumar explains.

Consider Charli D’Amelio, who has more than 115 million followers, or Addison Rae, with more than 86 million, or YouTube star Mr. Beast, with more than 71 million followers.

Content creators like these may rely, in large part, on smartphone users to provide an audience. Most, however, left their smartphones behind long ago, adopting more sophisticated gear to create their content.

Firsthand Frustration

…and relies on a user’s smartphone, mounted on the back, to do what smartphones do well.

Kumar, a self-described cultural data scientist, learned this firsthand while working at Sotheby’s. Great photography was vital to sparking worldwide interest in the auction house’s offerings on social media.

“I was thinking a lot about how data science and machine learning and artificial intelligence could be applied to create video and imagery,” Kumar says. “I was using a camera all the time to create video content and becoming increasingly frustrated with operating them.”

A Camera for Creators

And for serious content creators, a better camera than the smartphone they already carry around with them is a must.

In part, that’s because you can only capture so much light in the relatively compact sensors crammed into today’s smartphones, Kumar explains. Bigger sensors soak up more light, so they can capture a high-quality image even in very dim lighting conditions, among other benefits.

So Alice is built around a Sony IMX294 10.7-megapixel Four Thirds sensor optimized for high-quality, full-width 4K video. The sensor is eight times bigger than that of a typical smartphone.

In front of that big sensor is a Micro Four Thirds lens mount. The compact interchangeable lens system gives users access to more than 100 lens options from Olympus, Panasonic, and specialty lens makers such as Sigma, Tamron, and Tokina.

Users will be able to choose from 16mm-equivalent fish-eye lenses for a super-wide angle of view or 800mm-equivalent telephoto zoom lenses able to get clear, undistorted pictures of objects far away, with plenty of options in between.

Like more traditional cameras, Alice can use a wide variety of lenses.

A Classic Deep Learning Problem

All of this is why professional photographers continue to keep big, expensive cameras in their toolkit. But the masses of content creators may never get the most out of dedicated cameras, Kumar explains.

Making more expertise more accessible to more people is a classic deep learning problem. And the technology’s roots in computer vision and image recognition make training an AI to operate a camera a natural fit.

To build Alice’s AI, Photogram CTO Liam Donovan trained an NVIDIA GPU-accelerated convolutional neural network using millions of out-of-focus and in-focus images, teaching it to distinguish between good and bad photos.
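The post doesn’t detail Donovan’s network, but the general recipe (a GPU-trained convolutional classifier that learns to label frames as sharp or blurry) looks roughly like the minimal PyTorch sketch below. The architecture, sizes and dummy data are illustrative placeholders, not Photogram’s actual model.

```python
import torch
import torch.nn as nn

class FocusClassifier(nn.Module):
    """Tiny CNN that scores an image crop as in-focus (1) or out-of-focus (0)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)  # single logit: sharp vs. blurry

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

device = "cuda" if torch.cuda.is_available() else "cpu"
model = FocusClassifier().to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

# Dummy batch standing in for labeled sharp/blurry crops (1 = in focus).
images = torch.randn(8, 3, 64, 64, device=device)
labels = torch.randint(0, 2, (8, 1), device=device).float()

loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
print(f"one training step done, loss = {loss.item():.3f}")
```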

Optimized for What You Do

The AI is a crucial element in the end-to-end deep learning image processing pipeline Photogram’s team has built.

The result is an AI that can control and improve focusing, adjust exposure and white balance automatically, and even perform image stabilization.

Revealed last September and positioned by Kumar as the “AI camera for creators,” the project had raised $200,000 from more than 250 backers by February — 7x Kumar’s original goal.

Users will eventually make Alice better at whatever photography they do, Kumar explains.

“Let’s say you’re a wedding photographer, or you like to shoot cats or clothes,” Kumar says. “We want people to be able to optimize our models and retrain them so their Alice camera can be more optimized for the photography they do.”

Smartphone Synergy

To be sure, like all early-stage crowdfunded projects, Alice is still very much a work in progress. The plan, for now, is to offer Alice to early funders on Indiegogo first and to the general public later.

However Alice is received at first, with billions around the world creating and sharing images every day, sooner or later the idea is bound to click.

Image credits: Photogram


Now Hear This: Startup Gives Businesses a New Voice

Got a conflict with your 2 p.m. appointment? Just spin up a quick assistant that takes good notes and, when your boss asks about you, even identifies itself and explains why you aren’t there.

Nice fantasy? No, it’s one of many use cases a team of some 50 ninja programmers and AI experts, along with 20 beta testers, is exploring with Dasha. And they’re looking for a few good developers to join a beta program for the product, which shows what’s possible with conversational AI on any device with a mic and a speaker.

“Conversational AI is going to be [seen as] the biggest paradigm shift of the last 40 years,” the chief executive and co-founder of Dasha, Vlad Chernyshov, wrote in a New Year’s tweet.

Using the startup’s software, its partners are already creating cool prototypes that could help make that prediction come true.

For example, a bank is testing Dasha to create a self-service support line. And the developer of a popular console game is using it to create an in-game assistant that players can consult via a smartwatch on their character’s wrist.

Custom Conversations Created Quickly

Dasha’s development tool lets an average IT developer use familiar library calls to design custom dialogs for any business process. They can tap into the startup’s unique capabilities in speech recognition, synthesis and natural-language processing running on NVIDIA GPUs in the cloud.
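To make “designing a custom dialog” concrete, here’s a deliberately generic state-machine sketch in Python. It only illustrates the idea of routing recognized intents through business-specific dialog states; Dasha’s platform has its own tooling, and none of the names below come from its actual API.

```python
# A generic dialog flow, purely illustrative and not Dasha's API: each state
# maps recognized caller intents to the next state, which has a reply to speak.
DIALOG = {
    "greeting": {
        "reply": "Hi, this is the support line. How can I help?",
        "routes": {"billing": "billing", "outage": "outage"},
    },
    "billing": {
        "reply": "I can check your latest invoice. What's your account number?",
        "routes": {"provided": "lookup"},
    },
    "outage": {
        "reply": "Sorry about that. I'll open a ticket and text you updates.",
        "routes": {},
    },
    "lookup": {"reply": "Thanks, pulling that up now.", "routes": {}},
    "handoff": {"reply": "Let me transfer you to a human agent.", "routes": {}},
}

def step(state: str, intent: str) -> tuple[str, str]:
    """Pick the next state for a recognized intent and return its spoken reply."""
    next_state = DIALOG[state]["routes"].get(intent, "handoff")
    return DIALOG[next_state]["reply"], next_state

# Example turn: the speech-recognition layer heard the caller and classified
# the utterance as a "billing" intent while the dialog was in "greeting".
reply, state = step("greeting", "billing")
print(reply, "->", state)
```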

“We built all the core technology in house because today’s alternatives have too high a latency, the voice does not sound natural or the level of controls is not flexible enough for what customers want to do,” Chernyshov said.

Dasha platform for conversational AI
NVIDIA GPUs speed up the AI engine in Dasha’s conversational AI platform.

The startup prides itself on its software that both creates and understands speech with natural inflections of emotion, breathing—even the “ums” and “ahs” that pepper real conversations. That level of fluency is helping early users get better responses from programs like Dasha Delight that automates post-sales satisfaction surveys.

Delighting Customers with Conversational AI

A bank that caters to small businesses gave Delight to its two-person team handling customer satisfaction surveys. With automated surveys, they covered more customers and even developed a process to respond to complaints, sometimes with problem fixes in less than an hour.

Separately, the startup developed a smartphone app called Dasha Assistant. It uses conversational AI to screen out unwanted sales calls but puts through others, like the pizza man confirming an order.

Last year, the company even designed an app to automate contact tracing for COVID-19.

An Ambitious Mission in AI

While one team of developers pioneers such new use cases, a separate group of researchers at Dasha pushes the envelope in realistic speech synthesis.

“We have a mission of going after artificial general intelligence, the ability for computers to understand like humans do, which we believe comes through developing systems that speak like humans do because speech is so closely tied to our intelligence,” said Chernyshov.

Below: Chernyshov demos a customer service experience with Dasha’s conversational AI.

He’s had a passion for dreaming up big ideas and coding them since 2007. That’s when he built one of the first instant messaging apps for Android at his first startup while pursuing his computer science degree in the balmy southern Siberian city of Novosibirsk, in Russia.

With no venture capital community nearby, the startup died, but that didn’t stop a flow of ideas and prototypes.

By 2017 Chernyshov learned how to harness AI and wrote a custom program for a construction company. It used conversational AI to automate the work of seeking out hundreds of dealers for a national network.

“We realized the main thing preventing mainstream adoption of conversational AI was that most automated systems were really stupid and nobody was focused on making them comfortable and natural to talk with,” he said.

A 7x Speed Up With GPUs

To get innovations to market quickly, Dasha runs all AI training and inference work on NVIDIA A100 Tensor Core GPUs and earlier-generation GPUs.

The A100 trains Dasha’s latest models for speech synthesis in a single day, 7x faster than previous-generation GPUs. In one of its experiments, Dasha trained a Transformer model 1.85x faster using four A100s than with eight V100 GPUs, or roughly 3.7x the throughput per GPU.

“We would never get here without NVIDIA. Its GPUs are an industry standard, and we’ve been using them for years on AI workflows,” he said.

NVIDIA software also gives Dasha traction. The startup eases the job of running AI in production with TensorRT, NVIDIA code that can squeeze the super-sized models used in conversational AI so they deliver inference results faster with less memory and without losing accuracy.

Mellotron, a model for speech synthesis developed by NVIDIA, gave Dasha a head start creating its custom neural networks for fluent systems.

“We’re always looking for better model architecture to do faster inference and speech synthesis, and Mellotron is superior to other alternatives,” he said.

Now, Chernyshov is looking for a few ninja programmers in a handful of industries he wants represented in the beta program for Dasha. “We want to make sure every sector gets a voice,” he quipped.


Art and Music in Light of AI

In the sea of virtual exhibitions that have popped up over the last year, the NVIDIA AI Art Gallery offers a fresh combination of incredible visual art, musical experiences and poetry, highlighting the narrative of an emerging art form based on AI technology.

The online exhibit — part of NVIDIA’s GTC event — will feature 13 standout artists from around the world who are pioneering the use of AI within their respective fields in music, visual arts and poetry.

The exhibit complements what has become the world’s premier AI conference. GTC, running April 12-16, brings together researchers from industry and academia, startups and Fortune 500 companies.

A Uniquely Immersive Experience

Unlike other virtual galleries that only depict the artist’s final piece, the AI Art Gallery is a uniquely immersive experience. Visitors can explore each artist’s creative process, the technologies used to bring their creations to life, and the finished works that shine new light on AI’s potential.

In addition to artists from last year’s AI Art Gallery, including Daniel Ambrosi, Helena Sarin, Pindar Van Arman, Refik Anadol, Scott Eaton, and Sofia Crespo and Entangled Others, next month’s exhibition features these prominent artists:

  • AIVA (music) – Pierre Barreau and Denis Shtefan, co-founders of Paris-based AI music startup AIVA, combine their computer science and musical composition training to generate personalized soundtracks using AI.
  • Allison Parrish (poet) – Parrish is a computer programmer, poet and educator whose “poetry bots” sit at the intersection of creativity, language and AI.
  • Dadabots + Keyon Christ (music) – Music and hacker duo CJ Carr and Zack Zuckowski utilize deep learning algorithms based on large datasets of recorded music to generate expressions of sound that have never existed, yet imply the sounds of soul music. Artist and producer Keyon Christ, formerly known as Mitus, brings his instantly recognizable sound to the tracks generated by Dadabots’ AI.
  • 64/1 + Harshit Agrawal (visual art) – Brothers Karthik Kalyanaraman and Raghava KK joined with Bangalore-based Harshit Agrawal to create artwork that combines their backgrounds in social studies, art, and emerging technologies. Their project, Strange Genders, uses AI to understand and showcase how people of India represent gender visually.
  • Holly Herndon (music) – Herndon and Matt Dryhurst used deep learning to develop a voice synthesizer, combining a large corpus of voices — an entirely new approach in composition.
  • Nao Tokui + Qosmo (music) – In their project, “Neural Beatbox,” Japanese artists Tokui and Qosmo use AI’s somewhat unpredictable behaviors to invite visitors to think and create outside of the box.
  • Stephanie Dinkins (visual art) – Best described as a transmedia artist, Dinkins creates experiences that spark dialogue about race, gender, aging, and future histories.

Attendees of GTC, which has free registration, will have the opportunity to interact with the artists in panel discussions and workshops. Session highlights include:

Panels and Talks

  • AI Representing Natural Forms – April 14, 11 a.m. PT
    • Join a discussion with artists Sofia Crespo, Feilican McCormick, Anna Ridler and Daniel Ambrosi to explore how they use AI in their creative process of generating interpretations of natural forms. NVIDIA Technical Specialist Chris Hebert moderates.
  • Art in Light of AI – April 13, 8 a.m. PT
    • In a discussion led by media artist Refik Anadol, a panel of artists from around the globe, including Harshit Agrawal, Scott Eaton and Helena Sarin, will compare how they combine their fine art backgrounds with their futuristic art practices.
  • DADABOTS: Artisanal Machine Funk, Generative Black Metal – April 12, 10 a.m. PT
    • Moderated by NVIDIA Senior Engineer Omer Shapira, this panel will feature creators and users of electronic instruments that rely on deep learning for their performance — whether to create entirely new sounds and music from existing recordings or to give the music playing a human form.
  • Using AI to Shape the Language of a Generation – April 12, 12 p.m. PT
    • Join a discussion of how language in the age of AI takes on new forms and tells new stories with Allison Parrish, Stephanie Dinkins and Pindar Van Arman, moderated by NVIDIA’s Heather Schoell.

Workshops

  • Beatbox with Nao Tokui – April 15, 5 p.m. PT
    • Nao Tokui, an artist and researcher based in Japan, will lead a beatbox-making workshop using his web-based app, Neural Beatbox.
  • Music-Making with AIVA – April 14, 9 a.m. PT
    • Join the team from AIVA, who’ll lead a music-making workshop using their web-based music app.

Register to join us at GTC April 12-16, and enjoy the AI Art Gallery and related sessions.


Drum Roll, Please: AI Startup Sunhouse Founder Tlacael Esparza Finds His Rhythm

Drawing on his trifecta of degrees in math, music and music technology, Tlacael Esparza, co-founder and CTO of Sunhouse, is revolutionizing electronic drumming.

Esparza has created Sensory Percussion, a combination of hardware and software that uses sensors and AI to allow a single drum to produce a complex range of sounds depending on where and how the musician hits it.

In the latest installment of the NVIDIA AI Podcast, Esparza spoke with host Noah Kravitz about the tech behind the tool, and what inspired him to create Sunhouse. Esparza has been doing drumstick tricks of his own for many years — prior to founding Sunhouse, he toured with a variety of bands and recorded drums for many albums.

Esparza’s musical skill and programming knowledge formed the basis for Sensory Percussion. Partnering with his brother, Tenoch, and with support from a New York University startup accelerator, Sunhouse was born in 2014.

Since then, it’s become successful with live performers. Esparza is especially proud of its popularity in the New York jazz community and among drumming legends like Marcus Gilmore and Wilco’s Glenn Kotche.

Esparza and Sunhouse customers will be marching to the beat of his drum far into the future — he hints at more musical tech to come.

Key Points From This Episode:

  • Esparza was exposed to the idea of applying deep learning techniques to audio processing while earning his master’s at NYU. He studied under Juan Bello, who is responsible for much of the foundational work on music information retrieval techniques, and audited courses from AI pioneer Yann LeCun.
  • One of Esparza’s goals with Sensory Percussion was to bridge the gap between engineers and musicians. He points out that software is often extremely powerful but complex or easy to use but limited. Sunhouse technology is designed to be an accessible intermediary.

Tweetables:

“It’s about capturing that information and allowing you to use it to translate all that stuff into this electronic realm” — Tlacael Esparza [6:20]

“[Sensory Percussion] ended up actually getting utilized by musicians as more of a full-on composition tool to write and perform entire pieces of music.” — Tlacael Esparza [24:10]

You Might Also Like:

Pierre Barreau Explains How Aiva Uses Deep Learning to Make Music

AI systems have been trained to take photos and transform them into the style of great artists, but now they’re learning about music. Pierre Barreau, head of Luxembourg-based startup Aiva Technologies, talks about the soaring music composed by an AI system and featured on this podcast.

How SoundHound Uses AI to Bring Voice and Music Recognition to Any Platform

SoundHound has leveraged its decade of experience in data analytics to create a voice recognition tool that companies can bake into any product. Mike Zagorsek, SoundHound’s vice president of product marketing, talks about how the company has grown into a major player in voice-driven AI.

Pod Squad: Descript Uses AI to Make Managing Podcasts Quicker, Easier

Serial entrepreneur Andrew Mason is making podcast editing easier and more collaborative with his company, Descript Podcast Studio, which uses AI, natural language processing and automatic speech synthesis.

Tune in to the AI Podcast

Get the AI Podcast through iTunes, Google Podcasts, Google Play, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn. If your favorite isn’t listed here, drop us a note.

Make the AI Podcast Better

Have a few minutes to spare? Fill out this listener survey. Your answers will help us make a better podcast.


Shackling Jitter and Perfecting Ping, How to Reduce Latency in Cloud Gaming

Looking to improve your cloud gaming experience? First, become a master of your network.

Twitch-class champions in cloud gaming shred Wi-Fi and broadband waves. They cultivate good ping to defeat two enemies — latency and jitter.

What Is Latency? 

Latency or lag is a delay in getting data from the device in your hands to a computer in the cloud and back again.

What Is Jitter?

Jitter is the annoying disruption you feel, the kind that leads to yelling at your router (“You’re breaking up, again!”), when pieces of that data (called packets) get sidetracked.

Why Are Latency and Jitter Important?

Cloud gaming works by rendering game images on a custom GFN server that may be miles away. That server, in turn, talks to the game’s own servers, and the finished frames finally appear on the device in front of you.

When you fire at an enemy, your device sends data packets to these servers. The kill happens in the game running on those servers, which send back the commands that display the win on your screen.

And it all happens in less than the blink of an eye. In technical terms, it’s measured in “ping.”

What Is Ping?

Ping is the time in milliseconds it takes a data packet to go to the server and back.

Anyone with the right tools and a little research can prune their ping down to less than 30 milliseconds. Novices can get pummeled with as much as 300 milliseconds of lag. It’s the difference between getting or being the game-winning kill.

Before we describe the ways to crush latency and jitter, let’s take its measure.

To Measure Latency, Test Your Ping

Tests at Speedtest.net and speedsmart.net are easy to run, but they only measure the latency to a generic server that may sit close to your network.

They don’t measure the time it takes to get data to and from the server you’re connecting to for your cloud gaming session. For a more accurate gauge of your ping, some cloud gaming services, such as NVIDIA GeForce NOW, sport their own built-in test for network latency. Those network tests measure the ping time to and from the respective cloud gaming server.

Blazing Wi-Fi and broadband can give your ping some zing.

My Speedtest result showed a blazing 10 milliseconds, while Speedsmart measured a respectable 23ms.

Your mileage may vary, too. If your ping sticks its head up much over 30ms, the first thing to do is check your ISP or network connection. Still having trouble? Try rebooting: turn your device and your Wi-Fi network off for 10 seconds, then turn them back on and run the tests again.
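If you’d rather script a rough check yourself, timing a TCP handshake approximates a single round trip to any host you can reach. Here’s a minimal Python sketch, where example.com and port 443 are placeholders for whatever reachable server you want to probe (not a GeForce NOW endpoint):

```python
# Rough RTT probe: the time to complete a TCP handshake is roughly one
# round trip. Host and port below are placeholders, not GFN servers.
import socket
import statistics
import time

def tcp_rtt_ms(host: str, port: int = 443, samples: int = 5) -> float:
    """Median time, in milliseconds, to open a TCP connection to host:port."""
    times = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=2):
            pass  # connection established; close immediately
        times.append((time.perf_counter() - start) * 1000)
    return statistics.median(times)

print(f"approximate ping: {tcp_rtt_ms('example.com'):.1f} ms")
```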

How to Reduce Latency and Ping

If the lag remains, more can be done for little or no cost, and new capabilities are coming down the pike that will make things even better.

First, try to get off Wi-Fi. Simply running a standard Ethernet cable from your Wi-Fi router to your device can slash latency big time.

If you can’t do that, there are still plenty of ways to tune your Wi-Fi.

Ethernet is faster, but research from CableLabs shows most gamers use Wi-Fi.

Your ping may be stuck in heavy traffic. Turn off anything else running on your Wi-Fi network, especially streaming videos, work VPNs — hey, it’s time for play — and anyone trying to download the Smithsonian archives.

A High Five Can Avoid Interference

If rush-hour traffic is unavoidable on your home network, you can still create your own diamond lane.

Unless you have an ancient Wi-Fi access point (AP, for short — often referred to as a router) suitable for display in a museum, it should support both 2.4- and 5GHz channels.

You can claw back many milliseconds of latency if you shunt most of your devices to the 2.4GHz band and save 5GHz for cloud gaming.

Wi-Fi analyzer apps, available in the Android and Apple app stores, can even determine which slices of your Wi-Fi airspace are more or less crowded. A nice fat 80MHz channel in the 5GHz band without much going on nearby is an ideal runway for cloud gaming.

Quash Latency with QoS

If it’s less than a decade old, your AP probably has something called quality of service, or QoS.

QoS can give some apps higher priority than others. APs vary widely in how they implement QoS, but it’s worth checking to see if your network can be set to give priority to cloud gaming.

NVIDIA provides a list of recommended APs it has tested with GeForce NOW, as well as a support page on how to apply QoS and other techniques.

Take a Position Against Latency

If latency persists, see if you can get physically closer to your AP. Can you move to the same room?

If not, consider buying a mesh network. That’s a collection of APs you can string around your home, typically with Ethernet cables, so you have an AP in every room where you use Wi-Fi.

Some folks suggest trying different positions for your router and your device to get the sweetest reception. But others say this will only shave a millisecond or so off your lag at best, so don’t waste too much time playing Wi-Fi yoga.

Stay Tuned for Better Ping

The good news is more help is on the way. The latest version, Wi-Fi 6, borrows a connection technology from cellular networks (called OFDMA) that significantly reduces signal interference, and with it latency.

So, if you can afford it, get a Wi-Fi 6 AP, but you’ll have to buy a gaming device that supports Wi-Fi 6, too.

Next year, Wi-Fi 6E devices should be available. They’ll sport a new 6GHz Wi-Fi band where you can find fresh channels for gaming.

Coping with Internet Latency

Your broadband connection is the other part of the net you need to set up for cloud gaming. First, make sure your internet plan matches the kind of cloud gaming you want to play.

These days a basic plan tops out at about 15 Mbits/second. That’s enough if your screen can display 1280×720 pixels, aka standard high definition or 720p. If you want a smoother, more responsive experience, step up to 1080p resolution, full high def or even 4K ultra-high def — this requires at least 25 Mbits/s. More is always better.
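As a quick sanity check against the ballpark figures above (this post’s rules of thumb, not official GeForce NOW requirements), you can compare a plan against a target resolution like this:

```python
# Ballpark minimums taken from the figures above; rules of thumb only,
# not official GeForce NOW requirements.
MIN_MBPS = {"720p": 15, "1080p": 25, "4K": 25}

def plan_covers(resolution: str, plan_mbps: float) -> bool:
    """True if a broadband plan meets the rough minimum for a streaming resolution."""
    return plan_mbps >= MIN_MBPS[resolution]

print(plan_covers("720p", 15))   # True: a basic 15 Mbit/s plan handles 720p
print(plan_covers("1080p", 15))  # False: 1080p and 4K want at least 25 Mbit/s
```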

If you’re playing on a smartphone, 5G cellular services are typically the fastest links, but in some areas a 4G LTE service may be well optimized for gaming. It’s worth checking the options with your cellular provider.

When logging into your cloud gaming service, choosing the closest server can make a world of difference.

For example, using the speedsmart.net test gave me a ping of 29ms from a server in San Francisco, 41 miles away. Choosing a server in Atlanta, more than 2,000 miles away, bloated my ping to 80ms. And forget about even trying to play on a server on another continent.

GeForce NOW members can sit back and relax on this one. The service automatically picks the fastest server for you, even if the server is a bit farther away.

A Broadband Horizon for Cloud Gaming

Internet providers want to tame the latency and jitter in their broadband networks, too.

Cable operators plan to upgrade their DOCSIS 3.1 software to create low-latency paths for cloud gamers. A related technology, called L4S, for other kinds of internet access providers and Wi-Fi APs is also in the works.

Broadband latency should shrink to a fraction of its size (from blue to red in the chart) once low-latency DOCSIS 3.1 software is available and supported.

The approach requires some work on the part of application developers and cloud gaming services. But the good news is engineers and developers across all these companies are engaged in the effort and it promises dramatic reductions in latency and jitter, so stay tuned.

Now you know the basics of navigating network latency to become a champion cloud gamer, so go hit “play.”

Follow GeForce NOW on Facebook and Twitter and stay up to date on the latest features and game launches. 


Come Sale Away with GFN Thursday

GFN Thursday means more games for GeForce NOW members, every single week.

This week’s list includes the day-and-date release of Spacebase Startopia, but first we want to share the scoop on some fantastic sales available across our digital game store partners that members will want to take advantage of this very moment.

Discounts for All

GeForce NOW is custom-built for PC gamers, and our open platform philosophy is a hallmark of PC gaming. Gamers are used to buying their games from whichever digital store they choose, and often jump between them during big sales.

That’s why we support multiple digital game stores, as well as the games already in your libraries. Why lock you down if there are great deals to be had?

When supported games go on sale, members can purchase and instantly play from GeForce NOW’s cloud gaming servers. Plus, they know that they’re adding the real PC version to their gaming library for playing on their local machines whenever they like.

Your Base, Your Rules

Spacebase Startopia on GeForce NOW
Keeping your inhabitants happy is half the battle in Spacebase Startopia, joining GeForce NOW on March 26.

This open platform philosophy is one of the key reasons why Kalypso made its games available on GeForce NOW. The publisher, known for hit strategy sims like the Tropico series, understands that bringing its games to GeForce NOW means another easy way to welcome new gamers, and keep them playing.

“We want gamers to be able to play our games as easily as possible,” says Marco Nier, international marketing manager at Kalypso. “If they can play on their local machine? Great! If they can use GeForce NOW on their laptop, or Mac, or phone? Even better.”

Kalypso’s newest game, Spacebase Startopia, joins the GeForce NOW library when it releases tomorrow, March 26. Developed by Realmforge Studios, the game challenges you to manage and maintain your own space station, and mixes economic simulation, strategic conquest and real-time strategy gameplay with more than a dash of humor. It’s a game we’ve been excited about for a while, and members will be able to stream every moment across all of their devices.

Spacebase Startopia on GeForce NOW
You decide how to renovate your space station, in hopes it becomes a huge interstellar hub for alien visitors.

Additionally, to celebrate the Spacebase Startopia launch, Kalypso has put some of its greatest GeForce NOW-enabled games on sale on Steam.

  • Commandos 2 – HD Remaster – 40 percent off until March 29 (Steam)
  • Immortal Realms: Vampire Wars – 50 percent off until March 29 (Steam)
  • Praetorians – HD Remaster – 40 percent off until March 29 (Steam)

More Savings

If you’re still figuring out how to fill your weekend, look no further. We’re thrilled to share additional deals our members can take advantage of:

Ubisoft

Square Enix

  • Just Cause 3 – 85 percent off until March 29 (Steam)
  • Just Cause 4: Reloaded – 80 percent off until March 29 (Steam)
  • Rise of the Tomb Raider: 20 Year Celebration – 80 percent off until March 29 (Steam)
  • Shadow of the Tomb Raider: Definitive Edition – 75 percent off until March 29 (Steam)

Deep Silver

  • Gods Will Fall – 20 percent off until April 8 (Epic Games Store)
  • Metro Exodus: Standard Edition – 66 percent off until April 8 (Epic Games Store)
  • Outward – 70 percent off until March 29 (Steam)

Other GFN Thursday Favorites

  • Car Mechanic Simulator 2018 – 60 percent off until April 2 (Steam)
  • Farm Manager 2018 – 90 percent off until April 2 (Steam)
  • Lonely Mountains: Downhill – 33 percent off until March 31 (Steam)
  • Superhot – 60 percent off until March 29 (Steam)
  • Superhot: Mind Control Delete – 60 percent off until March 29 (Steam)
  • Thief Simulator – 62 percent off until April 2 (Steam)
  • Tower of Time – 75 percent off until April 1 (Steam)

Finding offers like these and more is never out of reach. Be sure to check out the Sales and Special Offers row in the GeForce NOW app.

Play Overcooked! All you can Eat! on GeForce NOW
Overcooked! All You Can Eat! is one of 12 games joining the GeForce NOW library this week.

Let’s Play

If all that wasn’t enough new gaming goodness, don’t forget it’s still GFN Thursday, and that means more additions to the GeForce NOW library. Members can look for the following:

  • Spacebase Startopia (day-and-date release on Steam and Epic Games Store, March 26)
  • Overcooked! All You Can Eat! (day-and-date release on Steam, March 23)
  • Paradise Lost (day-and-date release on Steam, March 24)
  • Door Kickers (Steam)
  • Evoland Legendary Edition (Steam)
  • Iron Conflict (Steam)
  • Railroad Corporation (Steam)
  • Sword and Fairy 7 Trial (Steam)
  • Thief Gold (Steam)
  • Trackmania United Forever (Steam)
  • Worms Reloaded (Steam)
  • Wrench (Steam)

What are you planning to play this weekend? Let us know on Twitter or in the comments below.


Sweden’s AI Catalyst: 300-Petaflops Supercomputer Fuels Nordic Research

A Swedish physician who helped pioneer chemistry 200 years ago just got another opportunity to innovate.

A supercomputer officially christened in honor of Jöns Jacob Berzelius aims to establish AI as a core technology of the next century.

Berzelius (pronounced behr-zeh-LEE-us) invented chemistry’s shorthand (think H2O) and discovered a handful of elements including silicon. A 300-petaflops system now stands on the Linköping University (LiU) campus, less than 70 kilometers from his birthplace in south-central Sweden, like a living silicon tribute to innovations yet to come.

“Many cities in Sweden have a square or street that bears Berzelius’s name, but the average person probably doesn’t know much about him,” said Niclas Andersson, technical director at the National Supercomputer Centre (NSC) at Linköping University, which is home to the system based on the NVIDIA DGX SuperPOD.

The BerzeLiUs system will be twice as fast as Sweden’s current top computer and would rank among the top 10 percent on the latest list of the world’s TOP500 supercomputers.

An Ambitious AI Agenda

Andersson and others hope many people will feel ripples from the ambitious work planned for the BerzeLiUs supercomputer. For starters, it may initially tackle as many as seven two-year projects that aim to make leaps forward in areas like wireless communications, cybersecurity, large-scale IoT and efficient programming.

BerzeLiUs supercomputer in Sweden
Above and at top: The BerzeLiUs system. Pictures: Thor Balkhed, Linköping University.

In addition, Swedish researchers can use the system to collaborate with their longstanding research colleagues at Singapore’s Nanyang Technological University (NTU) on six new efforts. They include finding new ways to enhance data analytics with visualization, developing more secure AI algorithms and orchestrating multiple AI models to work as one to schedule the NTU’s campus bus network.

It’s all part of Sweden’s largest private research initiative focused on AI innovation. Known simply as WASP, the Wallenberg Artificial Intelligence, Autonomous Systems and Software Program is a 15-year effort that’s already recruited a faculty of more than three dozen international researchers and engaged 40 companies.

Backing Science for a Better World

The effort is spearheaded by the Knut and Alice Wallenberg Foundation, Sweden’s largest private financier of research. It contributed most of the 5.5 billion kronor ($650 million) for WASP. In a separate gift, the foundation donated 300 million kronor ($36 million) last October to Linköping University to build and run the BerzeLiUs supercomputer for WASP and other researchers.

WASP hosted a virtual event in December that attracted speakers from Ericsson, IKEA, Volvo and SEB Group, one of Sweden’s largest banks. At the event, the foundation’s vice chair, Marcus Wallenberg — scion of the country’s best-known family of industrialists — noted the collaboration between academia and industry is vital for research that makes a positive impact on society. In addition, Wallenberg spoke with NVIDIA CEO Jensen Huang at the inauguration of the BerzeLiUs system (see video below).

There’s little doubt that, like early advances in chemistry, AI is becoming part of our daily lives.

“AI will be everywhere, it will get into many places we can’t imagine at this point,” said Andersson, whose 25-year career at NSC has been devoted to work in high performance computing.

The New Formula: HPC+AI

“HPC traditionally is mostly about simulation; and now, with the advent of AI, simulation becomes an input to new kinds of data analytics that will drive computing to much wider use — it’s a big trend shift,” he said.

Specifically, the BerzeLiUs system will help researchers scale their work to handle the larger datasets and models that power discoveries with AI.

Jöns Jacob Berzelius, Swedish chemistry pioneer
Jöns Jacob Berzelius

“Most people have been working with single machines that are not as powerful as DGX systems, so our most important task in the coming years is to help develop algorithms that can scale; and there are very large problems that can use many GPU nodes,” he said.

Inside the New Machine

The BerzeLiUs system consists of 60 NVIDIA DGX A100 systems (each rated at 5 petaflops of AI performance, which is where the headline 300-petaflops figure comes from), linked on a 200 Gbit/second NVIDIA Mellanox InfiniBand HDR network. The same network links the processors to 1.5 petabytes of flash memory on four storage servers from DataDirect Networks.

The single InfiniBand network ensures data gets into the system fast, and AI is all about data. “Buying more storage might be the first upgrade we will need because 60 DGX A100 systems can suck up a lot of data fast,” Andersson said.

To get started quickly, NSC asked NVIDIA and Atos, who managed the build and integration, to configure the system’s software. The stack will include the Atos Codex AI Suite as well as access to NGC, NVIDIA’s hub for GPU-optimized software for AI and HPC.

In the end, all the bits and bytes come down to people who, like Berzelius, create advances that drive society forward.

“We need many new explorers in the universe of AI because it will infiltrate and transform all disciplines of research,” said Andersson.

 

 


GeForce NOW Gets New Priority Memberships and More

As GeForce NOW enters its second year and rapidly approaches 10 million members, we’re setting our sights on fresh milestones and adding new membership offerings.

First up is a new premium offering, the Priority membership, which carries the same benefits as the Founders membership: priority access to gaming sessions, extended session lengths and RTX ON for beautifully ray-traced graphics and DLSS in supported games.

Monthly memberships are available at $9.99 a month. A new annual membership option, at $99.99, provides the best value.

GeForce NOW wouldn’t be what it is today without our Founders members. That’s why we’re adding the Founders for Life benefit, which continues the special $4.99 introductory rate as long as the account is in good standing. (For more details, click here.)

Whether it’s a short time, a long time or a lifetime, we want our Founders members, who’ve supported GeForce NOW from the start, to stay a part of the family.

The Next Level

We’re grateful for the feedback we receive from all our members. We use it to improve the service and plan for the future. In the last year, that included:

  • Supporting day-and-date launches, like Cyberpunk 2077, and top free-to-play games
  • Increasing the game library, which now includes more than 800 games
  • Opening new data centers, giving us more than 20 around the world
  • Expanding capacity in existing data centers, allowing us to grow the service further
  • Improving quality of service, with even more optimizations on the way.

As GeForce NOW continues to grow, we’re working to improve streaming quality and ease of use, and to add more games.

With the 2.0.28 update, currently rolling out to members and available to all in about a week, streaming quality takes the next step.

One of the ways we do this is with a unique adaptive Vsync technology. The feature synchronizes frame rates at 60 or 59.94 Hz server-side to match the display client-side, reducing stutter and latency on supported games. A new adaptive de-jitter technology will enable us to increase bit rates for improved quality over choppy networks, too.

We’re also passionate about making GeForce NOW easier to use, including updates that get members into the game even faster. The first phase of these improvements includes account linking for key games on the platform, coming in the next 1-2 months, as well as updates to game preloading that should cut load times by about half.

Additionally, we’re adding capacity in our busiest data centers along with new server locations. Next up is Phoenix, Arizona, with our first Canadian data center, in Montreal, to follow. We expect both to be operational later this year, helping reduce wait times for Priority and Founders members.

GeForce NOW Data Centers
We’re bringing new and expanded data centers online for GeForce NOW.

We also get tons of questions from our community about bringing GeForce NOW to new regions. We just launched with a new partner in Turkey, with Saudi Arabia and Australia on the horizon.

GeForce NOW Alliance partners operate regional data centers and offer GeForce NOW in local currencies and local language support. That means reduced latency, significantly better ping times and less waiting. Expect additional GeForce NOW Alliance partners expanding into new territories soon.

Speedrunning Your GFN Thursday

Finally, we’re upgrading GFN Thursday, our ongoing commitment to bringing great PC games and service updates to members each week.

In 2020, we onboarded and released an average of 10 games a week. With a new express onboarding pipeline, our goal is to increase that by about half by year-end.

We’ve continued to add support for additional digital game stores. In case you missed it, last week we released GOG.COM versions of four CD PROJEKT RED games, including The Witcher 3: Wild Hunt Game of the Year Edition.

And since it’s Thursday, we can’t leave without giving our members their weekly dose of new games.

System Shock: Enhanced Edition on GeForce NOW
System Shock: Enhanced Edition (Steam) and more are joining the GeForce NOW library this week.

In addition to the service updates rolling out with this week’s 2.0.28 release, we’re streaming 7 new games on GeForce NOW. Members can look for:

New memberships, improved streaming, more games — GeForce NOW year two is just getting started.


Racing Ahead, Predator Cycling Speeds Design and Development of Custom Bikes with Real-Time Rendering

The world of bicycle racing has changed. Aggressive cyclists expect their bikes to meet their every need, no matter how detailed and precise. And meeting these needs requires an entirely new approach.

Predator Cycling engineers and manufactures high-end custom-built carbon fiber bicycles that have garnered praise from championship cyclists around the world. For the past 15 years, the Predator team has designed every aspect of their frames and conducted intensive engineering simulation, rendering and manufacturing processes in-house.

“I had been following Predator for over a decade, and knew there was only one guy, Aram, who could take my raw idea, refine it, and turn it into a finished product worthy of use on track cycling’s biggest stage,” said Olympic cyclist Bobby Lea.

Recently, Predator Cycling worked on its most innovative project to date — the new RF20 frame. With increasing costs of materials and the complexity of production requirements, the RF20 spent extended time in research and development.

To bring the RF20 to market at a competitive price and minimize the build time of each bicycle, Predator Cycling knew it needed to maximize performance and efficiency. The team found the exact solution it needed with the Lenovo ThinkStation P620, powered by the NVIDIA RTX A6000 GPU.

The RTX A6000 enables Predator Cycling to process more complex models, and render and run engineering simulations in real time. With the ThinkStation P620, the team can easily handle real-time computing and multitasking, allowing them to accelerate production workflows.

RTX Hammers to the Front in Every Stage 

Since adopting the RTX A6000-powered ThinkStation P620, Predator Cycling has streamlined its manufacturing processes significantly, speeding up production workflows and bringing the RF20 frame out of R&D and onto the road. Overall, the company estimates it has shrunk its go-to-market timelines by 12-16 weeks.

“The NVIDIA RTX A6000 GPU and ThinkStation P620 deliver cutting-edge performance and speed to accelerate design processes and production times,” said Aram Goganian, co-founder of Predator Cycling. “We’re able to do complex wind drag simulations, mechanical and structural testing, and topology optimizations with AI in near real time, enabling us to show customers design changes with minimal delay.”

And it’s not just projects like the RF20 frame that have been significantly accelerated — Predator Cycling’s relationships and interactions with customers have also been enhanced. 

Each of Predator Cycling’s bikes is custom-built, so customers need real-life representations to help them select the components and finishes they want. Previously, this meant building physical prototypes that took months to complete.

Now, the powerful RTX A6000 has allowed the Predator team to provide customers with realistic bike renders long before they create physical prototypes. Clients can provide instant feedback on the photorealistic renders, which allows Predator Cycling to skip continuous iteration cycles and go from prototyping to testing fast.

Predator Cycling produces photorealistic bike renders in real time, saving weeks in their go-to-market timeline from producing physical prototypes. Image courtesy of Predator Cycling.

Predator Cycling has also notably improved internal workflows when it comes to running simulations, and they’ve seen performance gains of 2-6x across a number of applications such as Luxion KeyShot, ANSYS Discovery, ANSYS Fluent and Autodesk Fusion 360.

Discover More Breakthroughs at GTC

Learn about more advanced technologies in manufacturing and product design at the GPU Technology Conference, running from April 12-16. Predator Cycling (S31226) will be at GTC to share more details about their 3D print production experience with NVIDIA technologies.

Check out some featured GTC sessions by customers and partners below:

  • Rimac Automobili (E31424) will share their experience in accelerating the design of the world’s fastest electric car.
  • Polaris (S31512) will discuss locally streaming VR content for automotive design.
  • nTopology (S32033) will present a new approach to mechanical engineering with real-time feedback, thanks to GPU acceleration.
  • BMW (S31367) will share a simulation-first approach with leveraging robots for inspecting BMW vehicles.
  • AWS, Dassault Systèmes, NVIDIA and Renault Group (E31274) will discuss balancing peak rendering needs in the cloud and accurately simulating vehicle designs using a cluster of 4,000 GPUs.

And don’t miss our special keynote address on April 12, when NVIDIA CEO Jensen Huang will share exciting news and announcements.

Register now for free and explore other manufacturing sessions at GTC.
