2 Powerful 2 Be Stopped: ‘Dying Light 2 Stay Human’ Arrives on GeForce NOW’s Second Anniversary

Great things come in twos. Techland’s Dying Light 2 Stay Human arrives with RTX ON and is streaming from the cloud tomorrow, Feb. 4.

Plus, in celebration of the second anniversary of GeForce NOW, February is packed full of membership rewards in Eternal Return, World of Warships and more. There are also 30 games joining the GeForce NOW library this month, with four streaming this week.

Take a Bite Out of Dying Light 2

It’s the end of the world. Get ready for Techland’s Dying Light 2 Stay Human, releasing on Feb. 4 on GeForce NOW with RTX ON.

Civilization has fallen to the virus, and The City, one of the last large human sanctuaries, is torn by conflict in a world ravaged by the infected dead. You are a wanderer with the power to determine the fate of The City in your search to learn the truth.

Struggle to survive by using clever thinking, daring parkour, resourceful traps and creative weapons to win helpful allies and overcome enemies in brutal combat. Discover the secrets of a world thrown into darkness and make decisions that determine your destiny. There’s one thing you can never forget — stay human.

We’ve teamed up with Techland to bring real-time ray tracing to Dying Light 2 Stay Human, delivering the highest level of realistic lighting and depth to every scene of the new game.

Hold on to your humanity and charge into ‘Dying Light 2’ with RTX ON.

Dying Light 2 Stay Human features a dynamic day-night cycle that is enhanced with the power of ray tracing, including fully ray-traced lighting throughout. Ray-traced global illumination, reflections and shadows bring the world to life with significantly better diffused lighting and improved shadows, as well as more accurate and crisp reflections.

“We’ve been working with NVIDIA to expand the world of Dying Light 2 with ray-tracing technology so that players can experience our newest game with unprecedented image quality and more immersion than ever,” said Tomasz Szałkowski, rendering director at Techland. “Now, gamers can play Dying Light 2 Stay Human streaming on GeForce NOW to enjoy our game in the best way possible and exactly as intended with the RTX 3080 membership, even when playing on underpowered devices.”

Enter the dark ages and stream Dying Light 2 Stay Human from both Steam and Epic Games Store on GeForce NOW at launch tomorrow.

An Anniversary Full of Rewards

Happy anniversary, members. We’re celebrating GeForce NOW turning two with three rewards — including in-game content for Eternal Return, World of Warships and more.

All aboard! We’re setting sail towards great rewards this month for titles like World of Warships.

Check in on GFN Thursdays throughout February for updates on the upcoming rewards, and make sure you’re opted in by checking the box for Rewards in the GeForce NOW account portal.

Thanks for two years of great gaming on the cloud!

Thanks to the cloud, this past year, over 15 million members around the world have streamed nearly 180 million hours of their favorite games like Apex Legends, Rust and more from the ever-growing GeForce NOW library.

Members have played with the power of the new six-month RTX 3080 memberships, delivering cinematic graphics with RTX ON in supported games like Cyberpunk 2077 and Control. They’ve also experienced gaming with ultra-low latency and maximized eight-hour session lengths across their devices.

It’s All Fun and Games in February

The party doesn’t stop there. It’s also the first GFN Thursday of the month, which means a whole month packed full of games.

There’s a whole lot of new games coming to the cloud in February.

Gear up for the 30 new titles coming to the cloud in February, with four games ready to stream this week:

  • Life is Strange Remastered (New release on Steam, Feb. 1)
  • Life is Strange: Before the Storm Remastered (New release on Steam, Feb. 1)
  • Dying Light 2 Stay Human (New release on Steam and Epic Games Store, Feb. 4)
  • Warm Snow (Steam)

Also coming in February:

  • Werewolf: The Apocalypse – Earthblood (New release on Steam, Feb. 7)
  • Sifu (New release on Epic Games Store, Feb. 8)
  • Diplomacy is Not an Option (New release on Steam, Feb. 9)
  • SpellMaster: The Saga (New release on Steam, Feb. 16)
  • Destiny 2: The Witch Queen Deluxe Edition (New release on Steam, Feb. 22)
  • SCP: Pandemic (New release on Steam, Feb. 22)
  • Martha is Dead (New release on Steam and Epic Games Store, Feb. 24)
  • Ashes of the Singularity: Escalation (Steam)
  • AWAY: The Survival Series (Epic Games Store)
  • Citadel: Forged With Fire (Steam)
  • Escape Simulator (Steam)
  • Galactic Civilizations III (Steam)
  • Haven (Steam)
  • Labyrinthine Dreams (Steam)
  • March of Empires (Steam)
  • Modern Combat 5 (Steam)
  • Parkasaurus (Steam)
  • People Playground (Steam)
  • Police Simulator: Patrol Officers (Steam)
  • Sins of a Solar Empire: Rebellion (Steam)
  • Train Valley 2 (Steam)
  • TROUBLESHOOTER: Abandoned Children (Steam)
  • Truberbrook (Steam and Epic Games Store)
  • Two Worlds Epic Edition (Steam)
  • Valley (Steam)
  • The Vanishing of Ethan Carter (Epic Games Store)

We make every effort to launch games on GeForce NOW as close to their release as possible, but, in some instances, games may not be available immediately.

Extra Games From January

As the cherry on top of the games announced in January, an extra nine titles snuck their way onto GeForce NOW last month.

With another anniversary for the cloud and all of these updates, there’s never too much fun. Share your favorite GeForce NOW memories from the past year and talk to us on Twitter.


Rain or Shine: Radar Vision Sees Through Clouds to Support Emergency Flood Relief

Flooding usually comes with various bad weather conditions, such as thick clouds, heavy rain and blustery winds.

GPU-powered data science systems can now help researchers and emergency flood response teams see through it all.

John Murray, visiting professor in the Geographic Data Science Lab at the University of Liverpool, developed cuSAR, a platform that can monitor ground conditions using radar data from the European Space Agency.

cuSAR uses the satellite data to create images that portray accurate geographic information about what’s happening on the ground, below the bad weather conditions.

To create the radar vision platform, Murray used the NVIDIA RAPIDS suite of software libraries and the CUDA parallel computing platform, as well as NVIDIA GPUs.

Emergency Flood Response

The platform was originally designed for the property insurance sector, as mortgage and insurance providers need to assess risks that affect properties, including flooding.

Using satellite data in this way requires clear visuals of the ground, but obtaining analyzable images meant potentially waiting weeks for breaks in Britain’s infamous cloud cover. With cuSAR, users can gain insights in near real time.

Use cases for the radar vision platform have now expanded to the safety sector.

The North Wales Regional Emergency Planning Service first contacted the Geographic Data Science Lab for help with serious flooding that occurred in the Dee Valley a couple of years ago. Low, dense clouds hung over the valley, making it impossible for the team to fly helicopters. And drones weren’t able to give a sufficient overview of how the floodplains along the river were behaving.

Using the NVIDIA GPU-powered image analysis platform, Murray was able to provide high-quality renders of the affected areas each day of the flooding. The emergency planning service used this information to allocate its limited resources to critical areas, adjusting its efforts as the flooding progressed.

Last year, the lab provided radar data to monitor a vaccine factory under threat from rising water levels. This time, weather conditions allowed emergency response teams to send helicopters to the exact locations from which to best combat the flooding.

Correcting a Distorted View

Creating analyzable images from radar data is no simple task.

Due to the earth’s curvature, the perspective of satellite images is distorted. This distortion needs to be mathematically corrected and overlaid with location data, using a process called rubbersheeting, for precise geolocation.

A typical radar manifest contains half a billion data points, presented as a grid.

An example of the distortion from a radar image in comparison to the location it corresponds to. Courtesy of Fusion Data Science Ltd, CC BY-SA 3.0.

“You can’t just take radar data and make an image from it,” said Murray. “There’s a lot of processing and math involved, and that’s where the GPUs come in.”

Murray wrote the code for cuSAR using NVIDIA RAPIDS and Python’s Numba CUDA compiler to match the radar and location data seamlessly.

Traditional Java or Python code would usually take around 40 minutes to provide an output. Backed by an NVIDIA GPU, it takes only four seconds.
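For a sense of why this workload suits GPUs so well, here’s a minimal Numba CUDA sketch of a polynomial geolocation correction. It’s not cuSAR’s actual code: the first-order warp, coefficients and array names are all illustrative assumptions. What matters is that every grid point can be corrected independently, so the work spreads across thousands of GPU threads.

```python
import numpy as np
from numba import cuda

@cuda.jit
def rubbersheet(lon, lat, c, out_e, out_n):
    # Map one radar sample's (lon, lat) to corrected map coordinates
    # using a first-order polynomial warp -- a toy stand-in for the
    # correction against Ordnance Survey grid references.
    i = cuda.grid(1)
    if i < lon.size:
        out_e[i] = c[0] + c[1] * lon[i] + c[2] * lat[i]  # easting
        out_n[i] = c[3] + c[4] * lon[i] + c[5] * lat[i]  # northing

n = 10_000_000  # a real radar manifest holds ~500 million points
lon = cuda.to_device(np.random.rand(n).astype(np.float32))
lat = cuda.to_device(np.random.rand(n).astype(np.float32))
c = cuda.to_device(np.array([0, 1, 0.02, 0, 0.03, 1], dtype=np.float32))
out_e = cuda.device_array(n, dtype=np.float32)
out_n = cuda.device_array(n, dtype=np.float32)

threads = 256
blocks = (n + threads - 1) // threads
rubbersheet[blocks, threads](lon, lat, c, out_e, out_n)
```

Each thread corrects a single sample, which is why a job that takes tens of minutes in serial CPU code collapses to seconds on a GPU.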

Once the data has been processed, the platform outputs an image with accurate geographic information that corresponds to Ordnance Survey grid coordinates.

Within 15 minutes of receiving the satellite data, the corrected imagery can be placed in the hands of emergency relief teams, giving them the knowledge to react effectively to a rapidly evolving situation on the ground.

Flood Protection for the Future

In the last decade, the U.K. has seen several of its wettest months on record. Notably, 2020 was the first year on record that fell in the top 10 for all three key weather rankings — warmest, wettest and sunniest. The Met Office predicts that severe flash flooding could be nearly five times more likely in 50 years’ time.

Technology like cuSAR enables researchers and emergency responders to monitor and react to disasters in a timely manner, protecting homes and businesses that are most vulnerable to worsening weather conditions.

Learn more about technology breakthroughs at GTC, running March 21-24.  

Feature image courtesy of Copernicus Sentinel data, processed by ESA CC BY-SA 3.0.


How Audio Analytic Is Teaching Machines to Listen

From active noise cancellation to digital assistants that are always listening for your commands, audio is perhaps one of the most important but often overlooked aspects of modern technology in our daily lives.

Audio Analytic has been using machine learning to enable a vast array of devices to make sense of the world of sound.

We spoke with Dr. Chris Mitchell, CEO and founder of Audio Analytic, about the challenges, and the fun, involved in teaching machines to listen.

Subscribe to the AI Podcast: Now Available on Amazon Music

You can now listen to the AI Podcast through Amazon Music.

You can also get the AI Podcast through iTunes, Google Podcasts, Google Play, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn.

If your favorite isn’t listed here, drop us a note.

You Might Also Like:

Art(ificial) Intelligence: Pindar Van Arman Builds Robots That Paint

Pindar Van Arman, an American artist and roboticist, designs painting robots that explore the differences between human and computational creativity. Since his first system in 2005, he has built multiple artificially creative robots. The most famous, Cloud Painter, was awarded first place at Robotart 2018.

Real or Not Real? Attorney Steven Frank Uses Deep Learning to Authenticate Art

Steven Frank is a partner at the law firm Morgan Lewis, specializing in intellectual property and commercial technology law. He’s also half of the husband-wife team that used convolutional neural networks to authenticate artistic masterpieces, including Da Vinci’s Salvator Mundi.

Researchers Chris Downum and Leszek Pawlowicz Use Deep Learning to Accelerate Archaeology

Researchers in the Department of Anthropology at Northern Arizona University are using GPU-based deep learning algorithms to categorize sherds — tiny fragments of ancient pottery.

Make the AI Podcast Better

Have a few minutes to spare? Fill out this listener survey. Your answers will help us make a better podcast.


Support for New NVIDIA RTX 3080 Ti, 3070 Ti Studio Laptops Now Available in February Studio Driver

Support for the new GeForce RTX 3080 Ti and 3070 Ti Laptop GPUs is available today in the February Studio driver.

Updated monthly, NVIDIA Studio drivers support NVIDIA tools and optimize the most popular creative apps, delivering added performance, reliability and speed to creative workflows.

Creatives will also benefit from the February Studio driver with enhancements to their existing creative apps as well as the latest app releases, including a major update to Maxon’s Redshift renderer.

The NVIDIA Studio platform is being rapidly adopted by aspiring artists, freelancers and creative professionals who seek to take their projects to the next level. The next generation of Studio laptops further powers their ambitions.

Creativity Unleashed — 3080 Ti and 3070 Ti Studio Laptops

Downloading the February Studio driver will help unlock massive time savings in essential creative apps, especially for 3080 Ti and 3070 Ti GPU owners.

Blender renders are exceptionally fast on GeForce RTX 3080 Ti and 3070 Ti GPUs with RT Cores powering hardware-accelerated ray tracing.

GeForce RTX 3080 Ti GPU laptops achieve up to 10x faster rendering speeds than the MacBook Pro 16 M1 Max.

Autodesk aficionados with a laptop equipped with a GeForce RTX 3080 Ti GPU can render the heaviest of scenes much faster, saving over an hour in this example.

The GeForce RTX 3080 Ti laptop GPU renders up to 7x faster in Autodesk Maya than the MacBook Pro 16 M1 Max.

Video production specialists working in REDCINE-X PRO gain the freedom to edit in real time at elevated frame rates, resulting in more accurate playback and far less time in the editing bay.

Edit RED RAW video faster with GeForce RTX 3080 Ti laptop GPU.

Creators can move at the speed of light with the 2022 lineup of Studio laptops and desktops.

MSI has announced the Creator Z16 and Creator Z17 Studio laptops, set for launch in March, with up to GeForce RTX 3080 Ti Laptop GPUs.

The MSI Z17 True Pixel display features QHD+ resolution, 100 percent DCI-P3 (typical) color gamut, factory-calibrated Delta-E < 2 out-of-the-box accuracy and True Color Technology.

ASUS’s award-winning ZenBook Pro Duo, coming later this year, sports a GeForce RTX 3060 GPU, plus a 15.6-inch 4K UHD OLED touchscreen and secondary 4K screen, unlocking numerous creative possibilities.

ASUS worked closely with third-party developers — including professional video-editing software developer Corel, with more to come — to optimize ScreenPad Plus for creative workflows and productivity.

The Razer Blade 17 and 15, available now, come fully loaded with a GeForce RTX 3080 Ti GPU and 32GB of memory — and they’re configurable with a beautiful 4K 144Hz, 100-percent DCI-P3 display. Razer Blade 14 will launch on Feb. 17.

The Razer Blade 17 features a stunning 4K UHD display with a 144Hz refresh rate for creative professionals who want their visions to truly come to life.

GIGABYTE’s newly refreshed AERO 16 and 17 Studio laptops, equipped with GeForce RTX 3070 Ti and 3080 Ti GPUs, are also now available.

The AERO 17 sports a 3mm ultra-thin bezel and X-Rite Pantone-certified 4K HDR display with Adobe RGB 100 percent color gamut.

These creative machines power RTX-accelerated tools, including NVIDIA Omniverse, Canvas and Broadcast, making next-generation AI technology even more accessible while reducing and removing tedious work.

Fourth-generation Max-Q technologies — including CPU Optimizer and Rapid Core Scaling — maximize creative performance in remarkably thin laptop designs.

Stay tuned for more Studio product announcements in the coming months.

Shift to Redshift RT 

Well-known to 3D artists, Maxon’s Redshift renderer is powerful, biased and GPU-accelerated — built to exceed the demands of contemporary high-end production rendering.

Redshift recently launched Redshift RT — a real-time rendering feature — in beta, letting 3D artists skip unnecessary wait times while renders finalize.

Redshift RT runs exclusively on NVIDIA RTX GPUs, bolstered by RT Cores, powering hardware-accelerated, interactive ray tracing.

Redshift RT, which is part of the current release, enables a more natural, intuitive way of working. It offers increased freedom to try different options for creating spectacular content, and is best used for scene editing and render previews. Redshift Production remains the renderer for the highest possible quality and control.

Redshift RT technology is integrated in the Maxon suite of creative apps including Cinema 4D, and is available for Autodesk 3ds Max and Maya, Blender, Foundry Katana and SideFX Houdini, as well as architectural products Vectorworks, Archicad and Allplan, dramatically speeding up all types of visual workflows.

With so many options, now’s the time to take a leap into 3D. Check out our Studio YouTube channel for standouts, tutorials, tips and tricks from industry-leading artists on how to get started.

Get inspired by the latest NVIDIA Studio Standouts video featuring some of our favorite digital art from across the globe.

Follow NVIDIA Studio on Facebook, Twitter and Instagram for the latest information on creative app updates, new Studio apps, creator contests and more. Get updates directly to your inbox by subscribing to the Studio newsletter.


Renovations to Stream About: Taiwan Studio Showcases Architectural Designs Using Extended Reality

Interior renovations have never looked this good.

TCImage, a studio based in Taipei, is showcasing compelling landscape and architecture designs by creating realistic 3D graphics and presenting them in virtual, augmented, and mixed reality — collectively known as extended reality, or XR.

For clients to get a better understanding of the designs, TCImage produces high-quality, 3D visualizations of the projects and puts them in a virtual environment. This lets users easily review and engage with the model in full scale, so they can get to the final design faster.

To keep up with client expectations and deliver quality content, the team at TCImage needs advanced tools and technologies that help them make design concepts feel like a reality.

With NVIDIA RTX technology, CloudXR, Deep Learning Super Sampling (DLSS) and NVIDIA Omniverse, TCImage is at the forefront of delivering stunning renders and XR experiences that allow clients to be virtually transported to the renovation of their dreams.

Bringing Design Details to Life With RTX

To make the realistic details stand out in a design, TCImage CEO Leo Chou and his team must create all 3D visuals in high resolution. During the design process, the team uses popular applications like Autodesk 3ds Max, Autodesk Revit, Trimble SketchUp and Unreal Engine 4. Chou initially tried using a consumer-level PC to render 3D graphics, but it would take up to three hours just to render a single frame of a 4K image.

Now, with an enterprise-grade PC powered by an NVIDIA RTX 6000 graphics card, he can render the same 4K frame within 30 minutes. NVIDIA RTX provides Chou with enhanced efficiency and performance, allowing him to achieve real-time rendering of final images.

“I was thrilled by the performance of RTX technology — it’s more powerful, allowing me to establish a competitive edge in the industry by making real-time ray tracing come true,” said Chou.

Looking Around Unbound With CloudXR

To show off these dazzling 3D visuals to customers, TCImage uses CloudXR.

With this extended reality streaming technology, Chou and his team can share projects inside an immersive and seamless experience, allowing them to efficiently communicate project designs to customers. The team can also present their designs from any location, as they can stream the untethered XR experiences from the cloud.

Built on RTX technology, CloudXR enables TCImage to stream high-resolution, real-time graphics and provide a more interactive experience for clients. NVIDIA DLSS also improves the XR experience by rendering more frames per second, which is especially helpful during the design review process.

With NVIDIA DLSS, TCImage can tap into the power of AI to boost frame rates and create sharp images for the XR environment. This helps the designers and clients see a preview of the 3D model with minimal latency as the user moves and rotates inside the environment.

“By using NVIDIA CloudXR, I can freely and easily present my projects, artwork and portfolio to customers anytime, anywhere while maintaining the best quality of content,” said Chou. “I can even edit the content in real time, based on the customers’ requirements.”

According to Chou, TCImage clients who have experienced the improved workflow were impressed by how much time and cost savings the new technology has provided. It’s also created more business opportunities for the firm.

Designing Buildings in Virtual Worlds

TCImage has started to explore design workflows in the virtual world with NVIDIA Omniverse, a platform for 3D simulation and design collaboration. In addition to using real-time ray tracing and DLSS in Omniverse, Chou played around with optimizing his virtual scenes with the Omniverse Create and Omniverse View applications.

“Omniverse is flexible enough to integrate with major graphics software, as well as allow instantaneous content updates and changes without any extra effort by our team,” said Chou.

In Omniverse Create, Chou can enhance creative workflows by connecting to leading applications to produce architectural designs. He also uses existing materials in Omniverse, such as grass brush samples, to create exterior landscapes and vegetation.

And with Omniverse View, Chou uses lighting tools such as Sun Study, which allows him to review designs with accurate sunlight.

Learn more about TCImage and check out Chou’s recent tutorial in Omniverse.


Train Spotting: Startup Gets on Track With AI and NVIDIA Jetson to Ensure Safety, Cost Savings for Railways

Preventable train accidents like the 1985 disaster outside Tel Aviv in which a train collided with a school bus, killing 19 students and several adults, motivated Shahar Hania and Elen Katz to help save lives with technology.

They founded Rail Vision, an Israeli startup that creates obstacle-detection and classification systems for the global railway industry.

The systems use advanced electro-optic sensors to alert train drivers and railway control centers when a train approaches potential obstacles — like humans, vehicles, animals or other objects — in real time, and in all weather and lighting conditions.

Rail Vision is a member of NVIDIA Inception — a program designed to nurture cutting-edge startups — and an NVIDIA Metropolis partner. The company uses the NVIDIA Jetson AGX Xavier edge AI platform, which provides GPU-accelerated computing in a compact and energy-efficient module, and the NVIDIA TensorRT software development kit for high-performance deep learning inference.

Pulling the Brakes in Real Time

A train’s braking distance — or the distance a train travels between when its brakes are pulled and when it comes to a complete stop — is usually so long that by the time a driver spots a railway obstacle, it could be too late to do anything about it.

For example, the braking distance for a train traveling 100 miles per hour is 800 meters, or about a half-mile, according to Hania. Rail Vision systems can detect objects on and along tracks from up to two kilometers, or 1.25 miles, away.

By sending alerts, both visual and acoustic, of potential obstacles in real time, Rail Vision systems give drivers over 20 seconds to respond and make decisions on braking.
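The margin follows directly from those numbers. A quick back-of-the-envelope check in Python (the function and names here are ours, purely for illustration):

```python
MPS_PER_MPH = 0.44704  # meters per second per mile per hour

def warning_margin_s(speed_mph: float, detection_m: float, braking_m: float) -> float:
    """Seconds between detecting an obstacle and the last moment to brake."""
    return (detection_m - braking_m) / (speed_mph * MPS_PER_MPH)

# 2 km detection range, 800 m braking distance, train at 100 mph
print(f"{warning_margin_s(100, 2_000, 800):.0f} s")  # ~27 s of decision time
```

At 100 mph, the roughly 1.2 kilometers between the detection range and the braking point buys about 27 seconds, consistent with the figure above.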

The systems can also be integrated with a train’s infrastructure to automatically apply brakes when an obstacle is detected, even without a driver’s cue.

“Tons of deep learning inference possibilities are made possible with NVIDIA GPU technology,” Hania said. “The main advantage of using the NVIDIA Jetson platform is that there are lots of goodies inside — compressors, modules for optical flow — that all speed up the embedding process and make our systems more accurate.”

Boosting Maintenance, in Addition to Safety

In addition to preventing accidents, Rail Vision systems help save operational time and costs spent on railway maintenance — which can be as high as $50 billion annually, according to Hania.

If a railroad accident occurs, four to eight hours are typically spent handling the situation — which prevents other trains from using the track, said Hania.

Rail Vision systems use AI to monitor the tracks and prevent such workflow slowdowns, or quickly alert operators when they do occur — giving them time to find alternate routes or plans of action.

The systems are scalable and deployable for different use cases — with some focused solely on these maintenance aspects of railway operations.

Watch a Rail Vision system at work.


How Smart Hospital Technology Can Help Cut Down on Medical Errors

Despite the feats of modern medicine, as many as 250,000 Americans die from medical errors each year — more than six times the number killed in car accidents.

Smart hospital AI can help avoid some of these fatalities in healthcare, just as computer vision-based driver assistance systems can improve road safety, according to AI leader Fei-Fei Li.

Whether through surgical instrument omission, a wrong drug prescription or a patient safety issue when clinicians aren’t present, “there’s just all kinds of errors that could be introduced, unintended, despite protocols that have been put together to avoid them,” said Li, computer science professor and co-director of the Stanford Institute for Human-Centered Artificial Intelligence, in a talk at the recent NVIDIA GTC. “Humans are still humans.”

By endowing healthcare spaces with smart sensors and machine learning algorithms, Li said, clinicians can help cut down medical errors and provide better patient care.

“We have to make sense of what we sense” with sensor data, said Li. “This brings in machine learning and deep learning algorithms that can turn sensed data into medical insights that are really important to keep our patients safe.”

To hear from other experts in deep learning and medicine, register free for the next GTC, running online March 21-24. GTC features talks from dozens of healthcare researchers and innovators harnessing AI for smart hospitals, drug discovery, genomics and more.

Sensor Solutions Bring Ambient Intelligence to Clinicians

Li’s interest in AI for healthcare delivery was sparked a decade ago when she was caring for a sick parent.

“The more I spent my time in ICUs and hospital rooms and even at home caring for my family, the more I saw the analogy between self-driving technology and healthcare delivery,” she said.

Her vision of sensor-driven “ambient intelligence,” outlined in a Nature paper, covers both the hospital and the home. It offers insights in operating rooms as well as the daily living spaces of individuals with chronic disease.

For example, ICU patients need a certain amount of movement to help their recovery. To ensure that patients are getting the right amount of mobility, researchers are developing smart sensor systems to automatically tag patient movements and understand their mobility levels while in critical care.

Another project used depth sensors and convolutional neural networks to assess whether clinicians were properly using hand sanitizer when entering and exiting patient rooms.

Outside of the hospital, as the global population continues to age, wearable sensors can help ensure seniors are aging healthily by monitoring mobility, sleep and medicine compliance.

The next challenge, Li said, is advancing computer vision to classify more complex human movement.

“We’re not content with these coarse activities like walking and sleeping,” she said. “What’s more important clinically are fine-grained activities.”

Protecting Patient, Caregiver Privacy 

When designing smart hospital technology, Li said, it’s important that developers prioritize privacy and security of patients, clinicians and caretakers.

“From a computer vision point of view, blurring and masking have become more and more important when it comes to human signals,” she said. “These are really important ways to keep private information and personal identity from being inadvertently leaked.”

In the field of data privacy, Li said, federated learning is another promising solution to protect confidential information.

Throughout the process of developing AI for healthcare, she said, developers must take a multi-stakeholder approach, involving patients, clinicians, bioethicists and government agencies in a collaborative environment.

“At the end of the day, healthcare is about humans caring for humans,” said Li. “This technology should not replace our caretakers, replace our families or replace our nurses and doctors. It’s here to augment and enhance humanity and give more dignity back to our patients.”

Watch the full talk on NVIDIA On-Demand, and sign up for GTC to learn about the latest in AI and healthcare.


Nearly 80 Percent of Financial Firms Use AI to Improve Services, Reduce Fraud

From the largest firms trading on Wall Street to banks providing customers with fraud protection to fintechs recommending best-fit products to consumers, AI is driving innovation across the financial services industry.

New research from NVIDIA found that 78 percent of financial services professionals state that their company uses accelerated computing to deliver AI-enabled applications through machine learning, deep learning or high performance computing.

The survey results, detailed in NVIDIA’s “State of AI in Financial Services” report, are based on responses from over 500 C-suite executives, developers, data scientists, engineers and IT teams working in financial services.

AI Prevents Fraud, Boosts Investments

With more than 70 billion real-time payment transactions processed globally in 2020, financial institutions need robust systems to prevent fraud and reduce costs. Accordingly, fraud detection involving payments and transactions was the top AI use case across all respondents at 31 percent, followed by conversational AI at 28 percent and algorithmic trading at 27 percent.

There was a dramatic increase in the percentage of financial institutions investing in AI use cases year-over-year. AI for underwriting increased fourfold, from 3 percent penetration in 2021 to 12 percent this year. Conversational AI jumped from 8 to 28 percent year-over-year, a 3.5x rise.

Meanwhile, AI-enabled applications for fraud detection, know your customer (KYC) and anti-money laundering (AML) all experienced growth of at least 300 percent in the latest survey. Nine of 13 use cases are now utilized by over 15 percent of financial services firms, whereas none of the use cases exceeded that penetration mark in last year’s report.

Future investment plans remain steady for top AI use cases, with enterprise investment priorities for the next six to 12 months marked in green.

Top Current AI Use Cases in Financial Services (Ranked by Industry Sector)

Green highlighted text signifies top AI use cases for investment in next six to 12 months.

Overcoming AI Challenges

Financial services professionals highlighted the main benefits of AI in yielding more accurate models, creating a competitive advantage and improving customer experience. Overall, 47 percent said that AI enables more accurate models for applications such as fraud detection, risk calculation and product recommendations.

However, there are challenges in achieving a company’s AI goals. Only 16 percent of survey respondents agreed that their company is spending the right amount of money on AI, and 37 percent believed “lack of budget” is the primary challenge in achieving their AI goals. Additional obstacles included too few data scientists, lack of data, and explainability, with a third of respondents listing each option.

Financial institutions such as Munich Re, Scotiabank and Wells Fargo have developed explainable AI models to explain lending decisions and construct diversified portfolios.

Biggest Challenges in Achieving Your Company’s AI Goals (by Role)

Cybersecurity, data sovereignty, data gravity and the option to deploy on-prem, in the cloud or using hybrid cloud are areas of focus for financial services companies as they consider where to host their AI infrastructure. These preferences are extrapolated from responses to where companies are running most of their AI projects, with over three-quarters of the market operating on either on-prem or hybrid instances.

Where Financial Services Companies Run Their AI Workloads

Executives Believe AI Is Key to Business Success

Over half of C-suite respondents agreed that AI is important to their company’s future success. The top total responses to the question “How does your company plan to invest in AI technologies in the future?” were:

  1. Hiring more AI experts (43 percent)
  2. Identifying additional AI use cases (36 percent)
  3. Engaging third-party partners to accelerate AI adoption (36 percent)
  4. Spending more on infrastructure (36 percent)
  5. Providing AI training to staff (32 percent)

However, only 23 percent of those surveyed believed their company has the capability and knowledge to move an AI project from research to production. This indicates the need for an end-to-end platform to develop, deploy and manage AI in enterprise applications.

Read the full “State of AI in Financial Services 2022” report to learn more.

Explore NVIDIA’s AI solutions and enterprise-level AI platforms driving the future of financial services.


Let Me Upgrade You: GeForce NOW Adds Resolution Upscaling and More This GFN Thursday

GeForce NOW is taking cloud gaming to new heights.

This GFN Thursday delivers an upgraded streaming experience as part of an update that is now available to all members. It includes new resolution upscaling options to make members’ gaming experiences sharper, plus the ability to customize streaming settings in session.

The GeForce NOW app is fully releasing on select LG TVs, following a successful beta. To celebrate the launch, for a limited time, those who purchase a qualifying LG TV will also receive a six-month Priority membership to kickstart their cloud gaming experience.

Additionally, this week brings five games to the GeForce NOW library.

Upscale Your Gaming Experience

The newest GeForce NOW update delivers new resolution upscaling options — including an AI-powered option for members with select NVIDIA GPUs.

New year, new options for January’s GeForce NOW update.

This feature, now available to all members with the 2.0.37 update, gives gamers with network bandwidth limitations or higher-resolution displays sharper graphics that match the native resolution of their monitor or laptop.

Resolution upscaling works by applying sharpening effects that reduce visible blurriness while streaming. It can be applied to any game and enabled via the GeForce NOW settings in native PC and Mac apps.

Three upscaling modes are now available. Standard is enabled by default and has minimal impact on system performance. Enhanced provides a higher quality upscale, but may cause some latency depending on your system specifications. AI Enhanced, available to members playing on PC with select NVIDIA GPUs and SHIELD TVs, leverages a trained neural network model along with image sharpening for a more natural look. These new options can be adjusted mid-session.
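NVIDIA hasn’t detailed the exact filters behind these modes, but classic unsharp masking gives a feel for how sharpening counteracts streaming blur. Here’s a toy NumPy sketch for a single grayscale frame, with invented parameters:

```python
import numpy as np

def unsharp_mask(frame: np.ndarray, strength: float = 0.6) -> np.ndarray:
    """Sharpen a grayscale frame by amplifying its high-frequency detail."""
    f = frame.astype(np.float32)
    padded = np.pad(f, 1, mode="edge")
    # A 3x3 box blur serves as the low-pass filter.
    blurred = sum(
        padded[i:i + f.shape[0], j:j + f.shape[1]]
        for i in range(3) for j in range(3)
    ) / 9.0
    detail = f - blurred  # the fine detail the blur removed
    return np.clip(f + strength * detail, 0, 255).astype(np.uint8)

# Example: sharpen a random 1080p test frame
frame = (np.random.rand(1080, 1920) * 255).astype(np.uint8)
sharp = unsharp_mask(frame)
```

The AI Enhanced mode swaps a fixed filter like this for a trained neural network, but the goal is the same: restore apparent detail without streaming more bits.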

Learn more about the new resolution upscaling options.

Stream Your Way With Custom Settings

The upgrade brings some additional benefits to members.

Custom streaming quality settings on the PC and Mac apps have been a popular way for members to take control of their streams — including bit rate, VSync and now the new upscaling modes. The update enables members to adjust some of these streaming quality settings in session using the GeForce NOW in-game overlay. Bring up the overlay by pressing Ctrl+G, then navigate to Settings > Gameplay to access them while streaming.

The update also comes with an improved web-streaming experience on play.geforcenow.com by automatically assigning the ideal streaming resolution for devices that are unable to decode at high streaming bitrates. Finally, there’s also a fix for launching directly into games from desktop shortcuts.

LG Has Got Game With the GeForce NOW App

LG Electronics, the first TV manufacturer to release the GeForce NOW app in beta, is now bringing cloud gaming to several LG TVs at full force.

Owners of LG 2021 4K TV models including OLED, QNED, NanoCell and UHD TVs can now download the fully launched GeForce NOW app in the LG Content Store. The experience requires a gamepad and gives gamers instant access to nearly 35 free-to-play games, like Apex Legends and Destiny 2, as well as more than 800 PC titles from popular digital stores like Steam, Epic Games Store, Ubisoft Connect and Origin.

The GeForce NOW app on LG OLED TVs delivers responsive gameplay and gorgeous, high-quality graphics at 1080p and 60 frames per second. On these select LG TVs, with nothing more than a gamepad, you can enjoy stunning ray-traced graphics and AI technologies with NVIDIA RTX ON. Learn more about support for the app for LG TVs on the system requirements page under LG TV.

Get your game on directly through an LG TV with a six-month GeForce NOW Priority membership.

In celebration of the app’s full launch and the expansion of devices supported by GeForce NOW, qualifying LG purchases from Feb. 1 to March 27 in the United States come bundled with a sweet six-month Priority membership to the service.

Priority members experience legendary GeForce PC gaming across all of their devices, as well as benefits including priority access to gaming servers, extended session lengths and RTX ON for cinematic-quality in-game graphics.

To collect a free six-month Priority membership, purchase a qualifying LG TV and submit a claim. Upon claim approval, you’ll receive a GeForce NOW promo code via email. Create an NVIDIA account for free or sign in to your existing GeForce NOW account to redeem the gifted membership.

This offer is available to those who purchase applicable 2021 model LG 4K TVs in select markets during the promotional period. Current GeForce NOW promotional members are not eligible for this offer. Availability and deadline to claim free membership varies by market. Consult LG’s official country website, starting Feb. 1, for full details. Terms and conditions apply.

It’s Playtime

Explore a massive open world and choose your own path in Mortal Online 2.

Start your weekend with five titles coming to the cloud this week.

While you kick off your weekend with gaming fun, we’ve got a question for you this week.


Hatch Me If You Can: Startup’s Sorting Machines Use AI to Protect Healthy Fish Eggs

Fisheries collect millions upon millions of fish eggs, protecting them from predators to increase fish yield and support the propagation of endangered species — but an issue with gathering so many eggs at once is that those infected with parasites can put healthy ones at risk.

Jensorter, an Oregon-based startup, has created AI-powered fish egg sorters that can rapidly identify healthy versus unhealthy eggs. The machines, built on the NVIDIA Jetson Nano module, can also detect egg characteristics such as size and fertility status.

The devices then automatically sort the eggs based on these characteristics, allowing Jensorter’s customers in Alaska, the Pacific Northwest and Russia to quickly separate viable eggs from unhealthy ones — and protect them accordingly.

Jensorter is a member of NVIDIA Inception, a program that nurtures cutting-edge startups revolutionizing industries with advancements in AI, data science, high performance computing and more.

Picking Out the Good Eggs

According to Curt Edmondson, patent counsel and CTO of Jensorter, many fisheries aim to quickly dispose of unhealthy eggs to lower the risk of infecting healthy ones.

Using AI, Jensorter machines look at characteristics like color to discern an egg’s health status and determine whether it’s fertilized — at a speed of about 30 milliseconds per egg.
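Jensorter hasn’t published its model, but a toy color-threshold classifier hints at the underlying idea. In production, a trained network running on the Jetson would replace these invented thresholds:

```python
import numpy as np

def classify_egg(rgb_crop: np.ndarray) -> str:
    """Label one egg crop (H x W x 3, uint8) from its mean color."""
    r, g, b = rgb_crop.reshape(-1, 3).mean(axis=0)
    chroma = (max(r, g, b) - min(r, g, b)) / max(r, g, b, 1.0)
    # Vivid orange (red > green > blue, strongly saturated): likely viable.
    if r > g > b and chroma > 0.35:
        return "healthy"
    return "suspect"  # pale or off-color eggs get routed out

# Example: a synthetic orange egg crop
egg = np.zeros((64, 64, 3), dtype=np.uint8)
egg[..., 0], egg[..., 1], egg[..., 2] = 230, 120, 40
print(classify_egg(egg))  # healthy
```

At roughly 30 milliseconds per egg, even a simple per-crop decision like this leaves headroom for the heavier neural network inference the real machines run.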

“Our fish egg sorters are achieving a much higher accuracy with the addition of AI powered by NVIDIA Jetson, which is allowing us to create advanced capabilities,” Edmondson said.

The startup offers several machines, each tailored to varying volumes of eggs to be sorted. The Model JH device, optimal for egg volumes of three to 10 million, can sort nearly 200,000 eggs per hour, eliminating the slow and laborious process of hand-picking.

“Using AI to capture and process images of eggs in real time could have great value over the long term,” Edmondson said. “If hatcheries come together and centralize their images in a database, we could identify patterns of egg characteristics that lead to healthy eggs.”

This could help propagate salmon and trout, species that play important roles in their ecosystems and are common food sources for humans, and which are on the decline in many areas, he added.

The Oregon Hatchery Research Center recently used Jensorter devices to conduct an alpha test examining whether smaller eggs lead to healthier fish. In the spring, the center will use the machines to proceed with beta testing in hatcheries, before publishing study results.

Jensorter also plans to create next-generation sorters that are faster still and can detect, count and separate eggs based on their sex, number of zygotes and other metrics that would be useful to fisheries.

Watch a tutorial on how Jensorter equipment works and learn more about NVIDIA Inception.
