NVIDIA Partners With NHS Trusts to Deploy AI Platform in UK Hospitals

A consortium of 10 trusts within the National Health Service — the publicly funded healthcare system in England — is now deploying the MONAI-based AIDE platform across four of its hospitals, providing AI-enabled disease-detection tools to healthcare professionals serving 5 million patients a year.

AIDE, short for AI Deployment Engine, is expected to be rolled out next year across 11 NHS hospitals serving 18 million patients, bringing AI capabilities to clinicians. It’s built on MONAI, an open-source medical imaging AI framework co-developed by NVIDIA and the AI Centre, which allows AI applications to interface with hospital systems.

Together, MONAI and AIDE enable safe and effective validation, deployment and evaluation of medical imaging AI models, which the NHS will apply in diagnosing and treating cancers, stroke, dementia and other conditions. The platform is being deployed at the following facilities: Guy’s and St Thomas’, King’s College Hospital, East Kent Hospitals University and University College London Hospitals NHS Foundation Trusts.

“Deployment of this infrastructure for clinical AI tools is a hugely exciting step in integrating AI into healthcare services,” said James Teo, professor of neurology and data science at King’s College Hospital NHS Foundation Trust. “These platforms will provide a scalable way for clinicians to deploy healthcare AI tools to support decision-making to improve the speed and precision of patient care. This is the start of a digital transformation journey with strong, safe and open foundations.”

MONAI Making Hospital Integration Easier

Introduced in 2019, MONAI is reducing the complexity of medical workflows from R&D to the clinic. It allows developers to easily build and deploy AI applications, producing models that are ready for clinical integration and that make it easier to interpret medical exams and unlock new levels of knowledge about patients.

MONAI provides deep learning infrastructure and workflows optimized for medical imaging. With more than 650,000 downloads, it’s used by leading healthcare institutions such as Guy’s and St Thomas’ Hospital and King’s College Hospital in the U.K. for its ability to harness the power and potential of medical imaging data, simplifying and streamlining the process of building AI models.
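
For a sense of what that looks like in practice, here is a minimal sketch of the MONAI pattern: define a preprocessing chain, build a network and run inference. The configuration and data below are illustrative placeholders, not the models the NHS is deploying:

```python
import torch
from monai.networks.nets import UNet
from monai.transforms import Compose, EnsureChannelFirst, LoadImage, ScaleIntensity

# Typical preprocessing chain: load a NIfTI scan, move channels first,
# normalize intensities. (The path would point at a real exported scan.)
preprocess = Compose([
    LoadImage(image_only=True),
    EnsureChannelFirst(),
    ScaleIntensity(),
])

# A small 3D U-Net; a clinical model would be trained and validated first.
model = UNet(
    spatial_dims=3,
    in_channels=1,
    out_channels=2,             # e.g. background vs. lesion
    channels=(16, 32, 64, 128),
    strides=(2, 2, 2),
)
model.eval()

# Stand-in volume so the sketch runs end to end; in practice this would be
# preprocess("scan.nii.gz") with a batch dimension added.
image = torch.rand(1, 1, 96, 96, 96)
with torch.no_grad():
    logits = model(image)        # (1, 2, 96, 96, 96)
    mask = logits.argmax(dim=1)  # per-voxel class prediction
```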

“Across the healthcare ecosystem, researchers, hospitals and startups are realizing the power of incorporating a streamlined AI pipeline into their work,” said Haris Shuaib, AI transformation lead at the AI Centre. “The open-source MONAI ecosystem is standardizing hundreds of AI algorithms for maximum interoperability and impact, enabling their deployment in just a few weeks instead of three-to-six months.”

Built in collaboration with the AI Centre for Value Based Healthcare — a consortium of universities, hospitals and industry partners led by King’s College London and Guy’s and St Thomas’ NHS Foundation Trust — AIDE brings the capabilities of AI to clinicians. This solution equips clinicians with improved information about patients, making healthcare data more accessible and interoperable, in order to improve patient care.

The AI Centre has already developed algorithms to improve COVID-19 and breast cancer diagnosis, brain tumor and stroke detection, and dementia risk assessment. AIDE connects approved AI algorithms to a patient’s medical record seamlessly and securely, with the data never leaving the hospital trust.

Once the clinical data has been analyzed, the results are sent back to the electronic healthcare record to support clinical decision-making. This provides another valuable data point for clinical multidisciplinary teams when reviewing patients’ cases. It’s hoped that AIDE can support speeding up this process to benefit patients.

“The AI Centre has done invaluable work towards integrating AI into national healthcare. Deploying MONAI is a critical milestone in our journey to enable the use of safe and robust AI innovations within the clinic,” said Professor Sebastien Ourselin, deputy director of the AI Centre. “This could only be achieved through our strong partnerships between academic and industry leaders like NVIDIA.”

The code for AIDE will be made open source and published on GitHub on Dec. 7. AIDE will be displayed in the South Hall of the McCormick Place convention center in Chicago as part of the RSNA Imaging AI in Practice demonstration.

Get started with MONAI and watch the NVIDIA RSNA special address.

Turn Black Friday Into Green Thursday With New GeForce NOW Deal

Black Friday is now Green Thursday with a great deal on GeForce NOW this week.

For a limited time, get a free $20-value GeForce NOW membership gift card with every purchase of a $50-value GeForce NOW membership gift card. Treat yourself and a buddy to high-performance cloud gaming — there’s never been a better time to share the love of GeForce NOW.

Plus, kick off a gaming-filled weekend with four new titles joining the GeForce NOW library.

Instant Streaming, Instant Savings

For one week only, from Nov. 23-Dec. 2, purchase a $50-value gift card — good toward a three-month RTX 3080 membership or a six-month Priority membership — and get a bonus $20-value GeForce NOW membership gift card for free, which is good toward a one-month RTX 3080 membership or a two-month Priority membership.

Recipients will be able to redeem these gift cards for the GeForce NOW membership level of their choice. The $20-value free gift card will be delivered as a digital code — providing instant savings for instant streaming. Learn more.

Green is the new black with this time-limited Black Friday deal.

With a paid membership, gamers get access to stream over 1,400 PC games with longer gaming sessions and real-time ray tracing for supported games across nearly all devices, even those that aren’t game ready. Priority members can stream up to 1080p at 60 frames per second, and RTX 3080 members can stream up to 4K at 60 FPS or 1440p at 120 FPS.

This special offer is valid on $50-value digital or physical gift card purchases, making it a perfect stocking stuffer or last-minute gift. Snag the deal to make Black Friday shopping stress-free this year.

Time to Play

Evil never sleeps … but it bleeds!

The best way to celebrate a shiny new GeForce NOW membership is with the new games available to stream this GFN Thursday. Start out with Evil West from Focus Entertainment, a vampire-hunting third-person action game set in a fantasy version of the Old West. Play as a lone hunter or co-op with a buddy to explore and eradicate the vampire threat while upgrading weapons and tools along the way.

Check out this week’s new games here:

  • Evil West (New release on Steam)
  • Ship of Fools (New release on Steam)
  • Crysis 2 Remastered (Steam)
  • Crysis 3 Remastered (Steam)

Before you dig into your weekend gaming, we’ve got a question for you. Let us know your answer on Twitter or in the comments below.

What Is a Smart Hospital?

Smart hospitals — which utilize data and AI insights to facilitate decision-making at each stage of the patient experience — can provide medical professionals with insights that enable better and faster care.

A smart hospital uses data and technology to accelerate and enhance the work healthcare professionals and hospital management are already doing, such as tracking hospital bed occupancy, monitoring patients’ vital signs and analyzing radiology scans.

What’s the Difference Between a Smart Hospital and a Traditional Hospital? 

Hospitals are continuously generating and collecting data, much of which is now digitized. This creates an opportunity for them to apply such technologies as data analytics and AI for improved insights.

Data that was once stored as a paper file with a patient’s medical history, lab results and immunization information is now stored as electronic health records, or EHRs. Digital CT and MRI scanners, as well as software including the PACS medical imaging storage system, are replacing analog radiology tools. And connected sensors in hospital rooms and operating theaters can record multiple continuous streams of data for real-time and retrospective analysis.
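These digital radiology systems typically exchange images in the DICOM format, which bundles pixel data with standardized metadata. A rough illustration, assuming the open-source pydicom library and a placeholder file name:

```python
import pydicom

# Read one image exported from a PACS archive (the path is a placeholder).
ds = pydicom.dcmread("ct_slice_0001.dcm")

# Standardized header fields travel with every DICOM image.
print(ds.Modality)        # e.g. "CT"
print(ds.StudyDate)       # when the study was acquired
print(ds.PixelSpacing)    # physical pixel size in millimeters

# The pixels themselves come back as a NumPy array, ready for analysis.
pixels = ds.pixel_array
print(pixels.shape, pixels.dtype)
```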

As hospitals transition to these digital tools, they’re poised to make the shift from a regular hospital to a smart hospital — one that not only collects data, but also analyzes it to provide valuable, timely insights.

Natural language processing models can rapidly pull insights from complex pathology reports to support cancer care. Data science can monitor emergency room wait times to resolve bottlenecks. AI-enabled robotics can assist surgeons in the operating room. And video analytics can detect when hand sanitizer supplies are running low or a patient needs attention — such as detecting the risk of falls in the hospital or at home.

What Are Some Benefits of a Smart Hospital?

Smart hospital technology benefits healthcare systems, medical professionals and patients in the following ways: 

  • Healthcare providers: Smart hospital data can be used to help healthcare facilities optimize their limited resources, increasing operational efficiency for a better patient-centric approach. Sensors can monitor patients when they’re alone in the room. AI algorithms can help inform which patients should be prioritized based on the severity of their case. And telehealth solutions can help deliver care to patients outside of hospital visits.
  • Clinicians: Smart hospital tools can enable doctors, nurses, medical imaging technicians and other healthcare experts to spend more time focusing on patient care by taking care of routine or laborious tasks, such as writing notes about each patient interaction, segmenting anatomical structures in an MRI or converting doctor’s notes into medical codes for insurance billing. They can also aid clinical decision-making with AI algorithms that provide a second opinion or triage recommendation for individual patients based on historical data.
  • Patients: Smart hospital technology can bring health services closer to the goal of consistent, high-quality patient care — anywhere in the world, from any doctor. Clinicians vary in skill level, areas of expertise, access to resources and time available per patient. By deploying AI and robotics to monitor patterns and automate time-consuming tasks, smart hospitals can allow clinicians to focus on interacting with their patients for a better experience.

How Can I Make My Hospital Smart? 

Running a smart hospital requires an entire ecosystem of hardware and software solutions working in harmony with clinician workflows. To accelerate and improve patient care, every application, device, sensor and AI model in the system must share data and insights across the institution.

Think of the smart hospital as an octopus. Its head is the organization’s secure server that stores and processes the entire facility’s data. Each of its tentacles is a different department — emergency room, ICU, operating room, radiology lab — covered in sensors (octopus suckers) that take in data from their surroundings.

If each tentacle operated in a silo, it would be impossible for the octopus to take rapid action across its entire body based on the information sensed by a single arm. Every tentacle sends data back to the octopus’ central brain, enabling the creature to flexibly respond to its changing environment.

In the same way, the smart hospital is a hub-and-spoke model, with sensors distributed across a facility that can send critical insights back to a central brain, helping inform facility-wide decisions. For instance, if camera feeds in an operating room show that a surgical procedure is almost complete, AI would alert staff in the recovery room to be ready for the patient’s arrival.

To power smart hospital solutions, medical device companies, academic medical centers and startups are turning to NVIDIA Clara, an end-to-end AI platform that integrates with the entire hospital network — from medical devices running real-time applications to secure servers that store and process data in the long term. It supports edge, data center and cloud infrastructure, numerous software libraries, and a global partner ecosystem to power the coming generation of smart hospitals.

Smart Hospital Operations and Patient Monitoring

A bustling hospital has innumerable moving parts — patients, staff, medicine and equipment — presenting an opportunity for AI automation to optimize operations around the facility.

While a doctor or nurse can’t be at a patient’s side at every moment of their hospital stay, a combination of intelligent video analytics and other smart sensors can closely monitor patients, alerting healthcare providers when the person is in distress and needs attention.

In an ICU, for instance, patients are connected to monitoring devices that continuously collect vital signs. Many of these devices beep with various alerts, and the constant noise can lead healthcare practitioners to overlook the alarm of a single sensor.

By instead aggregating the streaming data from multiple devices into a single feed, AI algorithms can analyze the data in real time, helping more quickly detect if a patient’s condition takes a sudden turn for the better or worse.
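
A simplified sketch of the idea, with invented vital names and thresholds rather than any clinical algorithm, might track a rolling baseline per vital sign and flag sharp deviations:

```python
from collections import deque
from statistics import mean, stdev

WINDOW = 60  # most recent readings kept per vital sign

class VitalsMonitor:
    """Aggregates streams from several bedside devices into one feed
    and flags sudden departures from a patient's recent baseline."""

    def __init__(self):
        self.history = {}  # vital name -> recent readings

    def update(self, vital: str, value: float) -> bool:
        window = self.history.setdefault(vital, deque(maxlen=WINDOW))
        window.append(value)
        if len(window) < 10:       # not enough baseline yet
            return False
        baseline, spread = mean(window), stdev(window)
        # Flag readings more than 3 standard deviations from baseline.
        return spread > 0 and abs(value - baseline) > 3 * spread

monitor = VitalsMonitor()
if monitor.update("heart_rate", 142.0):
    print("Alert: heart rate deviates sharply from recent baseline")
```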

The Houston Methodist Institute for Academic Medicine is working with Mark III Systems, an Elite member of the NVIDIA Partner Network, to deploy an AI-based tool called DeepStroke that can detect stroke symptoms in triage more accurately and earlier based on a patient’s speech and facial movements. By integrating these AI models into the emergency room workflow, the hospital can more quickly identify the proper treatment for stroke patients, helping ensure clinicians don’t miss patients who would potentially benefit from life-saving treatments.

Using enterprise-grade solutions from Dell and NVIDIA — including GPU-accelerated Dell PowerEdge servers, the NVIDIA Fleet Command hybrid cloud system and the DeepStream software development kit for AI streaming analytics — Inception startup Artisight manages a smart hospital network including over 2,000 cameras and microphones at Northwestern Medicine.

One of Artisight’s models alerts nurses and physicians to patients at risk of harm. Another system, based on indoor positioning system data, automates clinic workflows to maximize staff productivity and improve patient satisfaction. A third detects preoperative, intraoperative and postoperative events to coordinate surgical throughput.

These systems make it easy to add functionality regardless of location: an AI-backed sensor network that monitors hospital rooms to prevent a patient from falling can also detect when hospital supplies are running low, or when an operating room needs to be cleaned. The systems even extend beyond the hospital walls via Artisight’s integrated teleconsult tools to monitor at-risk patients at home.

The last key element of healthcare operations is medical coding, the process of turning a clinician’s notes into a set of alphanumeric codes representing every diagnosis and procedure. These codes are of particular significance in the U.S., where they form the basis for the bills that doctors, clinics and hospitals submit to stakeholders including insurance providers and patients.

Inception startup Fathom has developed AI models to automate the painstaking process of medical coding, reducing costs while increasing speed and precision. Founded in 2016, the company works with the nation’s largest health systems, billing companies and physician groups, coding over 20 million patient encounters annually.

Medical Imaging in Smart Hospitals

Deep learning first gained its popularity as a tool for identifying objects in images. This is one of the earliest healthcare industry uses for the technology, too. There are dozens of AI models with regulatory approval in the medical imaging space, helping radiology departments in smart hospitals accelerate the analysis of CT, MRI and X-ray data.

AI can pre-screen scans, flagging areas that require a radiologist’s attention to save time — giving them more bandwidth to look at additional scans or explain results to patients. It can move critical cases like brain bleeds to the top of a radiologist’s worklist, shortening the time to diagnose and treat life-threatening cases. And it can enhance the resolution of radiology images, allowing clinicians to reduce the necessary dosage per patient.
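
Conceptually, the worklist piece is a priority queue. A toy sketch, with invented accession numbers and urgency scores, shows how AI-assigned urgency could reorder a reading queue:

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Study:
    priority: int                     # lower number = more urgent
    accession: str = field(compare=False)
    finding: str = field(compare=False)

worklist: list[Study] = []

# An AI pre-screen assigns urgency; these values are invented for illustration.
heapq.heappush(worklist, Study(3, "ACC-1041", "routine chest X-ray"))
heapq.heappush(worklist, Study(1, "ACC-1042", "suspected brain bleed"))
heapq.heappush(worklist, Study(2, "ACC-1043", "possible pneumothorax"))

# The radiologist always reads the most urgent study first.
while worklist:
    study = heapq.heappop(worklist)
    print(study.accession, study.finding)
```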

Leading medical imaging companies and researchers are using NVIDIA technology to power next-generation applications that can be used in smart hospital environments.

Siemens Healthineers developed deep learning-based autocontouring solutions, enabling precise contouring of organs at risk in radiation therapy.

And Fujifilm Healthcare uses NVIDIA GPUs to power its Cardio StillShot software, which conducts precise cardiac imaging during a CT scan. To accelerate its work, the team used software including the NVIDIA Optical Flow SDK to estimate pixel-level motion and NVIDIA Nsight Compute to optimize performance.

Startups in NVIDIA Inception, too, are advancing medical imaging workflows with AI, such as Shanghai-based United Imaging Intelligence. The company’s uAI platform empowers devices, doctors and researchers with full-stack, full-spectrum AI applications, covering imaging, screening, follow-up, diagnosis, treatment and evaluation. Its uVision intelligent scanning system runs on the NVIDIA Jetson edge AI platform.

Learn more about startups using NVIDIA AI for medical imaging applications.

Digital and Robotic Surgery in Smart Hospitals

In a smart hospital’s operating room, intelligent video analytics and robotics are embedded to take in data and provide AI-powered alerts and guidance to surgeons.

Medical device developers and startups are working on tools to advance surgical training, help surgeons plan procedures ahead of time, provide real-time support and monitoring during an operation, and aid in post-surgery recordkeeping and retrospective analysis.

Paris-based robotic surgery company Moon Surgical is designing Maestro, an accessible, adaptive surgical-assistant robotics system that works with the equipment and workflows that operating rooms already have in place. The startup has adopted NVIDIA Clara Holoscan to save time and resources, helping compress its development timeline.

Activ Surgical has selected Holoscan to accelerate development of its AI and augmented-reality solution for real-time surgical guidance. The Boston-based company’s ActivSight technology allows surgeons to view critical physiological structures and functions, like blood flow, that cannot be seen with the naked eye.

And London-based Proximie will use Holoscan to enable telepresence in the operating room, bringing expert surgeons and AI solutions into each procedure. By integrating this information into surgical imaging systems, the company aims to reduce surgical complication rates, improving patient safety and care.

Telemedicine — Smart Hospital Technology at Home

Another part of smart hospital technology is ensuring patients who don’t need to be admitted to the hospital can receive care from home through wearables, smartphone apps, video appointments, phone calls and text-based messaging tools. Tools like these reduce the burden on healthcare facilities — particularly with the use of AI chatbots that can communicate effectively with patients.

Natural language processing AI is powering intelligent voice assistants and chatbots for telemedicine at companies like Curai, a member of the NVIDIA Inception global network of startups.

Curai is applying GPU-powered AI to connect patients, providers and care teams via a chat-based application. Patients can input information about their conditions, access their medical profiles and chat with providers 24/7. The app also supports providers by offering diagnostic and treatment suggestions based on Curai’s deep learning algorithms.

Curai’s main areas of AI focus have been natural language processing (for extracting data from medical conversations), medical reasoning (for providing diagnosis and treatment recommendations), and image processing and classification (largely for images uploaded by patients).

Virtual care tools like Curai’s can be used for preventative or convenient care at any time, or after a patient’s doctor visit to ensure they’re responding well to treatment.

Medical Research Using Smart Hospital Data 

The usefulness of smart hospital data doesn’t end when a patient is discharged — it can inform years of research, becoming part of an institution’s database that helps improve operational efficiency, preventative care, drug discovery and more. With collaborative tools like federated learning, the benefits can go beyond a single medical institution and improve research across the healthcare field globally.

Neurosurgical Atlas, the largest association of neurosurgeons in the world, aims to advance the care of patients suffering from neurosurgical disorders through new, efficient surgical techniques. The Atlas includes a library of surgery recordings and simulations that give neurosurgeons unprecedented understanding of potential pitfalls before conducting an operation, creating a new standard for technical excellence. In the future, Neurosurgical Atlas plans to enable digital twin representations specific to individual patients.

The University of Florida’s academic health center, UF Health, has used digital health records representing more than 50 million interactions with 2 million patients to train GatorTron, a model that can help identify patients for lifesaving clinical trials, predict and alert health teams about life-threatening conditions, and provide clinical decision support to doctors.

The electronic medical records were also used to develop SynGatorTron, a language model that can generate synthetic health records to help augment small datasets — or enable AI model sharing while preserving the privacy of real patient data.

In Texas, MD Anderson is harnessing hospital records for population data analysis. Using the NVIDIA NeMo toolkit for natural language processing, the researchers developed a conversational AI platform that performs genomic analysis with cancer omics data — including survival analysis, mutation analysis and sequencing data processing.

Learn more about smart hospital technology and subscribe to NVIDIA healthcare news.

Creators and Artists Take the Spotlight This Week ‘In the NVIDIA Studio’

Editor’s note: This post is part of our weekly In the NVIDIA Studio series, which celebrates featured artists, offers creative tips and tricks, and demonstrates how NVIDIA Studio technology improves creative workflows. We’re also deep diving on new GeForce RTX 40 Series GPU features, technologies and resources, and how they dramatically accelerate content creation.

In the NVIDIA Studio, artists have sparked the imagination of countless creators, inspiring them to exceed their creative ambitions and do their best work.

We’re showcasing the work of these artists — who specialize in 3D modeling, AI, video editing and broadcasting — this week, as well as how the new GeForce RTX 40 Series line of GPUs makes the creative process easier and more efficient.

These powerful graphics cards are backed by NVIDIA Studio — an ecosystem of creative app optimizations, dedicated NVIDIA Studio Drivers and NVIDIA AI-powered apps. Check out the latest GeForce RTX 40 Series GPUs and NVIDIA Studio laptops for the best performance in content creation, gaming and more.

In addition, the community around NVIDIA Omniverse, a 3D design collaboration and simulation platform that enables artists to connect their favorite 3D tools for more seamless workflows, is partnering with NVIDIA Studio on the #WinterArtChallenge. Join the Omniverse team live on Twitch as they create a scene and answer questions on Wednesday, Nov. 30, at 11 a.m. PT. Add the event to your calendar.

Finally, just in time for the holiday season, check out our latest NVIDIA Studio Standout featuring whimsical, realistic, food-inspired artwork and the artists behind it. We dare you not to get hungry.

GeForce RTX 4080 GPU Delivers Impressive Performance

Members of the press and content creators have been putting the new GeForce RTX 4080 GPU through a wide variety of creative workflows. Here’s a sampling of their reviews:

The new GeForce RTX 4080 GPU.

“The addition of AV1 encoding means that any 40-series GPU—and I mean any of them—is going to make your PC substantially faster at this kind of rendering compared to any of the other GPUs we’ve tested here.” Linus Tech Tips

“If you are using a non-RTX GPU, you are missing out on a massive suite of applications and support to give you limitless possibilities as a streamer, YouTuber, podcaster, artist, animator and more.” CG Magazine

“For 3D animators, there’s nothing better than a GeForce RTX 4080 in combo with NVIDIA STUDIO drivers and future DLSS 3 support for Twinmotion, V-Ray, Unity, Cinema 4D, Arnold, Adobe Designer, 3D Painter and 3D Sampler.” Tuttotech.net

“As far as I’m concerned this thing is a no-brainer for anyone who does graphic intensive work, works in video production, or does high end streaming.” Jay Lippman

“Overall, the RTX 4080 16GB Founders Edition Graphics Card is an excellent choice for Content Creators and CG Artists who have been desperately looking for an upgrade over the past 2-3 years! For 3D GPU Rendering Workloads, in particular, we’re happy to finally see a GPU that deserves a recommendation.” CG Director

“As far as the 4080 goes for creative individuals, I’ve got no doubt that if you’re rendering 3D models or 4K video, you’re going to have a fantastic time with this GPU. There’s also now dual AV1 video encoders on board which means that you can stream at higher resolutions with the likes of Discord.” Press Start

Pick up the GeForce RTX 4080 GPU or a prebuilt system today using our Product Finder.

Character Creator Pablo Muñoz Gómez

Concept artist Pablo Muñoz Gómez is equally passionate about helping digital artists — teaching 3D classes and running the ZBrush Guides website — as he is about his own creative specialties: concept and character artistry.

Linework refinement from 2D to 3D in ZBrush.

HARVESTERS is a demo concept Gómez created to illustrate a complete ZBrush workflow for his students. He refined his render’s linework, blocked in a color palette, and finished with a Z-depth pass to create a depth-of-field effect.

Final shading in ‘HARVESTERS.’

Gómez also excels in photorealistic 3D character modeling, as evidenced in his piece Tadpole.

Gómez often uses Adobe Substance 3D Painter to apply colors and materials directly to his 3D models. NVIDIA Iray technology in the viewport enables Gómez to edit in real time and use ray-traced baking for faster rendering speeds — all accelerated by his hardware. Artists can expect even faster asset baking with GeForce RTX 40 Series GPUs.

For further customization, Gómez prefers to download assets from the vast Substance 3D Asset library and import them into Substance 3D Sampler, adjusting a few sliders to create photorealistic materials. RTX-exclusive interactive ray tracing lets Gómez apply realistic effects in real time. Powered by GeForce RTX 40 Series GPUs, these tasks can be completed even faster than with the previous generation.

Smooth movement in the Adobe Substance 3D Stager viewport, thanks to RTX GPU acceleration.

With GeForce RTX 40 Series GPUs, 3D artists like Gómez can now build scenes in fully ray-traced environments with accurate physics and realistic materials — all in real time, without proxies, in the NVIDIA Omniverse beta.

DLSS 3 technology uses the AI-powered RTX Tensor Cores and a new Optical Flow Accelerator to generate additional frames and dramatically increase frames per second (FPS). This improves smoothness and speeds up movement in the viewport. NVIDIA is also working with popular 3D apps Unity and Unreal Engine to integrate DLSS 3.

Gómez is the founder of ZBrush Guides and the 3D Concept Artist academy. View his courses, tutorials, projects and more on his website.

Karen X. Cheng Has an AI on the Future

Karen X. Cheng is an award-winning director on the forefront of using AI to design amazing visuals. Her innovative work produces eye-catching effects in social media videos for brands like Adobe, Beats by Dre and Instagram. Her videos have garnered over 500 million views.

Cheng was quick to embrace the AI-powered NVIDIA Canvas app — a free download available to anyone with a GeForce RTX GPU. With it, she easily created and shared photorealistic imagery. NVIDIA Canvas is powered by the GauGAN2 AI model and accelerated by Tensor Cores found exclusively on RTX GPUs.

Use AI to turn simple brushstrokes into realistic landscape images with NVIDIA Canvas.

The app uses AI to interpret basic lines and shapes, translating them into realistic landscape images and textures. Artists of all skill levels can use this advanced AI to quickly turn simple brushstrokes into realistic images, speeding up concept exploration and allowing for increased iteration. This frees up valuable time to visualize ideas.

Lately, Cheng’s focus has been on Instant NeRF technology, which uses AI models to transform 2D images into high-resolution 3D scenes nearly instantly.

She and her collaborators have been experimenting with it to bring 2D scenes to life in 3D, and the result was an extraordinary mirror NeRF complete with clouds and stunning camera movement.

Cheng and team also created a sidewalk NeRF that garnered over 1 million views on Instagram.

A NeRF is a computationally intensive algorithm that processes complex scenes. The new line of GeForce RTX 40 Series GPUs is a creator’s best bet to navigate these workflows and finalize artwork as quickly as possible.
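
To see why, consider the volume-rendering quadrature at the heart of NeRF (Mildenhall et al., 2020), which a renderer must evaluate along every pixel’s camera ray:

$$\hat{C}(\mathbf{r}) = \sum_{i=1}^{N} T_i \left(1 - e^{-\sigma_i \delta_i}\right) \mathbf{c}_i, \qquad T_i = \exp\Big(-\sum_{j=1}^{i-1} \sigma_j \delta_j\Big)$$

Here $\sigma_i$ and $\mathbf{c}_i$ are the density and color the network predicts at the $i$-th sample along ray $\mathbf{r}$, and $\delta_i$ is the spacing between samples. One network query per sample, per ray, per frame is why these workflows lean so heavily on GPU horsepower.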

Check out Cheng’s incredible collection of art on Instagram.

Lights, Camera, Action, WATCHHOLLIE

Compassionate, colorful, caps-lock incarnate — that’s WATCHHOLLIE. Trained as a video editor, WATCHHOLLIE experimented with a YouTube channel before discovering Twitch as a way to get back into gaming.

Her streams promote mental health awareness and inclusivity, establishing a safe place for members of the LGBTQ+ community like herself. She gives back to the creative community as a founder of WatchUs, a diversity-focused team that teaches aspiring creators how to grow their business, develop brand partnerships and improve their streaming setup.

WATCHHOLLIE and her fellow livestreamers can pick up GeForce RTX 40 Series GPUs featuring the eighth-generation NVIDIA video encoder (NVENC), which offers a 40% increase in encoding efficiency with AV1, unlocking higher resolution and crisper image quality. OBS Studio and Discord have enabled AV1 for 1440p and 4K resolution at 60 FPS.

In addition, GeForce RTX 40 Series GPUs feature dual encoders that allow creators to capture video at up to 8K and 60 FPS. When it’s time to cut a video on demand from a livestream, the dual encoders work in tandem, dividing the work automatically and slashing export times nearly in half.

Blackmagic Design’s DaVinci Resolve, the popular Voukoder plug-in for Adobe Premiere Pro (WATCHHOLLIE’s preferred software) and Jianying — the top video editing app in China — have all enabled the dual encoders through encode presets to export final files, fast.

Gaming livestreamers using GeForce RTX 40 Series GPUs will experience an unprecedented gen-to-gen frame-rate boost in PC games alongside NVIDIA DLSS 3 technology, which accelerates performance by up to 4x.

Follow and subscribe to WATCHHOLLIE’s social media channels.

Join the #WinterArtChallenge

Enter NVIDIA Studio’s #WinterArtChallenge, running through the end of the year, by sharing winter-themed art on Instagram, Twitter or Facebook for a chance to be featured on our social media channels.

Check out @Prayag_13’s winter scene full of whimsical holiday details:

Be sure to tag #WinterArtChallenge to join. Get creativity-inspiring updates directly to your inbox by subscribing to the NVIDIA Studio newsletter.

Startup Uses Speech AI to Coach Contact-Center Agents Into Boosting Customer Satisfaction

Minerva CQ, a startup based in the San Francisco Bay Area, is making customer service calls quicker and more efficient for both agents and customers, with a focus on those in the energy sector.

The NVIDIA Inception member’s name is a mashup of the Roman goddess of wisdom and knowledge — and collaborative intelligence (CQ), or the combination of human and artificial intelligence.

The Minerva CQ platform coaches contact-center agents to drive customer conversations — whether in voice or web-based chat — toward the most effective resolutions by offering real-time dialogue suggestions, sentiment analysis and optimal journey flows based on the customer’s intent. It also surfaces relevant context, articles, forms and more.

Powered by the NVIDIA Riva software development kit, Minerva CQ has best-in-class automatic speech recognition (ASR) capabilities in English, Spanish and Italian.

“Many contact-center solutions focus on automation through a chatbot, but our solution lets the AI augment humans to do a better job, because when humans and machines work together, they can accomplish more than what the human or machine alone could,” said Cosimo Spera, founder and CEO of Minerva CQ.

The platform first transcribes a conversation into text in real time. That text is then fed into Minerva CQ’s AI models that analyze customer sentiment, intent, propensity and more.

Minerva CQ then offers agents the best path to help their customers, along with other optional resolution paths.

The speech AI platform can understand voice- and text-based conversations within both the context of a specific exchange and the customer’s broader relationship with the business, according to Jack Garrett, vision architect at Minerva CQ.

Watch a demo of Minerva CQ at work:

Speech AI Powered by NVIDIA Riva

Minerva CQ last month announced that it built what it says is the first and most accurate Italian ASR model for enterprises, adding to the platform’s existing English and Spanish capabilities. The Italian ASR model has a word error rate of under 7% and is expected to be deployed early next year at a global energy company and telecoms provider.

“When we were looking for the best combination of accuracy, speed and cost to help us build the ASR model, NVIDIA Riva was at the top of our list,” Spera said.

Riva enables Minerva CQ to offer real-time responses. This means the AI platform can stream, process and transcribe conversations — all in less than 300 milliseconds, or in the blink of an eye.
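
As an illustration of the pattern, here is a minimal sketch assuming the open nvidia-riva-client Python package and a Riva server on localhost; the audio file name is a placeholder, and this is a generic sketch rather than Minerva CQ’s actual code:

```python
import riva.client

# Connect to a Riva server (the address is illustrative).
auth = riva.client.Auth(uri="localhost:50051")
asr = riva.client.ASRService(auth)

streaming_config = riva.client.StreamingRecognitionConfig(
    config=riva.client.RecognitionConfig(
        encoding=riva.client.AudioEncoding.LINEAR_PCM,
        language_code="en-US",    # Riva also ships Spanish and Italian models
        sample_rate_hertz=16000,
        enable_automatic_punctuation=True,
    ),
    interim_results=True,         # partial transcripts while the caller speaks
)

# Feed the audio in small chunks, as a live call would arrive.
audio_chunks = riva.client.AudioChunkFileIterator("call_audio.wav", chunk_n_frames=4800)
for response in asr.streaming_response_generator(audio_chunks, streaming_config):
    for result in response.results:
        if result.is_final:
            transcript = result.alternatives[0].transcript
            # ...hand the text to downstream sentiment and intent models...
            print(transcript)
```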

“Riva is also fully customizable to solve our customers’ unique problems and comes with industry-leading out-of-the-box accuracy,” said Daniel Hong, chief marketing officer at Minerva CQ. “We were able to quickly and efficiently fine-tune the pretrained language models with help from experts on the NVIDIA Riva team.”

Access to technical experts is one benefit of being part of NVIDIA Inception, a free, global program that nurtures cutting-edge startups. Spera listed AWS credits, support on experimental projects, and collaboration on go-to-market strategy among the ways Inception has bolstered Minerva CQ.

In addition to Riva, Minerva CQ uses the NVIDIA NeMo framework to build and train its conversational AI models, as well as the NVIDIA Triton Inference Server to deliver fast, scalable AI model deployment.

Complementing its focus on the customer, Minerva CQ is also dedicated to agent wellness and building capabilities to track agent satisfaction and experience. The platform enables employees to be experts at their jobs from day one — which greatly reduces stress on agents, instills confidence, and lowers attrition rates and operational costs.

Plus, Minerva CQ automatically provides summary reports of conversations, giving agents and supervisors helpful feedback, and analytics teams powerful business insights.

“All in all, Minerva CQ empowers agents with knowledge and allows them to be confident in the information they share with customers,” Hong said. “Easy customer inquiries can be tackled by automated self-service or AI chatbots, so when the agents are hit with complex questions, Minerva can help.”

Focus on Retail Energy, Electrification

Minerva CQ’s initial deployments are focused on retail energy and electrification.

For retail energy providers, the platform offers agents simple, consistent explanations of energy sources, tariff plans, billing changes and optimal spending choices.

It also assists agents to resolve complex problems for electric vehicle customers, and helps EV technicians troubleshoot infrastructure and logistics issues.

“Retail energy and electrification are inherently intertwined in the movement toward decarbonization, but they can still be relatively siloed in the market space,” Garrett said. “Minerva helps bring them together.”

Minerva CQ is deployed by a leading electric mobility company as well as one of the largest utilities in the world, according to Spera.

These clients’ contact centers across the U.S. and Mexico have seen a 40% decrease in average handle time for a customer service call thanks to Minerva CQ, Spera said. Deployment is planned to expand further into the Spanish-speaking market — as well as in countries where Italian is spoken.

“We all want to save the planet, but it’s important that change come from the bottom up by empowering end users to make steps toward decarbonization,” Spera said. “Our focus is on providing customers with information so they can best transition to clean-energy-source subscriptions.”

He added, “In the coming years, we’d like to see the brand Minerva CQ become synonymous with electrification and decarbonization.”

Learn more about NVIDIA’s work with utilities and apply to join NVIDIA Inception.

See a Sea Change: 3D Researchers Bring Naval History to Life

Museumgoers will be able to explore two sunken WWII ships as if they were scuba divers on the ocean floor, thanks to work at Curtin University in Perth, Australia.

Exhibits in development, for display in Australia and potentially further afield, will use exquisitely detailed 3D models the researchers are creating to tell the story of one of the nation’s greatest naval battles.

On Nov. 19, 1941, Australia’s HMAS Sydney (II) and Germany’s HSK Kormoran lobbed hundreds of shells in a duel that lasted less than an hour. More than 700 died, including every sailor on the Sydney. Both ships sank 8,000 feet, 130 miles off the coast of Western Australia, not to be discovered for decades.

HMAS Sydney (II) in 1940. (Photo: Allan C. Green from the State Library of Victoria)

Andrew Woods, an expert in stereoscopic 3D visualization and associate professor at Curtin, built an underwater rig with more than a dozen video and still cameras to capture details of the wrecks in 2015.

Ash Doshi, a computer vision specialist and senior research officer at Curtin, is developing and running software on NVIDIA GPUs that stitches the half-million pictures and 300 hours of video they took into virtual and printed 3D models.

3D at Battleship Scale

It’s hard, pioneering work in a process called photogrammetry. Commercially available software maxes out at around 10,000 images.

“It’s highly computationally intensive — when you double the number of images, you quadruple the compute requirements,” said Woods, who manages the Curtin HIVE, a lab with four advanced visualization systems.

“It would’ve taken a thousand years to process with our existing systems, even though they are fairly fast,” he said.

When completed next year, the work will have taken less than three years, thanks to systems at the nearby Pawsey Supercomputing Centre using NVIDIA V100 and prior-generation GPUs.

Speed Enables Iteration

Accelerated computing is critical because the work is iterative. Images must be processed, manipulated and then reprocessed.

For example, Woods said a first pass on a batch of 400 images would take 10 hours on his laptop. By contrast, he could run a first pass in 10 minutes on his system with two NVIDIA RTX A6000 GPUs awarded through NVIDIA’s Applied Research Accelerator Program.

It would take a month to process 8,000 images on the lab’s fast PCs, work the supercomputer could handle in a day. “Rarely would anyone in industry wait a month to process a dataset,” said Woods.
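
Those figures line up with the quadratic rule of thumb. A back-of-the-envelope sketch, with the constant fit to the 400-image laptop run and purely illustrative:

```python
def estimated_hours(n_images: int, ref_hours: float = 10.0, ref_images: int = 400) -> float:
    """Quadratic scaling: doubling the image count quadruples the work."""
    return ref_hours * (n_images / ref_images) ** 2

print(estimated_hours(400))      # 10.0 hours: the laptop first pass
print(estimated_hours(8_000))    # 4,000 hours, months of compute on one machine
print(estimated_hours(500_000))  # ~15.6M hours (~1,800 years), consistent in
                                 # spirit with Woods' thousand-year estimate
```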

From Films to VR

Local curators can’t wait to get the Sydney and Kormoran models on display. Half the comments on their Tripadvisor page already celebrate 3D films the team took of the wrecks.

The digital models will more deeply engage museumgoers with interactive virtual and augmented reality exhibits and large-scale 3D prints.

“These 3D models really help us unravel the story, so people can appreciate the history,” Woods said.

In a video call, Woods and Doshi show how forces embedded an anchor in the Kormoran’s hull as it sank.

The exhibits are expected to tour museums in Perth and Sydney, and potentially cities in Germany and the U.K., where the ships were built.

When the project is complete, the researchers aim to make their code available so others can turn historic artifacts on the seabed into rare museum pieces. Woods expects the software could also find commercial uses monitoring undersea pipelines, oil and gas rigs and more.

A Real-Time Tool

On the horizon, the researchers want to try Instant NeRF, an inverse rendering tool NVIDIA researchers developed to turn 2D images into 3D models in real time.

Woods imagines using it on future shipwreck surveys, possibly running on an NVIDIA DGX System on the survey vessel. It could provide previews in near real time based on images gathered by remotely operated underwater vehicles on the ocean floor, letting the team know when it has enough data to take back for processing on a supercomputer.

“We really don’t want to return to base to find we’ve missed a spot,” said Woods.

Woods’ passion for 3D has its roots in the sea.

“I saw the movie Jaws 3D when I was a teenager, and the images of sharks exploding out of the screen are in part responsible for taking me down this path,” he said.

The researchers released the video below to commemorate the 81st anniversary of the sinking of the WWII ships.

https://hive.curtin.edu.au/SK81st

A Force to Be Reckoned With: Lucid Group Reveals Gravity SUV, Built on NVIDIA DRIVE

Meet the electric SUV with magnetic appeal.

Lucid Group unveiled its next act, the Gravity SUV, during the AutoMobility Los Angeles auto show. The automaker also launched additional versions of the hit Lucid Air sedan — Air Pure and Air Touring.

Both models offer the future-ready DreamDrive Pro driver-assistance system, powered by the NVIDIA DRIVE platform.

Lucid launched the Air late last year to widespread acclaim. The luxury sedan won MotorTrend’s Car of the Year for 2022, with a chart-topping battery range of up to 516 miles and fast charging.

The newly introduced variants provide updated features for a wider audience. Air Pure is designed for agility, with a lightweight, compact battery and industry-leading aerodynamics.

Air Touring is the heart of the lineup, featuring more horsepower and battery range than the Pure and greater flexibility in customer options.

Lucid Air Pure

Gravity builds on this stellar reputation with an aerodynamic, spacious and intelligent design, all backed by the high-performance, centralized compute of NVIDIA DRIVE.

“Just as Lucid Air redefined the sedan category, so too will Gravity impact the world of luxury SUVs, setting new benchmarks across the board,” said Lucid Group CEO and CTO Peter Rawlinson.

Capable and Enjoyable

DreamDrive Pro is software-defined, continuously improving via over-the-air software updates.

It uses a rich suite of 14 cameras, one lidar, five radars and 12 ultrasonics running on NVIDIA DRIVE for robust automated driving and intelligent cockpit features, including surround-view monitoring, blind-spot display and highway assist.

In addition to a diversity of sensors, Lucid’s dual-rail power system and proprietary Ethernet Ring offer a high degree of redundancy for key systems, such as braking and steering.

The DreamDrive Pro system uses an array of sensors and NVIDIA DRIVE high-performance compute for intelligent driving features.

“The Lucid Air is at its core a software-defined vehicle, meaning a large part of the experience is delivered by the software,” Rawlinson said. “This makes the Lucid Air more capable and enjoyable with every passing update.”

Prepare to Launch

These new Lucid vehicles are nearly ready for liftoff.

The Lucid Air Touring has already begun production, and Air Pure will start in December, with customer deliveries soon to follow.

The automaker will open reservations for the Lucid Gravity in the spring, with deliveries slated to begin in 2024.

MoMA Installation Marks Breakthrough for AI Art

AI-generated art has arrived.

With a presentation making its debut this week at The Museum of Modern Art in New York City — perhaps the world’s premier institution devoted to modern and contemporary art — the AI technologies that have upended trillion-dollar industries worldwide over the past decade will get a formal introduction.

Created by pioneering artist Refik Anadol, the installation in the museum’s soaring Gund Lobby uses a sophisticated machine-learning model to interpret the publicly available visual and informational data of MoMA’s collection.

“Right now, we are in a renaissance,” Anadol said of the presentation “Refik Anadol: Unsupervised.” “Having AI in the medium is completely and profoundly changing the profession.”

Anadol is a digital media pioneer. Throughout his career, he’s been intrigued by the intersection between art and AI. His first encounter with AI as an artistic tool was at Google, where he used deep learning — and an NVIDIA GeForce GTX 1080 Ti — to create dynamic digital artworks.

In 2017, he started working with one of the first generative AI tools, StyleGAN, created at NVIDIA Research, which was able to generate synthetic images of faces that are incredibly realistic.

Anadol was more intrigued by the ability to use the tool to explore more abstract imagery, training StyleGAN not on images of faces but on modern art, and guiding the AI’s synthesis with data streaming in from optical, temperature and acoustic sensors.

Digging Deep With MoMA

Those ideas led him to an online collaboration with The Museum of Modern Art in 2021, which was exhibited by Feral File, using more than 138,000 records from the museum’s publicly available archive. The Feral File exhibit caused an online sensation, reimagining art in real time and inspiring the wave of AI-generated art that’s spread quickly through social media communities on Instagram, Twitter, Discord and Reddit this year.

This year, he returned to MoMA to dig even deeper, collaborating again with MoMA curators Michelle Kuo and Paola Antonelli on a new major installation. On view from Nov. 19 through March 5, 2023, “Refik Anadol: Unsupervised” will use AI to interpret and transform more than 200 years of art from MoMA’s collection.

“It’s an exploration not just of the world’s foremost collection of modern art — pretty much every single pioneering sculptor, painter and even game designer of the past two centuries — but a look inside the mind of AI, allowing us to see results of the algorithm processing data from MoMA’s collection, as well as ambient sound, temperature and light, and ‘dreaming,’” Anadol said.

Powering the system is a full suite of NVIDIA technologies. He relies on an NVIDIA DGX server equipped with NVIDIA A100 Tensor Core GPUs to train the model in real time. Another machine equipped with an NVIDIA RTX 4090 GPU translates the model into computer graphics, driving the exhibit’s display.

‘Bending Data’

“Refik is bending data — which we normally associate with rational systems — into a realm of surrealism and irrationality,” Michelle Kuo, the exhibit’s curator at the museum, told the New York Times. “His interpretation of MoMA’s dataset is essentially a transformation of the history of modern art.”

The installation comes amid a wave of excitement around generative AI, a technology that’s been put at the fingertips of amateur and professional artists alike with new tools such as Midjourney, OpenAI’s Dall·E, and DreamStudio.

And while Anadol’s work intersects with the surge in interest in NFT art that had the world buzzing in 2021, like AI-generated art, it goes far beyond it.

Inspired by Cutting-Edge Research

Anadol’s work digs deep into MoMA’s archives and cutting-edge AI, relying on a technology developed at NVIDIA Research called StyleGAN. David Luebke, vice president of graphics research at NVIDIA, said he first got excited about generative AI’s artistic and creative possibilities when he saw NVIDIA researcher Janne Hellsten’s demo of StyleGAN2 trained on stylized artistic portraits.

“Suddenly, one could fluidly explore the content and style of a generated image or have it react to ambient effects like sound or even weather,” Luebke said.

NVIDIA Research has been pushing forward the state of the art in generative AI since at least 2017 when NVIDIA developed “Progressive GANs,” which used AI to synthesize highly realistic, high-resolution images of human faces for the first time. This was followed by StyleGAN, which achieved even higher quality results.

Each year after that, NVIDIA released a paper that advanced the state of the art. StyleGAN has proved to be a versatile platform, Luebke explained, enabling countless other researchers and artists like Anadol to bring their ideas to life.
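
For a flavor of how artists drive it, here is the basic sampling pattern from NVIDIA’s open-source stylegan2-ada-pytorch repository; the checkpoint name is a placeholder, the repo’s dnnlib and torch_utils modules must be importable, and this is the generic API rather than Anadol’s actual pipeline:

```python
import pickle
import torch

# Load a pretrained generator checkpoint (the file name is a placeholder).
with open("network-snapshot.pkl", "rb") as f:
    G = pickle.load(f)["G_ema"].cuda()

z = torch.randn([1, G.z_dim]).cuda()   # a random point in latent space
c = None                               # class labels (unused for unconditional models)
img = G(z, c)                          # NCHW image tensor, roughly in [-1, 1]

# Walking between two latents produces the fluid morphing effect seen in
# AI artworks built on StyleGAN.
z2 = torch.randn([1, G.z_dim]).cuda()
frames = [G(torch.lerp(z, z2, t), c) for t in torch.linspace(0, 1, 8).tolist()]
```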

Democratizing Content Creation

Much more is coming. Modern generative AI models have shown the capability to generalize beyond particular subjects, such as images of human faces or cats or cars, and encompass language models that let users specify the image they want in natural language, or other intuitive ways, such as inpainting, Luebke explains.

“This is exciting because it democratizes content creation,” Luebke said. “Ultimately, generative AI has the potential to unlock the creativity of everybody from professional artists, like Refik, to hobbyists and casual artists, to school kids.”

Anadol’s work at MoMA offers a taste of what’s possible. “Refik Anadol: Unsupervised,” the artist’s first U.S. solo museum presentation, features three new digital artworks by the Los Angeles-based artist that use AI to dynamically explore MoMA’s collection on a vast 24-by-24-foot digital display. It’s as much a work of architecture as it is one of art.

“Often, AI is used to classify, process and generate realistic representations of the world,” the exhibition’s organizer, Michelle Kuo, told Archinect, a leading publication covering contemporary art and architecture. “Anadol’s work, by contrast, is visionary: it explores dreams, hallucination and irrationality, posing an alternate understanding of modern art — and of artmaking itself.”

“Refik Anadol: Unsupervised” also hints at how AI will transform our future, and Anadol thinks it will be for the better. “This will just enhance our imagination,” Anadol said. “I’m seeing this as an extension of our minds.”

For more, see our exploration of Refik Anadol’s work in NVIDIA’s AI Art Gallery.

Get the Big Picture: Stream GeForce NOW in 4K Resolution on Samsung Smart TVs

Gaming in the living room is getting an upgrade with GeForce NOW.

This GFN Thursday, kick off the weekend streaming GeForce NOW on Samsung TVs, with upcoming support for 4K resolution.

Get started with the 10 new titles streaming this week.

Plus, Yes by YTL Communications, a leading 5G provider in Malaysia, today announced it will soon bring GeForce NOW powered by Yes to gamers across the country. Stay tuned for more updates.

Go Big, Go Bold With 4K on Samsung Smart TVs

GeForce NOW is making its way to 2021 Samsung Smart TV models, and is already available through the Samsung Gaming Hub on 2022 Samsung TVs, so more players than ever can stream from GeForce NOW — no downloads, storage limits or console required.

Get tuned in to the cloud just in time for these TV streaming updates.

Even better, gaming on Samsung Smart TVs will look pixel perfect in 4K resolution. 2022 Samsung TVs and select 2021 Samsung TVs will be capable of streaming in 4K, as Samsung’s leadership in game-streaming technology and AI upscaling optimizes picture quality and the entire gaming experience.

The new TV firmware will start rolling out at the end of the month, enabling 4K resolution for Samsung Smart TV streamers with an RTX 3080 membership. RTX 3080 members will be able to stream up to 4K natively on Samsung Smart TVs for the first time, as well as get maximized eight-hour gaming sessions and dedicated RTX 3080 servers.

Here to Play Today

GFN Thursday delivers new games to the cloud every week. Jump into 10 new additions streaming today.

Delve deep into the industrial city of Tertium to combat the forces of Chaos that lurk.

Gamers who’ve preordered Warhammer 40,000: Darktide can leap thousands of years into the future a little early. Take back the city of Tertium from hordes of bloodthirsty foes in this intense, brutal action shooter streaming the Pre-Order Beta on Steam.

Members can also look for the following titles:

  • Ballads of Hongye (New release on Steam)
  • Bravery and Greed (New release on Steam)
  • TERRACOTTA (New release on Steam and Epic Games)
  • Warhammer 40,000: Darktide (New release pre-order beta access on Steam)
  • Frozen Flame (New release on Steam, Nov. 17)
  • Goat Simulator 3 (New release on Epic Games, Nov. 17)
  • Nobody — The Turnaround (New release on Steam, Nov. 17)
  • Caveblazers (Steam)
  • The Darkest Tales (Epic Games)
  • The Tenants (Epic Games)

Then jump into the new season of Rumbleverse, the play-for-free, 40-person Brawler Royale where anyone can be a champion. Take a trip on the expanded map to Low Key Key Island, master new power moves like “Jagged Edge” and earn new gear to show off your style.

And from now until Sunday, Nov. 20, snag a special upgrade to a six-month Priority Membership for just $29.99 — 40% off the standard price of $49.99. Bring a buddy to battle with you by getting them a GeForce NOW gift card.

Before you power up to play this weekend, we’ve got a question for you. Let us know your answer on Twitter or in the comments below.

Lockheed Martin, NVIDIA to Help US Speed Climate Data to Researchers

The U.S. National Oceanic and Atmospheric Administration has selected Lockheed Martin and NVIDIA to build a prototype system that accelerates the production of Earth environment monitoring outputs and their corresponding visualizations.

Using AI techniques, such a system has the potential to reduce the time needed to generate complex weather visualizations by an order of magnitude.

The first-of-its-kind project for a U.S. federal agency, the Global Earth Observation Digital Twin, or EODT, will provide a prototype to visualize terabytes of geophysical data from the land, ocean, cryosphere, atmosphere and space.

By fusing data from a broad variety of sensor sources, the system will be able to deliver information that’s not just up to date, but that decision-makers have confidence in, explained Lockheed Martin Space Senior Research Scientist Lynn Montgomery.

“We’re providing a one-stop shop for researchers, and for next-generation systems, not only for current, but for recent past environmental data,” Montgomery said. “Our collaboration with NVIDIA will provide NOAA a timely, global visualization of their massive datasets.”

Building on NVIDIA Omniverse

Building on NVIDIA Omniverse, the system has the potential to serve as a clearinghouse for scientists and researchers from a broad range of government agencies, one that can be extended over time to support a wide range of applications.

The support for the EODT pilot project is one of several initiatives at NVIDIA to develop tools and technologies for large-scale, even planetary simulations.

Last November, NVIDIA announced it would build a supercomputer, called Earth-2, devoted to predicting climate change by creating a digital twin of the planet.

NVIDIA and Lockheed Martin announced last year that they are working with the U.S. Department of Agriculture Forest Service and Colorado Division of Fire Prevention & Control to use AI and digital-twin simulation to better understand wildfires and stop their spread.

And in March, NVIDIA announced an accelerated digital twins platform for scientific computing consisting of the NVIDIA Modulus AI framework for developing physics-ML neural network models and the NVIDIA Omniverse 3D virtual-world simulation platform.

The EODT project builds on these initiatives, relying on NVIDIA Omniverse Nucleus to allow different applications to quickly import and export custom, visualizable assets to and from the effort’s central data store.

“This is a blueprint for a complex system using Omniverse, where we will have a fusion of sensor data, architectural data and AI inferred data all combined with various visualization capacities deployed to the cloud and various workstations,” said Peter Messmer, senior manager in the HPC Developer Technology group at NVIDIA. “It’s a fantastic opportunity to highlight all these components with a real-world example.”

A Fast-Moving Effort

The effort will move fast, with a demonstration of the system’s ability to visualize sea surface temperature data slated for next September. The system will take advantage of GPU computing instances from Amazon Web Services and NVIDIA DGX and OVX servers on premises.

The fast, flexible system will provide a prototype to visualize geophysical variables from a broad range of NOAA satellite and ground data sources.

These include temperature and moisture profiles, sea surface temperatures, sea ice concentrations and solar wind data, among other sources.

That data will be collected by Lockheed Martin’s OpenRosetta3D software, which is widely used for sophisticated large-scale image analysis, workflow orchestration and sensor fusion by government agencies, such as NASA, and private industry.

NVIDIA will support the development of one-way connectors to import “snapshots” of processed geospatial datasets from Lockheed’s OpenRosetta3D technology into NVIDIA Omniverse Nucleus as Universal Scene Description inputs.

USD is an open-source, extensible ecosystem for describing, composing, simulating and collaborating within 3D worlds, originally invented by Pixar Animation Studios.
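
As a small illustration of what authoring such a snapshot can look like, here is a sketch using Pixar’s pxr Python bindings; the file name, prim paths and attribute are invented for this example rather than drawn from the EODT design:

```python
from pxr import Sdf, Usd, UsdGeom

# Author one geophysical snapshot as a USD layer. In an Omniverse
# deployment the layer would live on a Nucleus server (an omniverse://
# URL) rather than on local disk.
stage = Usd.Stage.CreateNew("sst_snapshot.usda")
UsdGeom.Xform.Define(stage, "/EarthObservation")
grid = UsdGeom.Mesh.Define(stage, "/EarthObservation/SeaSurfaceTemperature")

# Tag the prim with metadata that downstream viewers can query.
prim = grid.GetPrim()
attr = prim.CreateAttribute("observationTime", Sdf.ValueTypeNames.String)
attr.Set("2022-11-22T00:00:00Z")

stage.GetRootLayer().Save()
```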

Omniverse Nucleus will be vital to making the data available fast, in part because of Nucleus’s ability to relay just what’s changed in a dataset, Montgomery explained.

Nucleus will, in turn, deliver those USD datasets to Lockheed’s Agatha 3D viewer, based on Unity, allowing users to quickly see data from multiple sensors on an interactive 3D earth and space platform.

The result is a system that will help researchers at NOAA, and, eventually, elsewhere, make decisions faster based on the latest available data.
