Get Your Wish: Genshin Impact Coming to GeForce NOW

Greetings, Traveler.

Prepare for adventure. Genshin Impact, the popular open-world action role-playing game, is leaving limited beta and launching for all GeForce NOW members next week.

Gamers can get their game on today with the six games joining the GeForce NOW library this week.

As announced last week, Warhammer 40,000: Darktide is coming to the cloud at launch — with GeForce technology. This September, members will be able to leap thousands of years into the future to the time of the Space Marines, streaming on GeForce NOW with NVIDIA DLSS and more.

Plus, the 2.0.41 GeForce NOW app update brings a highly requested feature: in-stream copy-and-paste support from the clipboard while streaming from the PC and Mac apps — so there’s no need to enter a long, complex password for the digital store. Get to your games even faster with this new capability.

GeForce NOW is also giving mobile gamers more options by bringing the perks of RTX 3080 memberships and PC gaming at 120 frames per second to all devices with support for 120Hz phones. The capability is rolling out in the coming weeks.

Take a Trip to Teyvat

Following a successful limited beta and great feedback from members, Genshin Impact is coming next week to everyone streaming on GeForce NOW.

Embark on a journey as a traveler from another world, stranded in the fantastic land of Teyvat. Search for your missing sibling in a vast continent made up of seven nations. Master the art of elemental combat and build a dream team of over 40 uniquely skilled playable characters – like the newest additions of Yelan and Kuki Shinobu – each with their own rich stories, personalities and combat styles.

Experience the immersive campaign, dive deep into rich quests alongside iconic characters and complete daily challenges. Charge head-on into battles solo or invite friends to join the adventures. The world is constantly expanding, and soon you’ll be able to take it wherever you go, streaming to underpowered PCs, Macs and Chromebooks on GeForce NOW.

RTX 3080 members can level up their gaming for the best experience, streaming at 4K resolution and 60 frames per second on the PC and Mac apps.

Let the Gaming Commence

All of the action this GFN Thursday kicks off with six new games arriving on the cloud. Members can also gear up for Rainbow Six Siege Year 7 Season 2.

Get ready for a new Operator, Team Deathmatch map and more in “Rainbow Six Siege” Year 7 Season 2.

Members can look for the following streaming this week:

Finally, members still have a chance to stream the PC Building Simulator 2 open beta before it ends on Monday, June 20. Experience deeper simulation, an upgraded career mode and powerful new customization features to bring your ultimate PC to life.

To start your weekend gaming adventures, we’ve got a question. Let us know your thoughts on Twitter or in the comments below.


All-In-One Financial Services? Vietnam’s MoMo Has a Super-App for That

For younger generations, paper bills, loan forms and even cash might as well be in a museum. Smartphones in hand, their financial services largely take place online.

The financial-technology companies that serve them are in a race to develop AI that can make sense of the vast amount of data the companies collect — both to provide better customer service and to improve their own backend operations.

Vietnam-based fintech company MoMo has developed a super-app that includes payment and financial transaction processing in one self-contained online commerce platform. The convenience of this all-in-one mobile platform has already attracted over 30 million users in Vietnam.

To improve the efficiency of the platform’s chatbots, know-your-customer (eKYC) systems and recommendation engines, MoMo uses NVIDIA GPUs running in Google Cloud. It uses NVIDIA DGX systems for training and batch processing.

In just a few months, MoMo has achieved impressive results, speeding the development of solutions that are more robust and easier to scale. Using NVIDIA GPUs for eKYC inference tasks has resulted in a 10x speedup compared with CPUs, the company says. For the MoMo Face Payment service, using TensorRT has reduced training and inference time by 10x.
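
For readers curious what that TensorRT step looks like in practice, here’s a minimal sketch that builds an FP16 engine from an ONNX model with TensorRT’s Python API. The file names are hypothetical stand-ins, not MoMo’s actual pipeline.

```python
import tensorrt as trt

# Build an optimized TensorRT engine from an ONNX model.
# The model file name is a hypothetical placeholder.
logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(
    1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH)
)
parser = trt.OnnxParser(network, logger)

with open("face_verification.onnx", "rb") as f:
    if not parser.parse(f.read()):
        raise RuntimeError(str(parser.get_error(0)))

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)  # lower precision for faster inference

# Serialize the optimized engine for deployment.
engine_bytes = builder.build_serialized_network(network, config)
with open("face_verification.engine", "wb") as f:
    f.write(engine_bytes)
```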

AI Offers a Different Perspective

Tuan Trinh, director of data science at MoMo, describes his company’s use of AI as a way to get a different perspective on its business. One such project processes vast amounts of data and turns it into computerized visuals or graphs that can then be analyzed to improve connectivity between users in the app.

MoMo developed its own AI algorithm that uses over a billion data points to direct recommendations of additional services and products to its customers. These offerings help maintain a line of communication with the company’s user base that helps boost engagement and conversion.

The company also deploys a recommendation box on the home screen of its super-app. This has dramatically improved its click-through rate, as the AI prompts customers with useful recommendations and keeps them engaged.

With AI, MoMo says it can process the habits of 10 million active users over the last 30-60 days to train its predictive models. In addition, NVIDIA Triton Inference Server helps unify the serving flows for recommendation engines, which significantly reduces the effort to deploy AI applications in production environments. TensorRT has also contributed a 3x performance improvement to MoMo’s payment services AI model inference, boosting the customer experience.
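
For a sense of what serving through Triton looks like from the application side, here’s a minimal sketch using Triton’s Python HTTP client. The endpoint, model name, tensor names and shapes are hypothetical.

```python
import numpy as np
import tritonclient.http as httpclient

# Connect to a running Triton server (endpoint and model are hypothetical).
client = httpclient.InferenceServerClient(url="localhost:8000")

# One batch of user/item feature vectors for a recommendation model.
features = np.random.rand(1, 128).astype(np.float32)
infer_input = httpclient.InferInput("input__0", list(features.shape), "FP32")
infer_input.set_data_from_numpy(features)

response = client.infer(model_name="recommender", inputs=[infer_input])
scores = response.as_numpy("output__0")  # model's ranking scores
print(scores)
```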

Chatbots Advance the Conversation

MoMo will use AI-powered chatbots to scale up faster when accommodating and engaging with users. Chatbot services are especially effective in mobile apps, which tend to be popular with younger users, who often prefer them over making phone calls to customer service.

Chatbot users can inquire about a product and get the support they need to evaluate it before purchasing — all from one interface — which is essential for a super-app like MoMo’s that functions as a one-stop-shop.

The chatbots are also an effective vehicle for upselling or suggesting additional services, MoMo says. When combined with machine learning, it’s possible to categorize target audiences for different products or services to customize their experience with the app.

AI chatbots have the additional benefit of freeing up MoMo’s customer service team to handle other important tasks.

Better Credit Scoring

Using AI algorithms, credit history data from all of MoMo’s 30 million-plus users can feed the models that control risk for its financial services. MoMo has applied credit scoring to the lending services within its super-app. Because the company doesn’t depend solely on deep learning for less complex tasks, its development team has been able to obtain higher accuracy with shorter processing times.

The MoMo app takes less than two seconds to make a lending decision, yet more accurate AI predictions still reduce its exposure to risky borrowers. This helps keep customers from taking on too much debt, and keeps MoMo from missing out on potential revenue.

Since AI is capable of processing both structured and unstructured data, it’s able to incorporate information beyond traditional credit scores, like whether customers spend their money on necessities or luxuries, to assess a borrower’s risk more accurately.
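
The article doesn’t disclose MoMo’s exact models, but for tabular credit-risk tasks like this, gradient-boosted trees are a common alternative to deep learning: fast to train, fast to score and strong on structured data. A minimal sketch with scikit-learn on synthetic stand-in data:

```python
import numpy as np
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-ins for tabular credit features, e.g. repayment
# history, necessity-vs-luxury spend ratios, app activity.
rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 8))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(size=10_000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Gradient-boosted trees score in milliseconds, comfortably inside
# a sub-two-second lending decision.
model = HistGradientBoostingClassifier(max_iter=200)
model.fit(X_train, y_train)

default_risk = model.predict_proba(X_test)[:, 1]  # probability of default
print("test accuracy:", model.score(X_test, y_test))
```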

Future of AI in Fintech

With fintechs increasingly applying AI to their massive data stores, MoMo’s team predicts the industry will need to evaluate how to do so in a way that keeps user data safe — or risk losing customer loyalty. MoMo already plans to expand its use of graph neural networks and other models, based on their proven ability to dramatically improve its operations.

The MoMo team also believes that AI could one day make credit scores obsolete. Since AI is able to make decisions based on broader unstructured data, it’s possible to determine loan approval by considering other risks besides a credit score. This would help open up the pool of potential users on fintech apps like MoMo’s to people in underserved and underbanked communities, who may not have credit scores, let alone “good” ones.

Around one in four American adults are “underbanked,” which makes it more difficult for them to get a loan or credit card, and more than half of Africa’s population is completely “credit invisible,” with neither a bank account nor a credit score. MoMo believes AI could bring banking access to communities like these and open up a new user base for fintech apps at the same time.

Explore NVIDIA’s AI solutions and enterprise-level AI platforms driving innovation in financial services. 


A Breakthrough Preview: JIDU Auto Debuts Intelligent Robo-01 Concept Vehicle, Powered by NVIDIA DRIVE Orin

JIDU Auto sees a brilliant future ahead for intelligent electric vehicles.

The EV startup, backed by tech titan Baidu, took the wraps off the Robo-01 concept vehicle last week during its virtual ROBODAY event. The robot-inspired, software-defined vehicle features cutting-edge AI capabilities powered by the high-performance NVIDIA DRIVE Orin compute platform.

The sleek compact SUV provides a glimpse of JIDU’s upcoming lineup. It’s capable of level 4 autonomous driving: safely operating at highway speeds and on busy urban roads, and performing driverless valet parking.

The Robo-01 also showcases a myriad of design innovations, including a retractable yoke steering wheel that folds under the dashboard during autonomous driving mode, as well as lidar sensors that extend and retract from the hood. And it enables human-like interaction between passengers and the vehicle’s in-cabin AI through perception and voice recognition.

JIDU is slated to launch a limited production version of the robocar later this year.

Continuous Innovation

A defining feature of the Robo-01 concept is its ability to improve by adding new intelligent capabilities throughout the life of the vehicle.

These updates are delivered over the air, which requires a software-defined vehicle architecture built on high-performance AI compute. The Robo-01 has two NVIDIA DRIVE Orin systems-on-a-chip (SoCs) at the core of its centralized computer system, which provide ample compute for autonomous driving and AI features, with headroom to add new capabilities.

DRIVE Orin is a highly advanced autonomous vehicle processor. This supercomputer on a chip is capable of delivering up to 254 trillion operations per second (TOPS) to handle the large number of applications and deep neural networks that run simultaneously in autonomous vehicles and robots, while meeting safety standards such as ISO 26262 ASIL-D.

The two DRIVE Orin SoCs at the center of JIDU vehicles will deliver more than 500 TOPS of performance to achieve the redundancy and diversity necessary for autonomous operation and in-cabin AI features.

Even More in Store

JIDU will begin taking orders in 2023 for the production version of the Robo-01, with deliveries scheduled for 2024.

The automaker plans to unveil the design of its second production model at this year’s Guangzhou Auto Show in November.

Jam-packed with intelligent features and room to add even more, the Robo-01 shows the incredible possibilities that future electric vehicles can achieve with a centralized, software-defined AI architecture.


The Data Center’s Traffic Cop: AI Clears Digital Gridlock

Gal Dalal wants to ease the commute for those who work from home — or the office.

The senior research scientist at NVIDIA, who is part of a 10-person lab in Israel, is using AI to reduce congestion on computer networks.

For laptop jockeys, a spinning circle of death — or worse, a frozen cursor — is as bad as a sea of red lights on the highway. Like rush hour, it’s caused by a flood of travelers angling to get somewhere fast, crowding and sometimes colliding on the way.

AI at the Intersection

Networks use congestion control to manage digital traffic. It’s basically a set of rules embedded into network adapters and switches, but as the number of users on networks grows, their conflicts can become too complex to anticipate.

AI promises to be a better traffic cop because it can see and respond to patterns as they develop. That’s why Dalal is among many researchers around the world looking for ways to make networks smarter with reinforcement learning, a type of AI that rewards models when they find good solutions.

But until now, no one’s come up with a practical approach for several reasons.

Racing the Clock

Networks need to be both fast and fair so no request gets left behind. That’s a tough balancing act when no one driver on the digital road can see the entire, ever-changing map of other drivers and their intended destinations.

And it’s a race against the clock. To be effective, networks need to respond to situations in about a microsecond, or one-millionth of a second.

To smooth traffic, the NVIDIA team created new reinforcement learning techniques inspired by state-of-the-art computer game AI and adapted them to the networking problem.

Part of their breakthrough, described in a 2021 paper, was coming up with an algorithm and a corresponding reward function for a balanced network based only on local information available to individual network streams. The algorithm enabled the team to create, train and run an AI model on their NVIDIA DGX system.
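
The paper’s exact reward isn’t reproduced here, but the core idea, a reward computed only from signals each flow can observe locally, can be sketched roughly like this:

```python
def local_reward(rtt, base_rtt, rate, target_rate):
    """Toy congestion-control reward using only per-flow local signals.

    An illustration of the idea, not the reward from NVIDIA's paper:
    rtt vs. base_rtt exposes queueing delay, while rate vs. target_rate
    exposes deviation from a fair share of bandwidth.
    """
    delay_penalty = (rtt - base_rtt) / base_rtt  # inflated latency is bad
    fairness_penalty = abs(rate - target_rate) / target_rate
    return -(delay_penalty + fairness_penalty)

# A flow seeing 6us round trips against a 5us baseline, at 80% of its share.
print(local_reward(rtt=6e-6, base_rtt=5e-6, rate=80e9, target_rate=100e9))
```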

A Wow Factor

Dalal recalls the meeting where a fellow Nvidian, Chen Tessler, showed the first chart plotting the model’s results on a simulated InfiniBand data center network.

“We were like, wow, ok, it works very nicely,” said Dalal, who wrote his Ph.D. thesis on reinforcement learning at Technion, Israel’s prestigious technical university.

“What was especially gratifying was we trained the model on just 32 network flows, and it nicely generalized what it learned to manage more than 8,000 flows with all sorts of intricate situations, so the machine was doing a much better job than preset rules,” he added.

Reinforcement learning (purple) outperformed all rule-based congestion control algorithms in NVIDIA’s tests.

In fact, the algorithm delivered at least 1.5x better throughput and 4x lower latency than the best rule-based technique.

Since the paper’s release, the work has won praise as a real-world application that shows the potential of reinforcement learning.

Processing AI in the Network

The next big step, still a work in progress, is to design a version of the AI model that can run at microsecond speeds using the limited compute and memory resources in the network. Dalal described two paths forward.

His team is collaborating with the engineers designing NVIDIA BlueField DPUs to optimize the AI models for future hardware. BlueField DPUs aim to run an expanding set of communications jobs inside the network, offloading tasks from overburdened CPUs.

Separately, Dalal’s team is distilling the essence of its AI model into a machine learning technique called boosted trees, a series of yes/no decisions that’s nearly as smart but much simpler to run. The team aims to present its work later this year in a form that could be immediately adopted to ease network traffic.
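
Conceptually, that distillation treats the trained policy as a teacher: label states with the policy’s actions, then fit trees to imitate it. A hedged sketch with scikit-learn, using a toy stand-in for the real policy:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Toy stand-in for the trained RL policy: maps local network state
# (e.g. RTT, rate, queue signals) to a rate-adjustment action.
def rl_policy(state: np.ndarray) -> float:
    return float(np.tanh(state @ np.array([0.5, -1.0, 0.2])))

# 1) Generate (state, action) pairs from the teacher policy.
rng = np.random.default_rng(0)
states = rng.normal(size=(20_000, 3))
actions = np.array([rl_policy(s) for s in states])

# 2) Fit boosted trees to imitate the policy: nearly as smart, but just
#    a series of yes/no decisions that's cheap enough to run in-network.
student = GradientBoostingRegressor(n_estimators=100, max_depth=3)
student.fit(states, actions)

print("imitation R^2:", student.score(states, actions))
```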

A Timely Traffic Solution

To date, Dalal has applied reinforcement learning to everything from autonomous vehicles to data center cooling and chip design. When NVIDIA acquired Mellanox in April 2020, the NVIDIA Israel researcher started collaborating with his new colleagues in the nearby networking group.

“It made sense to apply our AI algorithms to the work of their congestion control teams, and now, two years later, the research is more mature,” he said.

It’s good timing. Recent reports of double-digit increases in Israel’s car traffic since pre-pandemic times could encourage more people to work from home, driving up network congestion.

Luckily, an AI traffic cop is on the way.


3D Environment Artist Jacinta Vu Sets the Scene ‘In the NVIDIA Studio’

Editor’s note: This post is part of our weekly In the NVIDIA Studio series, which celebrates featured artists, offers creative tips and tricks, and demonstrates how NVIDIA Studio technology accelerates creative workflows. 

3D environment artist Jacinta Vu joins us In the NVIDIA Studio this week, showcasing her video-game-inspired scene Royal Library and 3D content creation workflow.

Based in Cincinnati, Vu specializes in transforming 2D concept art into 3D models and scenes, a critical contribution she made to The Dragon Prince from Wonderstorm Games.

Vu’s work displays a variety of colors and textures, from individual models to fully fleshed-out scenes.

Her artistic endeavors often start with hand-drawn, low-poly game assets that look like beautiful paintings, the style she originally intended for Royal Library.

“Around the time of Royal Library, my style was very hand-painted and I wanted to work more towards League of Legends and World of Warcraft styles,” Vu said. “My vision for this project, however, was very different. Royal Library is based on concept art and very different if you compare it.”

Fine attention to detail on individual models is the foundation of creating a stunning scene.

Vu began her creative workflow by crafting 3D models in Autodesk Maya, slowly building out the larger scene. Her GeForce RTX 2080 GPU unlocked the GPU-accelerated viewport, making her modeling and animation workflows faster and more interactive. This left her free to ideate and unlock creativity, all while saving valuable time.

“Being able to make those fast, precise tweaks was really nice,” Vu said. “Especially since, when you’re making a modular kit for an interior versus an exterior, there is less room to mess up because buildings are made to be perfect structurally.”

Practice makes perfect. The NVIDIA Studio YouTube channel hosts many helpful tutorials, including how to quickly model a scene render using a blocking technique in Autodesk Maya.

Vu then used ZBrush’s customizable brushes to shape and sculpt some models in finer detail.

Next, Vu deployed Marmoset Toolbag and baked her models in mere seconds with RTX acceleration, saving rendering time later in the process.

Vu then shifted gears to lighting, where her mentor encouraged her to go big, literally, saying, “Wouldn’t it be cool to do all this bounce lighting in this big, expansive building?”

Here, Vu experimented with lighting techniques that take advantage of several GPU-accelerated features. In Unreal Engine 4.26, RTX-accelerated ray tracing and NVIDIA DLSS, powered by AI and Tensor Cores, make scene refinement simpler and faster. With the release of Unreal Engine 5, Vu then tried Lumen, UE5’s fully dynamic global illumination system, which gives her the ability to light her scene in stunning detail.

Composition is a key part of the process, noted Vu: “When building a composition, you really want to look into the natural lines of architecture that lead your eye to a focal point.”

Normally Vu would apply her hand-painted texture style to the finished model, but as she continued to refine the scene, it made more and more sense to lean into realistic visuals, especially with RTX GPU hardware to support her creative ambition.

“It’s actually really weird, because I think I was stuck in the process for a while where I had lighting set up, the camera set up, the models were done except for textures,” said Vu. “For me that was hard, because I am from a hand-painted background and switching textures was nerve wracking.”

Applying realistic textures and precise lighting brings the Royal Library to life.

Vu created her textures in Adobe Photoshop and then used Substance 3D Painter to apply various colors and materials directly to her 3D models. NVIDIA RTX and NVIDIA Iray technology in the viewport enable Vu to edit in real time and use ray-traced baking for faster rendering speeds — all accelerated by her GPU.

Vu returned to Unreal Engine 5 to animate the scene using the Sequencer feature. The sparkly effect comes from a godray, amplified by particle effects, combined with atmospheric fog to fill the room.


All that’s left are the final renders. Vu renders her full-fidelity scene at lightning speed with UE5’s RTX-accelerated Path Tracer.

At last, the Royal Library is ready for visitors, friends and distinguished guests.

Vu, proud to have finally completed Royal Library, reflected on her creative journey, saying, “In the last stretch, I said, ‘I actually know how to do this.’ Once again I was in my head thinking I couldn’t do something, but it was freeing and it’s the type of thing where I learned so much for my next one. I know I can do a lot more a lot quicker, because I know how to do it and I can keep practicing, so I can get to the quality I want.”

NVIDIA Studio exists to unlock creative potential. It provides the resources, innovation and know-how to assist passionate content creators, like Vu.

3D environment artist Jacinta Vu is on ArtStation and Twitter.

Follow NVIDIA Studio on Instagram, Twitter and Facebook. Access tutorials on the Studio YouTube channel and get updates directly in your inbox by subscribing to the NVIDIA Studio newsletter.


Powered Up: 5G and VR Accelerate Vehicle Battery Design

The scenic route between Wantage, a small town in Oxfordshire, and Coventry in the U.K. meanders up steep hills, past the birthplace of Shakespeare and around 19th-century English bathhouses.

A project using edge computing and the world’s first 5G-enabled VR technology is enabling two engineering teams in those locales, about 70 miles apart, to collaborate as if they were in the same room.

The project is taking place at Hyperbat, the U.K.’s largest independent electric vehicle battery manufacturer. The company’s engineers are able to work simultaneously on a 1:1-scale digital twin of an EV battery.

They can immerse themselves in virtual tasks that mimic real life thanks to renders created using NVIDIA GPUs, RTX Virtual Workstation software and NVIDIA CloudXR technology. The digital transformation results in reduced inefficiencies and faster design processes.

Working in a New Reality

The team at Hyperbat, in partnership with BT, Ericsson, the GRID Factory, Masters of Pie, Qualcomm and NVIDIA, has developed a proof of concept that uses VR to power collaborative sessions.

Using a digital twin with VR delivers greater clarity during the design process. Engineers can work together from anywhere to effectively identify and rectify errors during the vehicle battery design process, making projects more cost-effective.

“This digital twin solution at Hyperbat is the future of manufacturing,” said Marc Overton, managing director of Division X, part of BT’s Enterprise business. “It shows how a 5G private network can provide the foundation for a whole host of new technologies which can have a truly transformative effect in terms of collaboration, innovation and speeding up the manufacturing process.”

Masters of Pie’s collaboration engine, called Radical, delivers a real-time extended reality (XR) experience that allows design and manufacturing teams to freely interact with a 3D, life-size model of an electric vehicle battery. This gives the Hyperbat team a single source of truth for each project — no need for numerous iterations.

The 5G-enabled VR headset, powered by the Qualcomm Snapdragon XR2 platform, gives the team an untethered experience that can be launched with just one click. Designed specifically to address all the challenges of extended reality, it doesn’t require a lengthy setup, nor the importing and exporting of data. Designers can put on their headsets and get straight to work.

Speed Is Key

The private 5G network at Hyperbat, deployed using Ericsson radios, provides faster speeds, more reliable connections and the ultra-low latency needed for immediate response times.

Combining 5G with the cloud and XR removes inefficiencies in design processes and speeds up production lines, improvements that could greatly benefit the wider manufacturing sector.

And using Project Aurora — NVIDIA’s CloudXR and RTX Virtual Workstation software platform for XR streaming at the edge of the 5G network — large amounts of data can be rapidly processed on remote computers before being streamed to VR headsets with ultra-low latency.

Innovation on a New Scale

AI is reshaping almost every industry. VR and augmented reality open windows for AI in industry and create new design possibilities, with 5G making the technology more accessible.

“Hyperbat’s use case is another demonstration of how 5G and digitalization can really help boost the U.K.’s economy and industry,” said Katherine Ainley, CEO of Ericsson U.K. and Ireland. This technology “can really drive efficiency and help us innovate on a whole new scale,” she said.

Learn more about NVIDIA CloudXR.


From Code to Clinic, Smart Hospital Tech Boosts Efficiency, Sustainability in Medicine

NVIDIA is collaborating with clinical organizations across Europe to bring AI to the point of care, bolstering clinical pathways with efficiency gains and new data dimensions that can be included in medical decision-making processes.

The University Hospital Essen, in northwestern Germany, is one such organization taking machine learning from the bits to the bedside — using NVIDIA technology and AI to build smart hospitals of the future.

Jens Kleesiek and Felix Nensa, professors at the School of Medicine of the University of Duisburg-Essen, are part of a four-person team leading the research groups that established the Institute for Artificial Intelligence in Medicine (IKIM). The technology developed by IKIM is integrated with the IT infrastructure of University Hospital Essen.

IKIM hosts a data annotation lab, overseen by a team of board-certified radiologists, that accelerates the labeling of anatomic structures in medical images using MONAI, an open-source, PyTorch-based framework for building, training, labeling and deploying AI models for healthcare imaging.

MONAI was created by NVIDIA in collaboration with over a dozen leading clinical and research organizations, including King’s College London.

IKIM researchers also use self-supervised learning to pretrain AI models that generate high-quality labels for the hospital’s CT scans, MRIs and more.

Additionally, the IKIM team has developed a smart hospital information platform, or SHIP, an AI-based central healthcare data integration platform and deployment engine. The platform is used by researchers and clinicians to conduct real-time analysis of the slew of data in university hospitals — including medical imaging, radiology reports, clinic notes and patient interviews.

SHIP can, for example, flag an abnormality on a radiology report and notify physicians via real-time push notifications, enabling quicker diagnoses and treatments for patients. The AI can also pinpoint data-driven associations between healthcare metrics like genetic traits and patient outcomes.

“We want to solve real-world problems and bring the solutions right into the clinics,” Kleesiek said. “The SHIP framework is capable of delivering deep learning algorithms that analyze data straight to the clinicians who are at the point of care.”

Plus, increased workflow efficiency — enabled by AI — means increased sustainability within hospitals.

Making Hospitals Smarter

Nensa says his hospital currently has close to 500 IT systems, including those for hospital information, laboratories and radiology. Each contains critical patient information that’s interrelated — but data from disparate systems can be difficult to connect or to draw machine learning-based insights from.

SHIP connects the data from all such systems by automatically translating it into a description standard called Fast Healthcare Interoperability Resources, or FHIR, which is commonly used in medicine to exchange electronic health records. SHIP currently encompasses more than 1.2 billion FHIR resources.
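
For a sense of the target format, here’s roughly what a single lab value looks like once translated into a FHIR Observation resource. The patient reference and values are illustrative:

```python
# A hemoglobin lab result expressed as a FHIR Observation resource.
# The coding follows the LOINC standard; patient and values are made up.
observation = {
    "resourceType": "Observation",
    "status": "final",
    "code": {
        "coding": [{
            "system": "http://loinc.org",
            "code": "718-7",
            "display": "Hemoglobin [Mass/volume] in Blood",
        }]
    },
    "subject": {"reference": "Patient/12345"},
    "effectiveDateTime": "2022-06-01T08:30:00+02:00",
    "valueQuantity": {
        "value": 13.2,
        "unit": "g/dL",
        "system": "http://unitsofmeasure.org",
        "code": "g/dL",
    },
}
```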

Once converted to FHIR, the information can be easily accessed by data scientists, researchers and clinicians for real-time AI training and analysis based on NVIDIA GPUs and DGX A100 systems. This makes it possible for labor-intensive tasks, such as liver volumetry prior to living donor liver transplantation or bone age estimation in children, to be performed fully automatically in the background, instead of requiring a half-hour of manual work by a radiologist.

“The more artificial intelligence is at work in a hospital, the more patients can enjoy human intelligence,” Nensa said. “As AI provides doctors and nurses relief from repetitive tasks like data retrieval and annotation, the medical professionals can focus on what they really want to do, which is to be there and care for their patients.”

NVIDIA DGX A100 systems power IKIM’s AI training and inference. NVIDIA Triton Inference Server enables fast and scalable concurrent serving of AI models within the clinic.

The IKIM team also uses NVIDIA FLARE, an open-source platform for federated learning, which allows data scientists to develop generalizable and robust AI models while maintaining patient privacy.
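
NVIDIA FLARE’s own APIs aren’t shown here, but the underlying federated-learning idea is that each hospital trains on its own data and shares only model updates, which a central server averages. A bare-bones sketch of that loop:

```python
import numpy as np

def local_update(weights: np.ndarray, local_data) -> np.ndarray:
    """Stand-in for one hospital's training step on its private data."""
    X, y = local_data
    grad = X.T @ (X @ weights - y) / len(y)  # gradient of squared error
    return weights - 0.1 * grad

def federated_round(global_weights, sites):
    # Each site trains locally; only weights leave the hospital, never data.
    updates = [local_update(global_weights.copy(), data) for data in sites]
    return np.mean(updates, axis=0)  # federated averaging

rng = np.random.default_rng(0)
sites = [(rng.normal(size=(100, 4)), rng.normal(size=100)) for _ in range(3)]
weights = np.zeros(4)
for _ in range(20):
    weights = federated_round(weights, sites)
print(weights)
```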

Smarter Equals Greener

In addition to reducing physician workload and increasing time for patient care, AI in hospitals boosts sustainability efforts.

As a highly specialized medical center, the University Hospital Essen must be available year-round for reliable patient treatment, operating 24 hours a day. As a result, patient-oriented, cutting-edge medicine is traditionally associated with high energy consumption.

SHIP helps hospitals increase efficiency, automating tasks and optimizing processes to reduce friction in the workflow — which saves energy. According to Kleesiek, IKIM reuses the energy emitted by GPUs in the data center, which also helps to make the University Hospital Essen greener.

“NVIDIA is providing all of the layers for us to get the most out of the technology, from software and hardware to training led by expert engineers,” Nensa said.

In April, NVIDIA experts hosted a workshop at IKIM, featuring lectures and hands-on training on GPU-accelerated deep learning, data science and AI in medicine. The workshop led IKIM to kickstart additional projects using AI for medicine — including a research contribution to MONAI.

In addition, IKIM is building SmartWard technology to provide an end-to-end AI-powered patient experience in hospitals, from service robots in waiting areas to automated discharge reports.

For the SmartWard project, the IKIM team is considering integrating the NVIDIA Clara Holoscan platform for medical device AI computing.

Subscribe to NVIDIA healthcare news and watch IKIM’s NVIDIA GTC session on demand.

Feature image courtesy of University of Duisburg-Essen.


Out of This World: ‘Mass Effect Legendary Edition’ and ‘It Takes Two’ Lead GFN Thursday Updates

Some may call this GFN Thursday legendary as Mass Effect Legendary Edition and It Takes Two join the GeForce NOW library.

Both games expand the available number of Electronic Arts games streaming from our GeForce cloud servers, and are part of 10 new additions this week.

Adventure Awaits In The Cloud

Relive the saga of Commander Shepard in the highly acclaimed “Mass Effect” trilogy with Mass Effect Legendary Edition (Steam and Origin). One person is all that stands between humanity and the greatest threat it’s ever faced. With each action controlling the outcome of every mission, every relationship, every battle and even the fate of the galaxy itself, you decide how the story unfolds.

Play as clashing couple Cody and May, two humans turned into dolls and trapped in a fantastical world in It Takes Two (Steam and Origin). Challenged with saving their relationship, master unique and connected abilities to help each other across an abundance of obstacles and enjoy laugh-out-loud moments. Invite a friend to join for free with Friend’s Pass and work as a team in this heartfelt and hilarious experience.

GeForce NOW gamers can experience both of these beloved games today across compatible devices. RTX 3080 members can take Mass Effect Legendary Edition to the max with 4K resolution at 60 frames per second on the PC and Mac apps. They can also take It Takes Two on the go, streaming at 120 frames per second on select mobile phones.

Plus, RTX 3080 members get the perks of ultra-low latency, dedicated RTX 3080 servers and eight-hour gaming sessions to support their play.

No Time Like Playtime

Recruitment, budget, strategy: you make all the decisions in Pro Cycling Manager 2022.

GFN Thursday always means more great gaming. This week comes with 10 new games available to stream on the cloud:

Finally, as you begin your quest known as “The Weekend,” we’ve got a question for you. Let us know your response on Twitter or in the comments below.


Stunning Insights from James Webb Space Telescope Are Coming, Thanks to GPU-Powered Deep Learning

NVIDIA GPUs will play a key role in interpreting data streaming in from the James Webb Space Telescope, with NASA preparing to release the first full-color images from the $10 billion scientific instrument next month.

The telescope’s iconic array of 18 interlocking hexagonal mirrors, which span a total of 21 feet 4 inches, will be able to peer far deeper into the universe, and deeper into the universe’s past, than any tool to date, unlocking discoveries for years to come.

GPU-powered deep learning will play a key role in several of the highest-profile efforts to process data from the revolutionary telescope positioned a million miles away from Earth, explains UC Santa Cruz Astronomy and Astrophysics Professor Brant Robertson.

“The JWST will really enable us to see the universe in a new way that we’ve never seen before,” said Robertson, who is playing a leading role in efforts to use AI to take advantage of the unprecedented opportunities JWST creates. “So it’s really exciting.”

High-Stakes Science

Late last year, Robertson was among the millions tensely following the runup to the launch of the telescope, developed over the course of three decades, and loaded with instruments that define the leading edge of science.

The JWST’s Christmas Day launch went better than planned, allowing the telescope to slide into a Lagrange point — a kind of gravitational eddy in space that allows an object to “park” indefinitely — and extending the telescope’s usable life to more than 10 years.

“It’s working fantastically,” Robertson reports. “All of the signs are it’s going to be a tremendous facility for science.”

AI Powering New Discoveries

Robertson — who leads the computational astrophysics group at UC Santa Cruz — is among a new generation of scientists across a growing array of disciplines using AI to quickly classify the vast quantities of data — often more than can be sifted in a human lifetime — streaming in from the latest generation of scientific instruments.

“What’s great about AI and machine learning is that you can train a model to actually make those decisions for you in a way that is less hands-on and more based on a set of metrics that you define,” Robertson said.

Simulated image of a portion of the JADES galaxy survey, part of the preparations for galaxy surveys using JWST that UCSC astronomer Brant Robertson and his team have been working on for years. (Image credit: JADES Collaboration)

Working with Ryan Hausen, a Ph.D. student in UC Santa Cruz’s computer science department, Robertson helped create Morpheus, a deep learning framework that classifies astronomical objects, such as galaxies, pixel by pixel from the raw data streaming out of telescopes.
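
Morpheus’s actual architecture is detailed in the team’s paper; as a purely conceptual sketch, pixel-level classification of telescope images might look like this in PyTorch:

```python
import torch
import torch.nn as nn

class PixelwiseClassifier(nn.Module):
    """Conceptual sketch of per-pixel morphological classification.

    Not Morpheus itself: this only illustrates the idea that every
    pixel of a telescope image gets a score for each class, such as
    spheroid, disk, irregular, point source or background.
    """

    def __init__(self, n_classes: int = 5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, n_classes, kernel_size=1),  # per-pixel scores
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)  # shape: (batch, n_classes, H, W)

model = PixelwiseClassifier()
image = torch.randn(1, 1, 256, 256)            # single-band image cutout
per_pixel_labels = model(image).argmax(dim=1)  # one class per pixel
```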

It quickly became a key tool for classifying images from the Hubble Space Telescope. Since then, the team working on Morpheus has grown considerably, to roughly a half-dozen people at UC Santa Cruz.

Researchers are able to use NVIDIA GPUs to accelerate Morpheus across a variety of platforms — from an NVIDIA DGX Station desktop AI system, to a small computing cluster equipped with several dozen NVIDIA V100 Tensor Core GPUs, to sophisticated simulation runs using thousands of GPUs on the Summit supercomputer at Oak Ridge National Laboratory.

A Trio of High-Profile Projects

Now, with the first science data from the JWST due for release July 12, much more is coming.

“We’ll be applying that same framework to all of the major extragalactic JWST surveys that will be conducted in the first year,” Robertson said.

Robertson is among a team of nearly 50 researchers who will be mapping the earliest structure of the universe through the COSMOS-Webb program, the largest general observer program selected for JWST’s first year.

Simulations by UCSC researchers showed how JWST can be used to map the distribution of galaxies in the early universe. The web-like structure in the background of this image is dark matter, and the yellow dots are galaxies that should be detected in the survey. (Image credit: Nicole Drakos)

Over the course of more than 200 hours, the COSMOS-Webb program will survey half a million galaxies with multiband, high-resolution, near-infrared imaging and an unprecedented 32,000 galaxies in mid-infrared.

“The COSMOS-Webb project is the largest contiguous area survey that will be executed with JWST for the foreseeable future,” Robertson said.

Robertson also serves on the steering committee for the JWST Advanced Deep Extragalactic Survey, or JADES, to produce infrared imaging and spectroscopy of unprecedented depth. Robertson and his team will put Morpheus to work classifying the survey’s findings.

Robertson and his team are also involved with another survey, dubbed PRIMER, to bring AI and machine learning classification capabilities to the effort.

From Studying the Stars to Studying Ourselves

All these efforts promise to help humanity survey — and understand — far more of our universe than ever before. But perhaps the most surprising application Robertson has found for Morpheus is here at home.

“We’ve actually trained Morpheus to go back into satellite data and automatically count up how much sea ice is present in the North Atlantic over time,” Robertson said, adding it could help scientists better understand and model climate change.

As a result, a tool developed to help us better understand the history of our universe may soon help us better predict the future of our own small place in it.

Featured image credit: NASA


Festo Develops With Isaac Sim to Drive Its Industrial Automation

Dionysios Satikidis was playing FIFA 19 when he realized the simulated soccer game’s realism offered a glimpse into the future for training robots.

An expert in AI and autonomous systems at Festo, a German industrial control and automation company, he believed the worlds of gaming and robotics would intersect.

“I’ve always been passionate about technology and gaming, and for me and my close colleagues it was clear that someday we will need the gaming tools to create autonomous robots,” said Satikidis, who is based in Esslingen, Germany.

It was a view shared in 2019 by teammate Jan Seyler, head of advanced control and analytics at Festo, and by Dimitrios Lagamtzis, who worked with Festo at the time.

Satikidis and his colleagues had begun keeping close tabs on NVIDIA and grew increasingly curious about Isaac Sim, a robotics simulation application and synthetic data generation tool built on NVIDIA Omniverse, the 3D design and simulation platform.

Finally, watching from the sidelines of the field wasn’t enough.

“I set up a call with NVIDIA, and when Dieter Fox, senior director of robotics research at NVIDIA, came on the call, I just asked if they were willing to work with us,” he said.

And that’s when it really started.

Tackling the Sim-to-Real Challenge

Today, Satikidis and a small team at Festo are developing AI for robotics automation. A player in hardware and pneumatics used in robotics, Festo is moving into AI-driven simulation, aiming at future Festo products.

Festo uses Isaac Sim to develop skills for its collaborative robots, or cobots. That requires building an awareness of their environments, human partners and tasks.

The lab is focused on narrowing the sim-to-real gap for a robotic arm, developing simulation that improves perception for real robots.

For building perception, its AI models are trained on synthetic data generated by Omniverse Replicator.
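
Replicator is scripted in Python inside Omniverse. A minimal domain-randomization sketch in the style of Replicator’s published tutorials, with hypothetical scene contents, looks something like this:

```python
import omni.replicator.core as rep

with rep.new_layer():
    # A stand-in part for the cobot to perceive; semantics label its class.
    part = rep.create.cube(semantics=[("class", "part")])
    camera = rep.create.camera(position=(0, 0, 200))
    render_product = rep.create.render_product(camera, (1024, 1024))

    # Randomize the part's pose every frame to diversify training data.
    with rep.trigger.on_frame(num_frames=500):
        with part:
            rep.modify.pose(
                position=rep.distribution.uniform((-50, -50, 0), (50, 50, 0)),
                rotation=rep.distribution.uniform((0, 0, 0), (0, 0, 360)),
            )

    # Write RGB frames plus 2D bounding-box labels for perception training.
    writer = rep.WriterRegistry.get("BasicWriter")
    writer.initialize(output_dir="_out", rgb=True, bounding_box_2d_tight=True)
    writer.attach([render_product])
```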

“Festo is working on its own cobots, which they plan to ship in 2023 in Europe,” said Satikidis.

Applying Cortex for Automation 

Festo uses Isaac Cortex, a tool in Isaac Sim, to simplify programming for cobot skills. Cortex is a framework for coordinating the Isaac tools into a cohesive robotic system to control virtual robots in Omniverse and physical robots in the real world.

“Our goal is to make programming task-aware robots as easy as programming gaming AIs,” said Nathan Ratliff, director of systems software at NVIDIA, in a recent GTC presentation.

Isaac Sim is a simulation suite that provides a diverse set of tools for robotics simulation. It enables sensor simulation, synthetic data generation, world representation, robot modeling and other capabilities.

The Omniverse platform and its Isaac Sim tools have been a game changer for Festo.

“This is incredible because you can manifest a video game to a real robot,” said Satikidis.

To learn more, check out the GTC session Isaac Cortex: A Decision Framework for Virtual and Physical Robots.
