Marbles RTX Playable Sample Now Available in NVIDIA Omniverse

Here’s a chance to become a marvel at marbles: the Marbles RTX playable sample is now available from the NVIDIA Omniverse launcher.

Marbles RTX is a physics-based mini-game level where a player controls a marble around a scene full of obstacles. The sample, which already has over 8,000 downloads, displays real-time physics with dynamic lighting and stunning, physically based materials.

The technology demo showcases NVIDIA Omniverse’s powerful suite of graphics, AI and simulation technologies. GeForce RTX gaming and NVIDIA RTX enthusiasts can download Marbles RTX and experience Omniverse’s advanced capabilities in real-time ray- and path-traced rendering, Deep Learning Super Sampling (DLSS) and complex physics simulation.

First previewed at GTC 2020, the Marbles RTX tech demo simulates the dynamic world in real time, without any precomputation or baking. It highlights NVIDIA’s advanced rendering and physics with exceptionally high-quality 3D content created from scratch.

The final Marbles RTX tech demo completed in Omniverse resulted in over 500GB of texture data, 165 unique assets that were modeled and textured by hand, more than 5,000 meshes and about 100 million polygons.

During the GeForce RTX 30 Series launch event in September, NVIDIA unveiled a more challenging take on the demo with the release of Marbles at Night RTX. This presented a night scene that contained hundreds of dynamic, animated lights. Based on NVIDIA Research, Marbles at Night RTX showcased how the power and beauty of RTX-enabled real-time ray tracing allows artists to render dynamic direct lighting and shadows from millions of area lights in real time.

The combination of physically based MDL materials and real-time, reference path tracing in Omniverse brings high-quality details to the Marbles scene, enabling players to feel like they’re looking at real-world objects. The Omniverse RTX Renderer calculates reflections, refraction and global illumination accurately while the denoiser easily manages all the complex geometry across the entire scene.

NVIDIA PhysX 5 and Flow simulate the interaction of rigid-body objects and fluids in the scene in real time, and NVIDIA DLSS enhances the details in the image with powerful AI, allowing users to focus GPU resources on accuracy and fidelity. All these elements combined provide a unique look and feel in CGI that users typically can’t get from real-time games.

At GTC 2021, the artists behind Marbles RTX hosted an exclusive deep dive session detailing the creative and development process. Learn more about the Making of Marbles by watching the GTC session on demand, which is available now and free to access.

Download NVIDIA Omniverse and try the Marbles RTX playable sample today.

For additional resources, view the latest tutorials on Omniverse, check out the forums for support and join the Omniverse Discord server to chat with the community.

Marbles RTX Art Team

  • Creative Director: Gavriil Klimov
  • Lead 3D Artist: Gregor Kopka
  • Senior 3D Artist: Andrej Stefancik
  • Lead Character Artist: Alessandro Baldasseroni
  • Lead Environment Artist: Jacob Norris
  • Senior Lighting Artist: Artur Szymczak
  • Technical Artist: Chase Telegin
  • Senior 3D Artist: Ilya Shelementsev
  • Lead VFX Artist: Fred Hooper


The Roaring 20+: GFN Thursday Game Releases Include Biomutant, Maneater, Warhammer Age of Sigmar: Storm Ground and More

GFN Thursday comes roaring in with 22 games and support for three DLCs joining the GeForce NOW library this week.

Among the 22 new releases are five day-and-date game launches: Biomutant, Maneater, King of Seas, Imagine Earth and Warhammer Age of Sigmar: Storm Ground.

DLC, Without the Download

GeForce NOW keeps your favorite games automatically up to date, so you never have to wait on game updates and patches. Simply log in, click PLAY and enjoy an optimal cloud gaming experience.

This includes supporting the latest expansions and other downloadable content — without any local downloads.

Three great games are getting new DLC, and they’re streaming on GeForce NOW.

Hunt: Showdown - The Committed on GeForce NOW
Hunt: Showdown’s newest DLC adds new hunter Henry Monroe to the mix. He doesn’t look stable to us.

Hunt: Showdown — The Committed DLC contains one Legendary Hunter (Monroe), a Legendary knife (Pane) and a Legendary Romero 77 (Lock and Key). It’s available on Steam, so members can start hunting now.

Isle of Siptah, the massive expansion to the open world survival game Conan Exiles, is exiting early access and releasing on Steam today. It features a vast new island to explore, huge and vile new creatures to slay, new building sets and a host of new features. Gamers have 40 new NPC camps and points of interest to explore, three new factions of NPCs, new ways of acquiring thralls and much more.

Announced last month, Iron Harvest – Operation Eagle, the new expansion to the critically acclaimed world of Iron Harvest set in the alternate reality of 1920, is available on Steam and streaming with GeForce NOW. Guide the new faction through seven new single-player missions, while learning how to use the game’s new Aircraft units across all of the game’s playable factions, including Polania, Saxony and Rusviet.

Newest Additions of the Week

GFN Thursday wouldn’t be complete without new games. The library evolved this week, but didn’t chew you up, with five day-and-date releases, including the launch of Biomutant, from Experiment 101 and THQ Nordic.

Biomutant is now available on GeForce NOW
A gorgeous open world to explore as a weapon-wielding rodent? That’s our perfect weekend.

Biomutant (Steam)

Explore a strange new world as an ever-evolving, weapon-wielding, martial arts master anthropomorphic rodent in this featured game of the week! For more information, read here.

Including Biomutant, members can expect a total of 22 games this week:

May Games Update

A few games that we planned to release in May didn’t quite make it this month. Some were delayed by technical issues; others are still on the way. Look for updates on the titles below in the weeks ahead.

  • Beyond Good & Evil (Steam)
  • Child of Light (Russian version only, Ubisoft Connect)
  • Hearts of Iron III (Steam)
  • King’s Bounty: Dark Side (Steam)
  • Sabotaj (Steam)
  • Super Mecha Champions (Steam)
  • Thea: The Awakening (Steam)
  • Tomb Raider Legend (Steam)

What are you going to play? Let us know on Twitter or in the comments below.


First-Hand Experience: Deep Learning Lets Amputee Control Prosthetic Hand, Video Games

Path-breaking work that translates an amputee’s thoughts into finger motions, and even commands in video games, holds open the possibility of humans controlling just about anything digital with their minds.

Using GPUs, a group of researchers trained an AI neural decoder able to run on a compact, power-efficient NVIDIA Jetson Nano system on module (SOM) to translate 46-year-old Shawn Findley’s thoughts into individual finger motions.

And if that breakthrough weren’t enough, the team then plugged Findley into a PC running Far Cry 5 and Raiden IV, where he had his game avatar move, jump — even fly a virtual helicopter — using his mind.

It’s a demonstration that not only promises to give amputees more natural and responsive control over their prosthetics, but could one day give users almost superhuman capabilities.

The effort is detailed in a draft paper, or pre-print, titled “A Portable, Self-Contained Neuroprosthetic Hand with Deep Learning-Based Finger Control.” It details an extraordinary cross-disciplinary collaboration behind a system that, in effect, allows humans to control just about anything digital with thoughts.

“The idea is intuitive to video gamers,” said Anh Tuan Nguyen, the paper’s lead author and now a postdoctoral researcher at the University of Minnesota advised by Associate Professor Zhi Yang.

“Instead of mapping our system to a virtual hand, we just mapped it to keystrokes — and five minutes later, we’re playing a video game,” said Nguyen, an avid gamer, who holds a bachelor’s degree in electrical engineering and Ph.D. in biomedical engineering.

Shawn Findley, who lost his hand following an accident 17 years ago, was able to use an AI decoder to translate his thoughts in real time into actions.

In short, Findley — a pastor in East Texas who lost his hand following an accident in a machine shop 17 years ago — was able to use an AI decoder trained on an NVIDIA TITAN X GPU and deployed on the NVIDIA Jetson to translate his thoughts in real time into actions inside a virtual environment running on, of course, yet another NVIDIA GPU, Nguyen explained.

Bionic Plan

Findley was one of a handful of patients who participated in the clinical trial supported by the U.S. Defense Advanced Research Projects Agency’s HAPTIX program.

The human physiology study is led by Edward Keefer, a neuroscientist and electrophysiologist who leads Texas-based Nerves Incorporated, and Dr. Jonathan Cheng at the University of Texas Southwestern Medical Center.

In collaboration with Yang’s and Associate Professor Qi Zhao’s labs at the University of Minnesota, the team collected large-scale human nerve data and is one of the first to implement deep learning neural decoders in a portable platform for clinical neuroprosthetic applications.

That effort aims to improve the lives of millions of amputees around the world. More than a million people lose a limb to amputation every year. That’s one every 30 seconds.

Prosthetic limbs have advanced fast over the past few decades — becoming stronger, lighter and more comfortable. But neural decoders, which decode movement intent from nerve data, promise a dramatic leap forward.

With just a few hours of training, the system allowed Findley to swiftly, accurately and intuitively move the fingers on a portable prosthetic hand.

“It’s just like if I want to reach out and pick up something, I just reach out and pick up something,” reported Findley.

The key, it turns out, is the same kind of GPU-accelerated deep learning that’s now widely used for everything from online shopping to speech and voice recognition.

Teamwork

For amputees, even though their hand is long gone, parts of the system that controlled the missing hand remain.

Every time the amputee imagines grabbing, say, a cup of coffee with a lost hand, those thoughts are still accessible in the peripheral nerves once connected to the amputated body part.

To capture those thoughts, Dr. Cheng at UTSW surgically inserted arrays of microscopic electrodes into the residual median and ulnar nerves of the amputee forearm.

These electrodes, with carbon nanotube contacts, are designed by Keefer to detect the electrical signals from the peripheral nerve.

Dr. Yang’s lab designed a high-precision neural chip to acquire the tiny signals recorded by the electrodes from the residual nerves of the amputees.

Dr. Zhao’s lab then developed machine learning algorithms that decode neural signals into hand controls.

GPU-Accelerated Neural Network

Here’s where deep learning comes in.

Data collected from the patient’s nerve signals — and translated into digital signals — are then used to train a neural network that decodes the signals into specific commands for the prosthesis.

It’s a process that takes as little as two hours using a system equipped with an NVIDIA TITAN X or GeForce GTX 1080 Ti GPU. One day, users may even be able to train such systems at home, using cloud-based GPUs.

These GPUs accelerate an AI neural decoder based on a recurrent neural network, implemented in the PyTorch deep learning framework.
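
The post doesn’t spell out the decoder’s exact architecture or data dimensions, but a minimal sketch of what such a recurrent decoder could look like in PyTorch is below; the channel count, window length and five per-finger outputs are illustrative assumptions, not values from the study.

```python
import torch
import torch.nn as nn

class FingerDecoder(nn.Module):
    """Recurrent decoder: windows of nerve signals in, per-finger intent out.

    The 64 channels, 100-step windows and 5 finger outputs are illustrative
    assumptions for this sketch, not values taken from the study.
    """
    def __init__(self, n_channels=64, hidden=128, n_fingers=5):
        super().__init__()
        self.rnn = nn.GRU(n_channels, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, n_fingers)

    def forward(self, x):
        # x: (batch, time_steps, n_channels) digitized peripheral-nerve data
        out, _ = self.rnn(x)
        return self.head(out[:, -1])  # decode intent from the last time step

device = "cuda" if torch.cuda.is_available() else "cpu"
model = FingerDecoder().to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Synthetic tensors stand in for recorded nerve signals and finger positions.
signals = torch.randn(256, 100, 64, device=device)
targets = torch.rand(256, 5, device=device)

for epoch in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(signals), targets)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.4f}")
```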

Use of such neural networks has exploded over the past decade, giving computer scientists the ability to train systems for a vast array of tasks, from image and speech recognition to autonomous vehicles, that are too complex to tackle with traditional hand-coding.

The challenge is finding hardware powerful enough to swiftly run this neural decoder, a process known as inference, and power-efficient enough to be fully portable.

Portable and powerful: Jetson Nano’s CUDA cores provide full support for popular deep learning libraries such as TensorFlow, PyTorch and Caffe.

So the team turned to the Jetson Nano, whose CUDA cores provide full support for popular deep learning libraries such as TensorFlow, PyTorch and Caffe.

“This offers the most appropriate tradeoff among power and performance for our neural decoder implementation,” Nguyen explained.

Deploying this trained neural network on the powerful, credit-card-sized Jetson Nano resulted in a portable, self-contained neuroprosthetic hand that gives users real-time control of individual finger movements.
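
The post also doesn’t detail how the trained network is packaged for the Jetson Nano. One common pattern, sketched here purely as an assumption rather than the team’s actual pipeline, is to export the decoder with TorchScript and run a small low-latency loop on the device; the file name and the nerve-signal reader are placeholders.

```python
import time
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

# On the training machine, freeze the trained decoder for deployment:
#   torch.jit.script(model.eval().cpu()).save("finger_decoder.pt")
# On the Jetson Nano, load it and decode each incoming signal window.
decoder = torch.jit.load("finger_decoder.pt", map_location=device).eval()

def read_nerve_window():
    """Placeholder for the neural-chip driver; returns one buffered window."""
    return torch.randn(1, 100, 64, device=device)

with torch.no_grad():
    window = read_nerve_window()
    start = time.perf_counter()
    finger_intent = decoder(window).squeeze(0)      # one value per finger
    latency_ms = (time.perf_counter() - start) * 1e3
    # In a real system this would drive the prosthetic hand's motor controller.
    print(f"decoded in {latency_ms:.1f} ms:", finger_intent.tolist())
```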

Using it, Findley demonstrated both high-accuracy and low-latency control of individual finger movements in various laboratory and real-world environments.

The next step is a wireless and implantable system, so users can slip on a portable prosthetic device when needed, without any wires protruding from their body.

Nguyen sees robust, portable AI systems — able to understand and react to the human body — augmenting a host of medical devices coming in the near future.

The technology developed by the team to create AI-enabled neural interfaces is being licensed by Fasikl Incorporated, a startup sprung from Yang’s lab.

The goal is to pioneer neuromodulation systems for use by amputees and patients with neurological diseases, as well as able-bodied individuals who want to control robots or devices by thinking about it.

“When we get the system approved for nonmedical applications, I intend to be the first person to have it implanted,” Keefer said. “The devices you could control simply by thinking: drones, your keyboard, remote manipulators — it’s the next step in evolution.”

 


What Is Explainable AI?

Banks use AI to determine whether to extend credit, and how much, to customers. Radiology departments deploy AI to help distinguish between healthy tissue and tumors. And HR teams employ it to work out which of hundreds of resumes should be sent on to recruiters.

These are just a few examples of how AI is being adopted across industries. And with so much at stake, businesses and governments adopting AI and machine learning are increasingly being pressed to lift the veil on how their AI models make decisions.

Charles Elkan, a managing director at Goldman Sachs, offers a sharp analogy for much of the current state of AI, in which organizations debate its trustworthiness and how to overcome objections to AI systems:

We don’t understand exactly how a bomb-sniffing dog does its job, but we place a lot of trust in the decisions it makes.

To reach a better understanding of how AI models come to their decisions, organizations are turning to explainable AI.

What Is Explainable AI?

Explainable AI, or XAI, is a set of tools and techniques used by organizations to help people better understand why a model makes certain decisions and how it works. XAI is: 

  • A set of best practices: It takes advantage of some of the best procedures and rules that data scientists have been using for years to help others understand how a model is trained. Knowing how, and on what data, a model was trained helps us understand when it does and doesn’t make sense to use that model. It also shines a light on what sources of bias the model might have been exposed to.
  • A set of design principles: Researchers are increasingly focused on simplifying the building of AI systems to make them inherently easier to understand.
  • A set of tools: As the systems get easier to understand, the training models can be further refined by incorporating those learnings into them — and by offering those learnings to others for incorporation into their models.

How Does Explainable AI Work?

While there’s still a great deal of debate over the standardization of XAI processes, a few key points resonate across industries implementing it:

  • Who do we have to explain the model to?
  • How accurate or precise an explanation do we need?
  • Do we need to explain the overall model or a particular decision?
Source: DARPA

Data scientists are focusing on all these questions, but explainability boils down to: What are we trying to explain?

Explaining the pedigree of the model:

  • How was the model trained?
  • What data was used?
  • How was the impact of any bias in the training data measured and mitigated?

These questions are the data science equivalent of explaining what school your surgeon went to — along with who their teachers were, what they studied and what grades they got. Getting this right is more about process and leaving a paper trail than it is about pure AI, but it’s critical to establishing trust in a model.

While explaining a model’s pedigree sounds fairly easy, it’s hard in practice, as many tools currently don’t support strong information-gathering. NVIDIA provides such information about its pretrained models. These are shared on the NGC catalog, a hub of GPU-optimized AI and high performance computing SDKs and models that quickly help businesses build their applications.

Explaining the overall model:

Sometimes called model interpretability, this is an active area of research. Most model explanations fall into one of two camps:

In a technique sometimes called “proxy modeling,” simpler, more easily comprehended models like decision trees can be used to approximately describe the more detailed AI model. These explanations give a “sense” of the model overall, but the tradeoff between approximation and simplicity of the proxy model is still more art than science.

Proxy modeling is always an approximation and, even if applied well, it can create opportunities for real-life decisions to be very different from what’s expected from the proxy models.
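
To make the idea concrete, here is a minimal proxy-modeling sketch using scikit-learn; the random forest standing in for the more detailed AI model and the synthetic data are assumptions made purely for illustration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Stand-in for a complex, hard-to-interpret model.
X, y = make_classification(n_samples=5000, n_features=8, random_state=0)
complex_model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Proxy model: a shallow decision tree trained to mimic the complex model's
# predictions rather than the original labels.
proxy = DecisionTreeClassifier(max_depth=3, random_state=0)
proxy.fit(X, complex_model.predict(X))

# How faithful is the proxy? This is the approximation/simplicity tradeoff.
fidelity = np.mean(proxy.predict(X) == complex_model.predict(X))
print(f"proxy agrees with the complex model on {fidelity:.1%} of samples")
print(export_text(proxy, feature_names=[f"feature_{i}" for i in range(8)]))
```

The printed fidelity score is exactly the tradeoff described above: a simpler, readable tree that only approximately matches the model it explains.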

The second approach is “design for interpretability.” This limits the design and training options of the AI network in ways that attempt to assemble the overall network out of smaller parts that we force to have simpler behavior. This can lead to models that are still powerful, but with behavior that’s much easier to explain.

This isn’t as easy as it sounds, however, and it sacrifices some level of efficiency and accuracy by removing components and structures from the data scientist’s toolbox. This approach may also require significantly more computational power.

Why XAI Explains Individual Decisions Best

The best understood area of XAI is individual decision-making: why a person didn’t get approved for a loan, for instance.

Techniques with names like LIME and SHAP offer very literal mathematical answers to this question — and the results of that math can be presented to data scientists, managers, regulators and consumers. For some data — images, audio and text — similar results can be visualized through the use of “attention” in the models — forcing the model itself to show its work.

In the case of the Shapley values used in SHAP, there are some mathematical proofs of the underlying techniques that are particularly attractive based on game theory work done in the 1950s. There is active research in using these explanations of individual decisions to explain the model as a whole, mostly focusing on clustering and forcing various smoothness constraints on the underlying math.
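
For readers who want to see what an individual-decision explanation looks like in code, here’s a minimal sketch using the open-source shap package; the gradient-boosting model and the loan-style feature names are illustrative assumptions, not a real underwriting system.

```python
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

# Synthetic stand-in for loan-application data; feature names are illustrative.
X, y = make_classification(n_samples=2000, n_features=6, random_state=0)
feature_names = ["income", "debt_ratio", "credit_history",
                 "loan_amount", "employment_years", "age"]
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# TreeExplainer computes (approximate) Shapley values for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:1])   # explain a single decision

for name, value in zip(feature_names, np.ravel(shap_values)):
    # Positive values push the prediction toward class 1 (approval here).
    direction = "toward approval" if value > 0 else "toward denial"
    print(f"{name}: {value:+.3f} ({direction})")
```

Each value shows how far a feature pushed this one prediction up or down from the model’s baseline, which is what makes the output straightforward to present to a regulator or a declined applicant.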

The drawback to these techniques is that they’re somewhat computationally expensive. In addition, without significant effort during the training of the model, the results can be very sensitive to the input data values. Some also argue that because data scientists can only calculate approximate Shapley values, the attractive and provable features of these numbers are also only approximate — sharply reducing their value.

While healthy debate remains, it’s clear that by maintaining a proper model pedigree, adopting a model explainability method that provides clarity to senior leadership on the risks involved in the model, and monitoring actual outcomes with individual explanations, AI models can be built with clearly understood behaviors.

For a closer look at examples of XAI work, check out the talks presented by Wells Fargo and Scotiabank at NVIDIA GTC21.


How Diversity Drives Innovation: Catch Up on Inclusion in AI with NVIDIA On-Demand

NVIDIA’s GPU Technology Conference is a hotbed for sharing groundbreaking innovations — making it the perfect forum for developers, students and professionals from underrepresented communities to discuss the challenges and opportunities surrounding AI.

Last month’s GTC virtually brought together tens of thousands of attendees from around the world, with more than 20,000 developers from emerging markets, hundreds of women speakers and a variety of session topics focused on diversity and inclusion in AI.

The event saw a 6x increase in female attendees from last fall’s event, a 6x jump in Black attendees and a 5x boost in Hispanic and Latino attendees. Dozens signed up for hands-on training from the NVIDIA Deep Learning Institute and joined networking sessions hosted by NVIDIA community resource groups in collaboration with organizations like Black in AI and LatinX in AI.

More than 1,500 sessions from GTC 2021 are now available for free replay on NVIDIA On-Demand — including panel discussions on AI literacy and efforts to grow the participation of underrepresented groups in science and engineering.

Advocating for AI Literacy Among Youth

In a session called “Are You Smarter Than a Fifth Grader Who Knows AI?,” STEM advocates Justin Shaifer and Maynard Okereke (known as Mr. Fascinate and the Hip Hop M.D., respectively) led a conversation about initiatives to help young people understand AI.

Given the ubiquity of AI, being surrounded by it “is essentially just how they live,” said Jim Gibbs, CEO of the Pittsburgh-based startup Meter Feeder. “They just don’t know any different.”

But school curriculums often don’t teach young people how AI technologies work, how they’re developed or about AI ethics. So it’s important to help the next generation of developers prepare “to take advantage of all the new opportunities that there are going to be for people who are familiar with machine learning and artificial intelligence,” he said.

Panelist Lisa Abel-Palmieri, CEO of the Boys & Girls Clubs of Western Pennsylvania, described how her organization’s STEM instructors audited graduate-level AI classes at Carnegie Mellon University to inform a K-12 curriculum for children from historically marginalized communities. NVIDIA recently announced a three-year AI education partnership with the organization to create an AI Pathways Toolkit that Boys & Girls Clubs nationwide can deliver to students, particularly those from underserved and underrepresented communities.

And Babak Mostaghimi, assistant superintendent of Georgia’s Gwinnett County Public Schools, shared how his team helps students realize how AI is relevant to their daily experiences.

“We started really getting kids to understand that AI is already part of your everyday life,” he said. “And when kids realize that, it’s like, wait a minute, let me start asking questions like: Why does the algorithm behind something cause a certain video to pop up and not others?”

Watch the full session replay on NVIDIA On-Demand.

Diverse Participation Brings Unique Perspectives

Another panel, “Diversity Driving AI Innovation,” was led by Brennon Marcano, CEO of the National GEM Consortium, a nonprofit focused on diversifying representation in science and engineering.

Researchers and scientists from Apple, Amazon Web Services and the University of Utah shared their experiences working in AI, and the value that the perspectives of underrepresented groups can provide in the field.

“Your system on the outside is only as good as the data going in on the side,” said Marcano. “So if the data is homogeneous and not diverse, then the output suffers from that.”

But diversity of datasets isn’t the only problem, said Nashlie Sephus, a tech evangelist at Amazon Web Services AI who focuses on fairness and identifying biases. Another essential consideration is making sure developer teams are diverse.

“Just by having someone on the team with a diverse experience, a diverse perspective and background — it goes a long way. Teams and companies are now starting to realize that,” she said.

The panel described how developers can mitigate algorithmic bias, improve diversity on their teams and find strategies to fairly compensate focus groups who provide feedback on products.

“Whenever you are trying to create something in software that will face the world, the only way you can be precisely coupled to that world is to invite the world into that process,” said Rogelio Cardona-Rivera, assistant professor at the University of Utah. “There’s no way you will be able to be as precise if you leave diversity off the table.”

Watch the discussion here.

Learn more about diversity and inclusion at GTC, and watch additional session replays on NVIDIA On-Demand. Find the GTC keynote address by NVIDIA CEO Jensen Huang here.


Fighting Fire with Insights: CAPE Analytics Uses Computer Vision to Put Geospatial Data and Risk Information in Hands of Property Insurance Companies

Every day, vast amounts of geospatial imagery are being collected, and yet, until recently, one of the biggest potential users of that trove — property insurers — had made surprisingly little use of it.

Now, CAPE Analytics, a computer vision startup and NVIDIA Inception member, seeks to turn that trove of geospatial imagery into better underwriting decisions, and is applying these insights to mitigate wildfire disasters.

Traditionally, the insurance industry could only rely on historic data for broad swaths of land, combined with an in-person visit. CAPE Analytics can use AI to produce detailed data on the vegetation density, roof material and proximity to surrounding structures. This provides a better way to calculate risk, as well as an avenue to help homeowners take actions to cut it.

“For the first time, insurers can quantify defensible space, the removal of flammable material such as vegetation from around a home, with granular analytics,” said Kevin van Leer, director of customer success at CAPE Analytics. “CAPE allows insurance carriers to identify the vulnerability of a specific home and make recommendations to the homeowner. For example, our recent study shows that cutting back vegetation in the 10 feet surrounding a home is the most impactful action a homeowner can take to reduce their wildfire risk. It’s also much easier to achieve in comparison to the frequently recommended 30-to-100-foot buffer.”

As fire seasons grow longer and deadlier each year, with wildfires driven by hotter, drier and faster winds, the risk area widens into newer areas not found on older maps. This makes up-to-date insights especially crucial.

“What’s unique about this dataset is that it’s very recent, and it’s high resolution,” said Kavan Farzaneh, head of marketing at the Mountain View, Calif., based company. “Using AI, we can analyze it at scale.”

Insights from such analysis extend beyond weather risk to “blue sky,” or day-to-day risk, as well. Whether that means determining the condition of a roof, factoring in new solar panels or detecting the presence of a trampoline, CAPE’s software seeks to optimize the underwriting process by helping insurers make more informed decisions about what policies to write.

And given that the six-year-old company already boasts more than 40 insurance industry customers and is backed by investments from several large insurance carriers, including the Hartford, State Farm and CSAA, CAPE Analytics appears to be on to something.

Creating More Accurate Records

For some time, insurance companies have used aerial imagery for claims verification, such as reviewing storm damage. But CAPE Analytics is converting that imagery into structured data that underwriters can use to streamline their decision-making process. The company is essentially creating more up-to-date property records, which traditionally come from tax assessor offices and other public records sources.

“We zeroed in on property underwriting because there was a void in accuracy, and data tends to be old,” said Busy Cummings, chief revenue officer at CAPE Analytics. “By using AI to tap into this objective ‘source of truth,’ we can improve the accuracy of existing data sources.”

And that means more efficiency for underwriters, who can avoid unnecessary inspections altogether thanks to having access to more current and complete data.

CAPE Analytics obtains its datasets from multiple imagery partners. Human labelers tag some of the data, and the company has trained algorithms that can then identify elements of an aerial image, such as whether a roof is flat or has gables, whether additional structures have been added, or if trees and brush are overgrowing the structure.

The company started training its models on several NVIDIA GPU-powered servers. It has since transitioned the bulk of its training activities to Amazon Web Services P3 instances running NVIDIA V100 Tensor Core GPUs.

Inference runs on NVIDIA Triton Inference Server. CAPE Analytics relies on multiple Triton instances to run its models, with a load balancer distributing inference requests, allowing it to scale horizontally to meet shifting client demand. The company’s infrastructure makes it possible to run live inference on imagery, converting geospatial data into actionable structured data in two seconds.
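
The post doesn’t describe the client side of that pipeline, but querying a model hosted on Triton generally looks like the sketch below, which uses the tritonclient Python package; the model name, tensor names and image shape are placeholders rather than CAPE Analytics’ actual configuration.

```python
import numpy as np
import tritonclient.http as httpclient

# Connect to a Triton Inference Server instance (URL is a placeholder).
client = httpclient.InferenceServerClient(url="localhost:8000")

# A pre-processed aerial image tile; shape and dtype are illustrative only.
image = np.random.rand(1, 3, 512, 512).astype(np.float32)

inputs = [httpclient.InferInput("input__0", list(image.shape), "FP32")]
inputs[0].set_data_from_numpy(image)
outputs = [httpclient.InferRequestedOutput("output__0")]

# "roof_condition" is a hypothetical model name for this sketch.
result = client.infer(model_name="roof_condition", inputs=inputs, outputs=outputs)
scores = result.as_numpy("output__0")
print("per-class scores:", scores)
```

A load balancer in front of several such Triton endpoints is what lets this kind of request volume scale horizontally with demand.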

In Pursuit of Scale

Thanks to its membership in NVIDIA Inception, the company has recently been experimenting with the NVIDIA DGX A100 AI system to train larger networks on larger datasets. Jason Erickson, director of platform engineering at CAPE Analytics, said the experience with the DGX A100 has shown “what we could potentially achieve if we had unlimited resources.”

“We’ve been very fortunate to be a part of NVIDIA’s Inception program since 2017, which has afforded us opportunities to test new NVIDIA offerings, including data science GPU and DGX A100 systems, while engaging with the wider NVIDIA community,” said Farzaneh.

CAPE Analytics has every motivation to pursue more scale. Cummings said it has spent the past year focused on expanding from insurance underwriting into the real estate and mortgage markets, where there is demand to integrate property condition data into the tools that determine home values. The company also just announced it’s powering a new automated valuation model based on geospatial data.

With so many potential markets to explore, CAPE Analytics has to keep pushing the envelope.

“Machine learning is such a fast-moving world. Every day there are new papers and new methods and new models,” said Farzaneh. “We’re just trying to stay on the bleeding edge.”

Learn more about NVIDIA’s work with the financial services industry.

Feature image credit: Paul Hanaoka on Unsplash.


GFN Thursday Plunges Into ‘Phantom Abyss,’ the New Adventure Announced by Devolver Digital

GFN Thursday returns with a brand new adventure, exploring the unknown in Phantom Abyss, announced just moments ago by Devolver Digital and Team WIBY. The game launches on PC this summer, and when it does, it’ll be streaming instantly to GeForce NOW members.

No GFN Thursday would be complete without new games. And this week is chock full of new-new ones. Five PC games launching this week are joining the GeForce NOW library — including Saints Row: The Third Remastered — plus twelve additional titles.

It’s a good week to have your head in the clouds.

Adventure From the Clouds

Devolver and Team WIBY have an exciting spin on dungeon-delving, and they’re bringing it to GeForce NOW members at release.

“Phantom Abyss and GeForce NOW are a perfect match,” said Josh Sanderson, lead programmer at Team WIBY. “We can’t wait for gamers to explore each dungeon, and now those adventurers can stream from the cloud even if their gaming PC isn’t up to par.”

Members who purchase the game on Steam will be able to play Phantom Abyss across nearly all of their devices, at legendary GeForce quality. Away from your rig but looking for a challenge? Stream to your Android mobile device, Chromebook or any other supported device. You can bring the action with you, wherever you are.

Read on to learn more about Devolver’s freshly announced Phantom Abyss.

Phantom Abyss on GeForce NOW
You’ll face perils in each of Phantom Abyss’ tombs.

Temple of Zoom

Phantom Abyss is a massive asynchronous multiplayer game that casts players into procedurally generated temples and tasks them with retrieving the sacred relics hidden within deadly chambers.

You’ll need to dodge hidden traps, leap across chasms and defeat foes as you navigate each labyrinth to claim the relic at the end. Oh, and you only get one chance per temple. Luckily, you’re joined on your quest by the phantoms of adventurers who tried before you, and you can learn from their run to aid your own.

Phantom Abyss on GeForce NOW
To survive, you’ll need to learn from those who came before you.

Explore the perilous halls and colossal rooms of each temple alongside the phantoms of fallen players that came before you. Use their successes and failures to your advantage to progress deeper than they ever could’ve hoped. Watch and learn from the mistakes of up to 50 phantoms, including your Steam friends who have attempted the same temple, and steal their whips as they fall.

Those stolen whips matter, as they carry minor blessings to aid you on your run. But beware: they’re also cursed, and balancing the banes and boons is key to reaching your ultimate prize.

Phantom Abyss on GeForce NOW
The competition is fierce. Succeed where others have failed, and the treasures of each tomb will be yours.

As you progress on each run, you’ll recover keys from chests to unlock deeper, deadlier sections of the temple that house more coveted relics. The more difficult a relic is to obtain, the greater the reward.

And if you succeed, the legendary relic at the bottom of each temple will be yours, sealing the temple and cementing your legacy.

Exclusive Tips & Tricks

The folks at Devolver shared a few tips and tricks to help you start your adventure in Phantom Abyss.

Before each run, you can select between two standard whips and one legendary whip, which can be acquired in exchange for tokens found in previous runs. Select carefully, though: each whip has its own blessings and curses, so it’s important to find the one that suits your play style.

The phantoms of fallen adventurers aren’t just fun to watch meet their demise; they can be a helpful guide on your run! Phantoms can set off traps for you, which can be advantageous but also unexpected, so stay on your toes. If a phantom dies in front of you, you can pick up its whip if you find it more beneficial for your run.

The guardians are relentless, so always keep them in mind the deeper you get into a temple — they tend to cause complete chaos when in the same room as you!

Each temple has different levels. As players move down, they can choose to take a more common relic and secure a lesser success, or pick a different door and venture further into the Caverns, and then the Inferno, for even more treasure and glory.

Remastered and Returning

Saints Row The Third Remastered on GeForce NOW
The remastered version of Volition’s classic returns to GeForce NOW.

Saints Row: The Third Remastered is both joining and returning to GeForce NOW. The game launches on Steam later this week, and we’ll be working to bring it to GeForce NOW after its release. Alongside that launch, the Epic Games Store version of the game will make its triumphant return to GeForce NOW.

The remastered edition includes enhanced graphics plus all DLC from the original version: the three expansion mission packs and 30 additional pieces of DLC.

Get Your Game On

That’s only the beginning. GFN Thursday means more games, and this week’s list includes four more day-and-date releases. Members can look forward to the following this week:

  • Snowrunner (day-and-date release on Steam, May 17)
  • Siege Survival Gloria Victis (day-and-date release on Steam, May 18)
  • Just Die Already (day-and-date release on Steam, May 20)
  • 41 Hours (day-and-date release on Steam, May 21)
  • Saints Row: The Third Remastered (Steam, May 22 and Epic Games Store)
  • Bad North (Steam)
  • Beyond Good & Evil (Ubisoft Connect)
  • Chess Ultra (Steam)
  • Groove Coaster (Steam)
  • Hearts of Iron 2: Complete (Steam)
  • Monster Prom (Steam)
  • OneShot (Steam)
  • Outlast 2 (Steam)
  • Red Wings: Aces of the Sky (Steam)
  • Space Invaders Extreme (Steam)
  • Warlock: Master of the Arcane (Steam)
  • WRC 8 Fia World Rally Championship (Epic Games Store)

Ready to brave the abyss on GeForce NOW this summer? Join the conversation on Twitter or in the comments below.


Get Outta My Streams, Get Into My Car: Aston Martin Designs Immersive Extended Reality Experience for Customers

Legendary car manufacturer Aston Martin is using the latest virtual and mixed reality technologies to drive new experiences for customers and designers.

The company has worked with Lenovo to use VR and AR to deliver a unique experience that allowed customers to explore its first luxury SUV, the Aston Martin DBX, without physically being in dealerships or offices.

With the Lenovo ThinkStation P620 powered by NVIDIA RTX A6000 graphics, Aston Martin is able to serve up an immersive experience of the Aston Martin DBX. The stunning demo consists of over 10 million polygons, enabling users to view incredibly detailed, photorealistic visuals in virtual, augmented and mixed reality — collectively known as extended reality, or XR.

“It’s our partnership with Lenovo workstations — and in particular, ThinkStation P620 — which has enabled us to take this to the next level,” said Pete Freedman, vice president and chief marketing officer of Aston Martin Lagonda. “Our aim has always been to provide our customers with a truly immersive experience, one that feels like it brings them to the center of the automotive product, and we’ve only been able to do this with the NVIDIA RTX A6000.”

NVIDIA RTX Brings the XR Factor

Customers would typically visit Aston Martin dealerships, attend motor shows or tour the company’s facilities in the U.K. to explore the latest car models. A team would walk them through the design and features in person.

But after everyone started working remotely, Aston Martin decided to take a fresh look at what’s truly possible and investigate options to take the experience directly to customers — virtually.

With the help of teams from Lenovo and Varjo, an XR headset maker, the automaker developed the demo that provides an immersive look at the new Aston Martin DBX using VR and XR.

The experience, which is rendered from the NVIDIA RTX-powered ThinkStation P620, allows virtual participants to enter the environment and see a pixel-perfect representation of the Aston Martin DBX. Customers with XR headsets can explore the virtual vehicle from anywhere in the world, and see details such as the stitching and lettering on the steering wheel, leather and chrome accents, and even the reflections within the paint.

The real-time reflections and illumination in the demo were enabled by Varjo’s pass-through mixed reality technology. The Varjo XR-3’s LiDAR with RGB Depth Fusion using NVIDIA’s Optical Flow gives users the perception that the car is in the room, seamlessly blending the real world and virtual car together.

With the NVIDIA RTX A6000, the immersive demo runs smoothly and efficiently, providing users with high-quality graphics and stunning detail.

“As you dial up the detail, you need high-end GPUs. You need large GPU frame buffers to build the most photorealistic experiences, and that’s exactly what the NVIDIA RTX A6000 delivers,” said Mike Leach, worldwide solution portfolio lead at Lenovo.

The NVIDIA RTX A6000 is based on the NVIDIA Ampere GPU architecture and delivers a 48GB frame buffer. This allows teams to create high-fidelity VR and AR experiences with consistent framerates.

Aston Martin will expand its use of VR and XR to enhance internal workflows, as well. With this new experience, the design teams can work in virtual environments and iterate more quickly earlier in the process, instead of creating costly models.

Watch Lenovo’s GTC session to hear more about Aston Martin’s story.

Learn more about NVIDIA RTX and how our latest technology is powering the most immersive environments across industries.


AI Researcher Explains Deep Learning’s Collision Course with Particle Physics

For a particle physicist, the world’s biggest questions — how did the universe originate and what’s beyond it — can only be answered with help from the world’s smallest building blocks.

James Kahn, a consultant with German research platform Helmholtz AI and a collaborator on the global Belle II particle physics experiment, uses AI and the NVIDIA DGX A100 to understand the fundamental rules governing particle decay.

Kahn spoke with NVIDIA AI Podcast host Noah Kravitz about the specifics of how AI is accelerating particle physics.

He also touched on his work at Helmholtz AI. Kahn helps researchers in fields spanning medicine to earth sciences apply AI to the problems they’re solving. His wide-ranging career — from particle physicist to computer scientist — shows how AI accelerates every industry.

Key Points From This Episode:

  • The nature of particle physics research, which requires numerous simulations and constant adjustments, demands massive AI horsepower. Kahn’s team used the DGX A100 to reduce the time it takes to optimize simulations from a week to roughly a day.
  • The majority of Kahn’s work is global — at Helmholtz AI, he collaborates with researchers from Beijing to Tel Aviv, with projects located anywhere from the Southern Ocean to Spain. And at the Belle II experiment, Kahn is one of more than 1,000 researchers from 26 countries.

Tweetables:

“If you’re trying to simulate all the laws of physics, that’s a lot of simulations … that’s where these big, powerful machines come into play.” — James Kahn [6:02]

“AI is seeping into every aspect of research.” — James Kahn [16:37]

You Might Also Like:

Speed of Light: SLAC’s Ryan Coffee Talks Ultrafast Science

Particle physicist Ryan Coffee, senior staff scientist at the SLAC National Accelerator Laboratory, talks about how he is putting deep learning to work.

A Conversation About Go, Sci-Fi, Deep Learning and Computational Chemistry

Olexandr Isayev, an assistant professor at the UNC Eshelman School of Pharmacy at the University of North Carolina at Chapel Hill, explains how deep learning, abstract strategy board game Go, sci-fi and computational chemistry intersect.

How Deep Learning Can Accelerate the Quest for Cheap, Clean Fusion Energy

William Tang, principal research physicist at the Princeton Plasma Physics Laboratory, is one of the world’s foremost experts on how the science of fusion energy and HPC intersect. He talks about how he sees AI enabling the quest to deliver fusion energy.

Tune in to the AI Podcast

Get the AI Podcast through iTunes, Google Podcasts, Google Play, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn. If your favorite isn’t listed here, drop us a note.

Tune in to the Apple Podcast Tune in to the Google Podcast Tune in to the Spotify Podcast

Make the AI Podcast Better

Have a few minutes to spare? Fill out this listener survey. Your answers will help us make a better podcast.


From Gaming to Enterprise AI: Don’t Miss NVIDIA’s Computex 2021 Keynote

NVIDIA will deliver a double-barreled keynote packed with innovations in AI, the cloud, data centers and gaming at Computex 2021 in Taiwan, on June 1.

NVIDIA’s Jeff Fisher, senior vice president of GeForce gaming products, will discuss how NVIDIA is addressing the explosive growth in worldwide gaming. And Manuvir Das, head of enterprise computing at the company, will talk about democratizing AI to put more AI capabilities within reach of more enterprises.

Hosted by the Taiwan External Trade and Development Council, Computex has long been one of the world’s largest enterprise and consumer trade shows. Alongside its partners, NVIDIA has introduced a host of innovations at Computex over the years.

This year’s show will be both live and digital, giving technology enthusiasts around the world an opportunity to watch. You can tune in to the keynote, titled “The Transformational Power of Accelerated Computing, from Gaming to the Enterprise Data Center,” from our event landing page, or from our YouTube channel starting at 1 p.m. Taiwan time on June 1 (10 p.m. Pacific time on May 31).

Besides the keynote, NVIDIA will hold three talks at Computex forums.

Ali Kani, vice president and general manager of automotive at NVIDIA, will talk about “Transforming the Transportation Industry with AI” at the Future Car Forum on June 1, from 11 a.m. to 1 p.m. Taiwan time.

Jerry Chen, NVIDIA’s head of global business development for manufacturing and industrials, will discuss “The Promise of Digital Transformation: How AI-Infused Industrial Systems Are Rising to Meet the Challenges” at the AIoT Forum on June 2, at 11 a.m. Taiwan time.

And Richard Kerris, head of worldwide developer relations and general manager of NVIDIA Omniverse, will deliver a talk on the topic of “The Metaverse Begins: NVIDIA Omniverse and a Future of Shared Worlds,” on June 3 from 3:30 to 4 p.m. Taiwan time.

Key image credit: Arlene Hu, some rights reserved
