What Is Zero Trust?

For all its sophistication, the Internet age has brought on a digital plague of security breaches. The steady drumbeat of data and identity thefts spawned a new movement and a modern mantra that’s even been the subject of a U.S. presidential mandate — zero trust.

So, What Is Zero Trust?

Zero trust is a cybersecurity strategy for verifying every user, device, application and transaction in the belief that no user or process should be trusted.

That definition comes from the NSTAC report, a 56-page document on zero trust compiled in 2021 by the U.S. National Security Telecommunications Advisory Committee, a group that included dozens of security experts led by a former AT&T CEO.

In an interview, John Kindervag, the former Forrester Research analyst who created the term, noted that he defines it this way in his Zero Trust Dictionary: Zero trust is a strategic initiative that helps prevent data breaches by eliminating digital trust in a way that can be deployed using off-the-shelf technologies that will improve over time.

What Are the Basic Tenets of Zero Trust?

In his 2010 report that coined the term, Kindervag laid out three basic tenets of zero trust. Because all network traffic should be untrusted, he said users must:

  • verify and secure all resources,
  • limit and strictly enforce access control, and
  • inspect and log all network traffic.

That’s why zero trust is sometimes known by the motto, “Never Trust, Always Verify.”

How Do You Implement Zero Trust?

As the definitions suggest, zero trust is not a single technique or product, but a set of principles for a modern security policy.

In its seminal 2020 report, the U.S. National Institute of Standards and Technology (NIST) detailed guidelines for implementing zero trust.

Zero Trust architecture from NIST

Its general approach is described in the chart above. It uses a security information and event management (SIEM) system to collect data and continuous diagnostics and mitigation (CDM) to analyze it and respond to insights and events it uncovers.

That plan is an example of a zero trust architecture (ZTA), which creates a more secure network called a zero trust environment.

But one size doesn’t fit all in zero trust. There’s no “single deployment plan for ZTA [because each] enterprise will have unique use cases and data assets,” the NIST report said.

Five Steps to Zero Trust

The job of deploying zero trust can be boiled down to five main steps.

It starts by defining a so-called protect surface: what users want to secure. A protect surface can span systems inside a company’s offices, the cloud and the edge.

From there, users create a map of the transactions that typically flow across their networks and a zero trust architecture to protect them. Then they establish security policies for the network.

Finally, they monitor network traffic to make sure transactions stay within the policies.

Five step process for zero trust

Both the NSTAC report (above) and Kindervag suggest these same steps to create a zero trust environment.
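
To make the policy step concrete, here is a minimal sketch of a deny-by-default access check in the spirit of the Kipling method (who, what, when, where, why and how) that Kindervag recommends for writing zero trust policy. The field names, rule values and allowed hours are illustrative assumptions, not taken from any specific ZTA product:

```python
# Toy deny-by-default policy check in the spirit of the Kipling method.
# All field names and rule values are hypothetical examples.
POLICY = [
    {
        "who": "finance-analysts",   # asserted user group
        "what": "erp-reports",       # protected resource on the protect surface
        "when": range(7, 19),        # allowed hours, 07:00-18:59
        "where": "corp-vpn",         # required network location
        "how": "mfa",                # required authentication strength
    },
]

def is_allowed(request: dict) -> bool:
    """Never trust by default; allow only requests matching an explicit rule."""
    for rule in POLICY:
        if (request["group"] == rule["who"]
                and request["resource"] == rule["what"]
                and request["hour"] in rule["when"]
                and request["network"] == rule["where"]
                and request["auth"] == rule["how"]):
            return True
    return False

# A compliant midday request passes; the same request at 11 p.m. is denied.
print(is_allowed({"group": "finance-analysts", "resource": "erp-reports",
                  "hour": 10, "network": "corp-vpn", "auth": "mfa"}))  # True
print(is_allowed({"group": "finance-analysts", "resource": "erp-reports",
                  "hour": 23, "network": "corp-vpn", "auth": "mfa"}))  # False
```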

It’s important to note that zero trust is a journey, not a destination. Consultants and government agencies recommend users adopt a zero trust maturity model to document an organization’s security improvements over time.

The Cybersecurity and Infrastructure Security Agency (CISA), part of the U.S. Department of Homeland Security, described one such model (see chart below) in a 2021 document.

Zero Trust maturity model from CISA

In practice, users in zero trust environments request access to each protected resource separately. They typically use multi-factor authentication (MFA) such as providing a password on a computer, then a code sent to a smartphone.
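
As an illustration of the one-time-code half of that flow, here is a minimal sketch using the open source pyotp library; the secret and the enrollment step are assumptions made for the example:

```python
# Sketch of MFA's one-time-code step with pyotp (pip install pyotp).
# Assumes the shared secret was provisioned to the user's phone at enrollment.
import pyotp

secret = pyotp.random_base32()   # generated once, stored server-side
totp = pyotp.TOTP(secret)

code = totp.now()                # the six digits the user's phone would show
print("Fresh code accepted:", totp.verify(code))       # True within the window
print("Wrong code accepted:", totp.verify("000000"))   # False (almost surely)
```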

The NIST report lists ingredients for an algorithm (below) that determines whether or not a user gets access to a resource.

NIST algorithm for zero trust access

“Ideally, a trust algorithm should be contextual, but this may not always be possible,” given a company’s resources, it said.

Some argue the quest for an algorithm to measure trustworthiness is counter to the philosophy of zero trust. Others note that machine learning has much to offer here, capturing context across many events on a network to help make sound decisions on access.
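
For a sense of what a contextual trust algorithm might weigh, here is a toy scoring function along the lines of NIST’s inputs; the signals, weights and threshold are invented for illustration and are not from the NIST report:

```python
# Toy contextual trust algorithm. Signals, weights and the access
# threshold are illustrative assumptions, not NIST's specification.
WEIGHTS = {
    "mfa_passed": 0.35,        # subject credential strength
    "device_compliant": 0.25,  # asset state from device management
    "known_location": 0.20,    # network and geolocation context
    "normal_behavior": 0.20,   # behavioral analytics, e.g., from ML models
}

def trust_score(signals: dict) -> float:
    return sum(WEIGHTS[name] for name, ok in signals.items() if ok)

def grant_access(signals: dict, threshold: float = 0.75) -> bool:
    return trust_score(signals) >= threshold

# Strong credentials and a compliant device outweigh one unusual signal.
print(grant_access({"mfa_passed": True, "device_compliant": True,
                    "known_location": True, "normal_behavior": False}))  # True
```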

The Big Bang of Zero Trust

In May 2021, President Joe Biden released an executive order mandating zero trust for the government’s computing systems.

The order gave federal agencies 60 days to adopt zero trust architectures based on the NIST recommendations. It also called for a playbook on dealing with security breaches, a safety board to review major incidents — even a program to establish cybersecurity warning labels for some consumer products.

It was a big bang moment for zero trust that’s still echoing around the globe.

“The likely effect this had on advancing zero trust conversations within boardrooms and among information security teams cannot be overstated,” the NSTAC report said.

What’s the History of Zero Trust?

Around 2003, ideas that led to zero trust started bubbling up inside the U.S. Department of Defense, leading to a 2007 report. About the same time, an informal group of industry security experts called the Jericho Forum coined the term “de-perimeterisation.”

Kindervag crystallized the concept and gave it a name in his bombshell September 2010 report.

The industry’s focus on building a moat around organizations with firewalls and intrusion detection systems was wrongheaded, he argued. Bad actors and inscrutable data packets were already inside organizations, threats that demanded a radically new approach.

Security Goes Beyond Firewalls

From his early days installing firewalls, “I realized our trust model was a problem,” he said in an interview. “We took a human concept into the digital world, and it was just silly.”

At Forrester, he was tasked with finding out why cybersecurity wasn’t working. In 2008, he started using the term zero trust in talks describing his research.

After some early resistance, users started embracing the concept.

“Someone once told me zero trust would become my entire job. I didn’t believe him, but he was right,” said Kindervag, who, in various industry roles, has helped hundreds of organizations build zero trust environments.

An Expanding Zero Trust Ecosystem

Indeed, Gartner projects that by 2025 at least 70% of new remote access deployments will use what it calls zero trust network access (ZTNA), up from less than 10% at the end of 2021. (Gartner, Emerging Technologies: Adoption Growth Insights for Zero Trust Network Access, G00764424, April 2022)

That’s in part because the COVID lockdown accelerated corporate plans to boost security for remote workers. And many firewall vendors now include ZTNA capabilities in their products.

Market watchers estimate at least 50 vendors from Appgate to Zscaler now offer security products aligned with zero trust concepts.

AI Automates Zero Trust

Users in some zero trust environments express frustration with repeated requests for multi-factor authentication. It’s a challenge that some experts see as an opportunity for automation with machine learning.

For example, Gartner suggests applying analytics in an approach it calls continuous adaptive trust. CAT (see chart below) can use contextual data — such as device identity, network identity and geolocation — as a kind of digital reality check to help authenticate users.

Gartner on MFA to CAT for zero trust journey
Gartner lays out zero trust security steps. Source: Gartner, Shift Focus From MFA to Continuous Adaptive Trust, G00745072, December 2021.

In fact, networks are full of data that AI can sift in real time to automatically enhance security.

“We do not collect, maintain and observe even half the network data we could, but there’s intelligence in that data that will form a holistic picture of a network’s security,” said Bartley Richardson, senior manager of AI infrastructure and cybersecurity engineering at NVIDIA.

Human operators can’t track all the data a network spawns or set policies for all possible events. But they can apply AI to scour data for suspicious activity, then respond fast.
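
As a simple illustration of that idea (only an illustration, not the NVIDIA Morpheus pipeline), an unsupervised model such as an isolation forest can flag log records that deviate from normal patterns:

```python
# Illustrative anomaly detection on toy login features: hour of day,
# bytes transferred and failed attempts. Not NVIDIA Morpheus code.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal_logins = np.column_stack([
    rng.normal(13, 2, 500),      # activity clustered around midday
    rng.normal(2e6, 5e5, 500),   # typical transfer sizes in bytes
    rng.poisson(0.2, 500),       # failed attempts are rare
])
model = IsolationForest(contamination=0.01, random_state=0).fit(normal_logins)

suspicious = [[3, 9e7, 11]]      # 3 a.m., huge transfer, many failures
print(model.predict(suspicious))  # [-1] marks the record as an outlier
```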

“We want to give companies the tools to build and automate robust zero trust environments with defenses that live throughout the fabric of their data centers,” said Richardson, who leads development on NVIDIA Morpheus, an open AI cybersecurity framework.

NVIDIA Morpheus for zero trust

NVIDIA provides pretrained AI models for Morpheus, or users can choose a model from a third party or build one themselves.

“The backend engineering and pipeline work is hard, but we have expertise in that, and we can architect it for you,” he said.

It’s the kind of capability experts like Kindervag see as part of the future for zero trust.

“Manual response by security analysts is too difficult and ineffective,” he wrote in a 2014 report. “The maturity of systems is such that a valuable and reliable level of automation is now achievable.”

To learn more about AI and zero trust, read this blog or watch the video below.


Feel the Need … for Speed as ‘Top Goose’ Debuts In the NVIDIA Studio

Editor’s note: This post is part of our weekly In the NVIDIA Studio series, which celebrates featured artists, offers creative tips and tricks, and demonstrates how NVIDIA Studio technology accelerates creative workflows. 

You can be my wing-wing anytime.

This week In the NVIDIA Studio takes off with the debut of Top Goose, a short animation created with Omniverse Machinima and inspired by one of the greatest fictional pilots to ever grace the big screen.

The project was powered by PCs using the same breed of GPU that has produced every Best Visual Effects nominee at the Academy Awards for 14 years: multiple systems with NVIDIA RTX A6000 GPUs and an NVIDIA Studio laptop — the Razer Blade 15 with a GeForce RTX 3070 Laptop GPU.

The team took Top Goose from concept to completion in just two weeks. It likely would’ve taken at least twice as long without the remote collaboration NVIDIA Omniverse offers NVIDIA RTX and GeForce RTX users.

 

Built to showcase the #MadeinMachinima contest, the short had a simple inspiration. One of the NVIDIANs involved in the project, Dane Johnston, succinctly noted, “How do you get a midcentury legionnaire on an aircraft carrier and what would he be doing? He’d be getting chased by a goose, of course.”

Ready to Take Off

Johnston and fellow NVIDIANs Dave Tyner, Matthew Harwood and Terry Naas began the project by prepping models for the static assets in Autodesk 3ds Max. Several of the key models came from TurboSquid by Shutterstock, including the F14 fighter jet, aircraft carrier, goose and several props.

High-quality models such as the F14 fighter jet, courtesy of TurboSquid by Shutterstock, are available to all Omniverse users.

TurboSquid has a huge library of 3D models for creating within Omniverse. Simply drag and drop models into Omniverse and start collaborating with team members — regardless of the 3D application they’re using or where they’re physically located.

Tyner could easily integrate 3D models he already owned by simply dropping them into the scene from the new Asset Store browser in Omniverse.

Texture details were added within Omniverse in real time using Adobe Photoshop.

The team worked seamlessly in real time across apps within Omniverse, including Adobe Photoshop.

From there, Adobe Photoshop was used to edit character uniforms and various props within the scene, including the Top Goose badge at the end of the cinematic.

Animators, Mount Up!

Once models were ready, animation could begin. The team used Reallusion’s iClone Character Creator Omniverse Connector to import characters to Machinima.

Omniverse-ready USD animations from Reallusion ActorCore were dragged and dropped into the Omniverse Machinima content browser for easy access.

 

The models and animations were brought into Machinima by Tyner, where he used the retargeting function to instantly apply the animations to different characters, including the top knight from Mount & Blade II: Bannerlord — one of the hundreds of assets included with Omniverse.

Tyner, a generalist 3D artist, supplemented the project by creating custom animations from motion capture using an Xsens suit that was exported to FBX. Using a series of Omniverse Connectors, he brought the FBX files into Autodesk 3ds Max and ran a quick script to create a rudimentary skin.

Then, Tyner sent the skinned character and animation into Autodesk Maya for USD skeleton export to Machinima, using the Autodesk Maya Connector. The animation was automatically retargeted onto the main character inside Machinima. Once the data was captured, the entire mocap workflow took only a few minutes using NVIDIA Studio tools.

If Tyner didn’t have a motion-capture suit, he could have used Machinima’s AI Pose Estimation — a tool within Omniverse that lets anyone with a camera capture movement and create a 3D animation.

Static objects were all animated in Machinima with the Curve Editor and Sequencer. These tools allowed the team to animate anything they wanted, exactly how they wanted. For instance, the team animated the fighter jet barrel rolls with gravity keyed on a y-axis — allowing gravity to be turned on and off.

This technique, coupled with NVIDIA PhysX, also allowed the team to animate the cockpit scene with the flying bread and apples simply by turning off the gravity. The objects in the scene all obeyed the laws of physics and flew naturally without any manual animation.
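
The team’s exact scene files aren’t published, but keying gravity on and off can be sketched with the open USD Python API that Omniverse is built on; the prim path, frame numbers and values below are illustrative assumptions:

```python
# Sketch of animating gravity with USD's physics schema (pxr).
# Prim path, frame numbers and magnitudes are hypothetical.
from pxr import Usd, UsdPhysics

stage = Usd.Stage.CreateInMemory()
scene = UsdPhysics.Scene.Define(stage, "/World/physicsScene")
gravity = scene.CreateGravityMagnitudeAttr()

gravity.Set(981.0, Usd.TimeCode(1))    # gravity on (cm/s^2) in level flight
gravity.Set(0.0, Usd.TimeCode(48))     # gravity off: bread and apples float
gravity.Set(981.0, Usd.TimeCode(96))   # gravity restored after the roll

stage.GetRootLayer().Export("gravity_keys.usda")
```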

The team collaborates virtually to achieve realistic animations using the Omniverse platform.

Animating the mighty wings of the goose was no cheap trick. While some of the animations were integrated as part of the asset from TurboSquid, the team collaborated within Omniverse to animate the inverted scenes.

Tyner used Omniverse Cloud Simple Share Early Access to package and send the entire USD project to Johnston and Harwood, NVIDIA’s resident audiophile. Harwood added sounds like the fly-bys and goose honks. Johnston brought the Mount & Blade II: Bannerlord character to life by recording custom audio and animating the character’s face with Omniverse Audio2Face.

Traditional audio workflows usually involve multiple pieces of audio recordings sent piecemeal to the animators. With Simple Share, Tyner packaged and sent the entire USD project to Harwood, who was able to add audio directly to the file and return it with a single click.

Revvin’ Up the Engine

Working in Omniverse meant the team could make adjustments and see the changes, with full-quality resolution, in real time. This saved the team a massive amount of time by not having to wait for single shots to render out.

The 3D artist team works together to finish the scene in Omniverse Machinima and Audio2Face.

With individuals working hundreds of miles apart, the team leveraged Omniverse’s collaboration capabilities with Omniverse Nucleus. They were able to complete set dressing, layout and lighting adjustments in a single real-time jam session.

 

The new constraints system in Machinima was integral to the camera work. Tyner created the shaky camera that helps bring the feeling of being on an aircraft carrier by animating a shaking ball in Autodesk 3ds Max, bringing it in via its Omniverse Connector, and constraining a camera to it using OmniGraph.

Equally important are the new Curve Editor and Sequencer. They gave the team complete intuitive control of the creative process. They used Sequencer to quickly and easily choreograph animated characters, lights, constraints and cameras — including field of view and depth of field.

With all elements in place, all that was left was the final render — conveniently and quickly handled by the Omniverse RTX renderer, with no file transfers needed thanks to Omniverse Nucleus.

Tyner noted, “This is the first major project that I’ve done where I was never blocked. With Omniverse, everything just worked and was really easy to use.”

Not only was it easy to use individually, but Omniverse, part of the NVIDIA Studio suite of software, let this team of artists easily collaborate while working in and out of various apps from multiple locations.

Top Prizes in the #MadeinMachinima Contest

Top Goose is a showcase for #MadeinMachinima. The contest, which is currently running and closes June 27, asks artists to build and animate a cinematic short story with the Omniverse Machinima app for a chance to win RTX-accelerated NVIDIA Studio laptops.

RTX creators everywhere can remix and animate characters from Squad, Mount & Blade II: Bannerlord, Shadow Warrior 3, Post Scriptum, Beyond the Wire and MechWarrior 5: Mercenaries using the Omniverse Machinima app.

Experiment with the AI-enabled tools like Audio2Face for instant facial animation from just an audio track; create intuitively with PhysX-powered tools to help you build as if building in reality; or add special effects with Blast for destruction and Flow for smoke and fire. You can use any third-party tools to help with your workflow, just assemble and render your final submission using Omniverse Machinima.

Learn more about NVIDIA Omniverse, including tips, tricks and more on the Omniverse YouTube channel. For additional support, explore the Omniverse forums or join the Discord server to chat with the community. Check out the Omniverse Twitter, Instagram and Medium page to stay up to date.

Follow NVIDIA Studio on Instagram, Twitter and Facebook. Access a wide range of tutorials on the Studio YouTube channel and get updates in your inbox by subscribing to the Studio newsletter.


Vision in the Making: Andrew Ng’s Startup Automates Factory Inspection

Computer vision specialist Landing AI has a unique calling card: Its co-founder and CEO is a tech rock star.

At Google Brain, Andrew Ng became famous for showing how deep learning could recognize cats in a sea of images with uncanny speed and accuracy. Later, he founded Coursera, where his machine learning courses have attracted nearly five million students.

Today, Ng is best known for his views on data-centric AI — that improving AI performance now requires more focus on datasets and less on refining neural network models. It’s a philosophy coded into Landing AI’s flagship product, LandingLens.

Founded in 2017, Landing AI counts among its users Foxconn, Stanley Black & Decker and automotive supplier Denso. They and others have applied deep learning to improve their efficiency and reduce costs.

A Classification Challenge

A chip maker with manufacturing plants around the globe was one of the first to try LandingLens. It wanted to use deep learning to improve throughput and yield of the wafers that carry chips through its fabs.

Like all chip makers, “they have a lot of visual inspection machines on the fab floor that scan wafers at various steps — and they do a good job finding anomalies — but they didn’t do as well classifying the things they found into types of defects,” said Quinn Killough, Landing’s liaison to the customer.

And like many chip makers, it had tried a variety of software programs for classification. “But the solutions needed to be fine-tuned for each product and with more than 100 products, the investment wasn’t worth it,” said Killough, who has a background in computer vision and manufacturing.

AI Automates Inspection

Then the customer applied AI with LandingLens. It’s designed to handle the end-to-end MLOps process — from collecting data to training and deploying models — then manage the ongoing process of refining the models, and especially the data, to enhance results.

Although it’s still early days for the deployment, the product and its data-centric approach have already helped the chip maker reduce costs.

“The primary engineer driving the project said he sees deep learning as transformative and wants to scale it out across his facility and get other plants to adopt it,” said Killough.

Inspectors in the Cloud

The chip maker used LandingLens on NVIDIA V100 GPUs in a cloud-based service that runs inference on hundreds of thousands of images a day.

“We weren’t sure of the throughput capabilities at the beginning, but now it’s clear it can handle that and a lot more,” said Killough.

The same service can train a new classification model in less than a minute using about 50 defect images so users can iterate rapidly.

“On the training side, it’s very important for our tool to feel snappy so our customers can troubleshoot problems and experiment with solutions,” he said.
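
LandingLens internals aren’t public, but minute-scale training on roughly 50 images is characteristic of transfer learning, sketched here in generic PyTorch with a hypothetical dataset folder:

```python
# Generic transfer-learning sketch: fine-tune only the head of a
# pretrained backbone on a small defect dataset. Not LandingLens code;
# the "defects/" folder of labeled crops is hypothetical.
import torch, torchvision
from torchvision import datasets, transforms

tfm = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
data = datasets.ImageFolder("defects/", transform=tfm)   # ~50 images, few classes
loader = torch.utils.data.DataLoader(data, batch_size=8, shuffle=True)

model = torchvision.models.resnet18(weights="IMAGENET1K_V1")
for p in model.parameters():
    p.requires_grad = False                               # freeze the backbone
model.fc = torch.nn.Linear(model.fc.in_features, len(data.classes))

opt = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = torch.nn.CrossEntropyLoss()
for _ in range(5):                                        # a few fast epochs
    for x, y in loader:
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
```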

Taking AI to the Edge

Now the company is taking the AI work to the factory floor with a new product, LandingEdge, which is in beta tests with several customers.

It captures images from cameras, then runs inference on industrial PCs equipped with NVIDIA Jetson AGX Xavier modules. Insights from that work feed directly to controllers that operate robotic arms, conveyor belts and other production systems.

“We aim to improve quality controls, creating a flywheel effect for fast and iterative AI processes,” said Jason Chan, product manager for LandingEdge.

Accelerating a Startup’s Growth

To get early access to the latest technology and expertise, Landing AI joined the NVIDIA Metropolis program, geared for companies using AI vision to make spaces and operations safer and more efficient.

It’s still early days for the company and for data-centric AI, which Ng believes may be one of the biggest tech shifts of this decade.

To learn more, watch a GTC session (free with registration) where Ng describes the status and outlook for the data-centric AI movement.


GFN Thursday Jumps Into June With 25 New Games Coming This Month

Celebrate the onset of summer this GFN Thursday with 25 more games joining the GeForce NOW library, including seven additions this week. Because why would you ever go outside?

Looking to spend the summer months in Space Marine armor? Games Workshop is kicking off its Warhammer Skulls event for its sixth year, with great discounts on the Warhammer franchise on GeForce NOW.

Snag Sales With the Warhammer Skulls Festival

Warhammer Skulls Fest on GeForce NOW
Get many of your favorite Warhammer titles streaming on GeForce NOW on sale this week.

Games Workshop’s Warhammer Skulls Festival brings week-long discounts on several Warhammer games that are available to stream on GeForce NOW.

Gamers can grab several of these titles on sale; check out the Warhammer Community for all the details, including links to the sales.

Jump Into June

LEAP on GeForce NOW
Arm yourself to the teeth in LEAP, a fast-paced, multiplayer first-person shooter featuring epic battles with up to 60 players.

Twenty-five more games are joining the cloud this June. Get started with this week’s seven additions:

  • LEAP (New release on Steam)
  • Souldiers (New release on Steam)
  • Twilight Wars: Declassified (New release on Steam)
  • ABRISS – build to destroy (Steam)
  • ANNO: Mutationem (Steam)
  • Kathy Rain: Director’s Cut (Steam)
  • Star Conflict (Steam)

Also coming this month:

  • MythBusters: The Game – Crazy Experiments Simulator (New release on Steam, June 8)
  • POSTAL: Brain Damaged (New release on Steam, June 9)
  • Pro Cycling Manager 2022 (New release on Steam, June 9)
  • Tour de France 2022 (New release on Steam, June 9)
  • Builder Simulator (New release on Steam, June 9)
  • Chivalry 2 (New release on Steam, June 12)
  • Starship Troopers – Terran Command (New release on Steam and Epic Games Store, June 16)
  • Airborne Kingdom (Steam)
  • Core Keeper (Steam)
  • Fishing: North Atlantic (Steam)
  • Immortal Life (Steam)
  • The Legend of Heroes: Trails of Cold Steel II (Steam)
  • KeyWe (Steam)
  • King Arthur: Knight’s Tale (Steam)
  • MechWarrior 5: Mercenaries (Steam)
  • No Straight Roads: Encore Edition (Steam)
  • Silt (Steam and Epic Games Store)
  • SimAirport (Steam)

More From May

On top of the 27 games announced in May, another nine joined over the month.

On July 1, God of War (Steam, Epic Games Store) will be removed from the GeForce NOW library. However, it will remain available for those who have played the game at least once on GeForce NOW.

As part of the GeForce NOW opt-in process, some games may continue to be available to members on a legacy basis. This will allow members who have started playing a game at least once on GeForce NOW to continue playing it, even after the game has been removed for users who have not played it.

Finally, in the spirit of summer, we’ve got to know your vacation plans. Let us know your answer on Twitter or in the comments below.


Solving the World’s Biggest Challenges, Together

Gamers know NVIDIA powers great gaming experiences. Researchers know NVIDIA speeds world-changing breakthroughs. Businesses know us for the AI engines transforming their industries.

And NVIDIA employees know the company as one of the best places to work on the planet.

More people than ever have a piece of NVIDIA. Roboticists, visual artists, data scientists — all sorts of innovators and creators rely on the company’s technology. And that’s only natural: NVIDIA’s the largest startup on Earth, growing to 25,000 employees from 10,000 a few years ago.

But as NVIDIA spills out in all directions it’s more important than ever to connect all these pieces, these people who may know our products, but don’t know one another.

That’s why we’re launching a campaign this week to bring all these elements together. To reflect back to the entertainers and entrepreneurs, researchers and scientists, developers and designers the staggering body of work we’ve built together.

It’s going to be quite a conversation. Not just because it’s comprehensive, but because it’s coherent.

The same GPU technology that powers the Nintendo Switch has proven the existence of the gravitational waves Einstein predicted a century ago.

The parallel computing power harnessed by NVIDIA’s CUDA platform is key not just to Oscar-winning special effects, but to a new generation of medical breakthroughs.

And the huge leaps in computing power unleashed by innovations in silicon, software and systems created at NVIDIA are turning data centers into engines of business innovation and imbuing supercomputers with the power to simulate the planet itself, for the benefit of those of us who would become its stewards.

These stories don’t just cover the breadth of what’s been accomplished. They point to possibilities, new places where each of these endeavors intersect. These spillovers are anything but happy accidents; they’re by design, and they’ve always been the soul of NVIDIA.

So think of this as an introduction. Not just to the NVIDIA story, but to each other. And with more people than ever contributing to this story we all share, this body of work, there can be no doubt that the best part of the NVIDIA story is still to come.

Visit NVIDIA’s “About Us” page, or click here for more on what NVIDIA, developers and customers have built together.

 

Featured image courtesy of Accuray


The Closer: Machine Learning Helps Banks, Buyers Finalize Real Estate Transactions

The home-buying process can feel like an obstacle course — finding the perfect place, putting together an offer and, the biggest hurdle of all, securing a mortgage.

San Francisco-based real-estate technology company Doma is helping prospective homeowners clear that hurdle more quickly with the support of AI. Its machine learning models accelerate properties through the title search, underwriting and closing processes, helping complete home transactions up to 15 percent faster.

“There’s a lot of paperwork involved in this process,” said Brian Holligan, director of data science at Doma. “The better we are at using machine learning to identify different document types and extract relevant information, the faster and more seamless the process can be.”

Doma uses machine learning to identify different types of real estate documents and extract insights from those files. It’s also developing natural language understanding models to help everyone involved in a real estate transaction — from loan officers to real estate agents to homebuyers — rapidly interpret the numerous requests and inquiries that typically occur during the process.

Since its beginnings in 2016, Doma has accelerated over 100,000 real estate transactions with machine learning.

The company uses machine learning models — both transformer-based NLP tools and convolutional neural networks for computer vision — that rely on NVIDIA V100 Tensor Core GPUs through Microsoft Azure for model training.

“Working with a remote team, it’s nice to have the flexibility of GPUs in the cloud,” said Keesha Erickson, a data scientist at Doma. “We can spin up the right-sized machines based on the project or task at hand. If there’s a larger-scale model with a longer run time, we can grab the GPUs that are appropriate for the time constraints we’re under.”

Doma Machine Intelligence Delves Into Real Estate Docs

Once a seller and buyer have agreed on a purchase price for a home, they enter into a contract. But typically, a few weeks pass before keys actually change hands — a period known as escrow. During this process, the buyer’s mortgage loan is finalized and a title company investigates the home’s ownership, balance fees and tax history.

Doma’s GPU-accelerated machine learning models speed the title examination process by analyzing property records and mortgage files to help identify any risks that could disrupt the transaction.

Like other fields such as drug discovery or architecture, the real estate industry has jargon that a general NLP model may not be able to interpret. To tailor its AI to this lingo, Doma fine-tunes a suite of models, including BERT-based models, using a corpus of proprietary real estate data.
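
Doma’s models and corpus are proprietary, but fine-tuning a BERT-based classifier on domain text generally follows a pattern like this Hugging Face sketch; the sample documents and the deed/lien/tax-record label set are invented for illustration:

```python
# Generic sketch of domain fine-tuning with Hugging Face transformers.
# The tiny dataset and its labels are hypothetical examples.
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=3)

data = Dataset.from_dict({
    "text": ["WARRANTY DEED made this 4th day of...",
             "NOTICE OF FEDERAL TAX LIEN under section 6321...",
             "County tax statement for parcel 042-113..."],
    "label": [0, 1, 2],
}).map(lambda ex: tok(ex["text"], truncation=True,
                      padding="max_length", max_length=128))

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=data,
)
trainer.train()
```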

Doma’s technology also uses computer vision models to analyze older real estate documents. Many records come from county courthouses and clerk’s offices — and depending on the age of the home, records can be incredibly low-quality scans of paper documents that are decades old.

Doma machine learning engineer Juhi Chandalia, who works on machine learning models for this kind of document processing, found that using NVIDIA GPUs for inference cut the team’s time to insights by 4x, to under a minute.

“I’ve started training models on CPU instead of GPU before, and realized it would take weeks to complete,” Chandalia said. “My team relies on NVIDIA GPUs because otherwise, by the time we finished training and testing our machine learning models, they’d be out of date.”

Doma offers application programming interfaces, predefined integrations and custom versions of its platform to its lender partners. The company has teamed up with major mortgage lenders around the U.S., including Chase, Homepoint Financial, PennyMac and Sierra Pacific Mortgage, to accelerate the mortgage transaction and refinance process.

The company is also bringing some of its machine learning tools to individuals — real estate agents, buyers and sellers — to further streamline the complex process for all parties in a real estate transaction.

Learn more about how AI can help predict mortgage delinquencies, improve credit risk management and power banks of the future.

Subscribe to the latest financial services news from NVIDIA


Fantastical 3D Creatures Roar to Life ‘In the NVIDIA Studio’ With Artist Massimo Righi

Editor’s note: This post is part of our weekly In the NVIDIA Studio series, which celebrates featured artists, offers creative tips and tricks, and demonstrates how NVIDIA Studio technology accelerates creative workflows. 

The year of the tiger comes into focus this week In the NVIDIA Studio, which welcomes 3D creature artist Massimo Righi.

An award-winning 3D artist with two decades of experience in the film industry, Righi has received multiple artist-of-the-month accolades and features in top creative publications. He’s also worked with clients including Autodesk, Discovery Channel, Google, Netflix and the World Wildlife Fund.

A sampling of Righi’s impressive 3D animal collection.

A native of Italy, Righi now lives in Thailand, a move he says was artistically inspired as he’s an avid lover of animals and nature. This is evident from his stunning portfolio of animal renders renowned for their lifelike photorealism and individualistic tone.

The story behind Waiting for the Year of the Tiger is a personal one for the artist.

“In my illustrations, I try to show the iconic image of that specific animal, or at least what’s my personal idea of that,” Righi says. “It’s a mix of my childhood vision with added real-life experience.”

Righi’s dedication to the craft means he often visits animal sanctuaries to observe and take pictures and video footage to use as reference.

The creative process begins in Autodesk Maya.

Righi’s creative workflow begins in Autodesk Maya on a Lenovo laptop powered by a GeForce RTX 3080 GPU. He creates a base model and unwraps its UVs, translating the 3D mesh into 2D data so a 2D texture can be wrapped around it.

Righi then uses custom brushes in ZBrush to sculpt the tiger in finer detail. With a mix of photos and custom brushes, Righi creates the main textures in Adobe Photoshop, using AI-powered Enhance Details technology, one of over a dozen GPU-accelerated features, to sharpen photo details before returning to Maya and applying new textures to the 3D model.

Deploying spline primitives across 10 different sections unlocks profound realism.

Righi prioritizes achieving a highly photorealistic look with detailed, lifelike fur, created with the XGen interactive groom editor within Maya. “Before starting, it’s very important to study the fur flow so that I can decide how many fur descriptions I would need,” Righi says. “The tiger, for example, has been made with 10 descriptions, such as main body, head, cheeks and whiskers.”

Righi’s fur creation technique adds stunning realism to his virtual animals.

Righi uses spline primitives — predefined shapes which provide a method for freehand drawing — manually placing and shaping the guides. Righi notes, “I usually create three or four clump modifiers and break them up with a few noise cuts, stray and maybe curl modifiers.”

The artist tinkers and tests in real time until he meets the desired output, then bakes the fur consisting of millions of spline primitives. This process is accelerated by a factor of 6x with his GeForce RTX 3080 GPU, saving valuable time.

Getting the lighting just right.

With the model in a good place, Righi animates the scene in 3D using Blendshapes to interpolate the different sets of geometry. According to Righi, getting the little details right is critical to capturing the essence of lifelike movement. “Researching anatomy is equally important,” he says. “To get the right shapes and sculpt every detail, you need to take your time and study.”

Righi begins to test scene lighting. Here, the GPU-accelerated viewport enables incredibly smooth, interactive 3D modeling — another benefit of using the GeForce RTX 3080 GPU. Righi can tinker with the lighting, shadows, animations and more in real time, without having to wait for his system to catch up.

Waiting for the Year of the Tiger is complete.

Once final renders are ready to export, AI denoising with the default Autodesk Arnold renderer produces incredibly photorealistic renders that export up to 6x faster on Righi’s GPU-accelerated laptop than on a comparable unit with integrated graphics.

And like that, the year of the tiger is upon us.

Righi’s NVIDIA Studio-powered creative zone.

View Righi’s portfolio on ArtStation and his 3D models marketplace to see more of his work.

Animals and Creatures Inspire In the NVIDIA Studio

Talented artists who share Righi’s passion for design join us In the NVIDIA Studio. They share their artist and character-inspired 3D journeys, passing along valuable tips and tricks through tutorials.

Ringling College of Art and Design Professor Ana Carolina injects Greek mythology and lore into her fantasy-style Studio Session, How to Model & Render A 3D Fantasy Hippocampus Creature, offering an inside look at sculpting in ZBrush and applying colors in Adobe Substance 3D Painter.

Character concept artist Tadej Blažič shares his Blender tips and tricks in a two-part Studio Session tutorial. Part 1, Blender Tutorial: Create a Cute 3D Character Animation Part 1: Character Concept, covers 3D character animations, while Part 2, Modeling & Hair, demonstrates hair and fur techniques similar to Righi’s.

Freelancer Wesley Trankle’s six-part Studio Session tutorial, How to Create a 3D Animation Video, provides an in-depth look at the 3D journey. Videos cover a wide range of topics: Modeling Main Character, Character Rigging, Creating Environment, Animation, Texturing & Materials & Lighting, and Final Touches.

Follow NVIDIA Studio on Instagram, Twitter and Facebook. Access tutorials on the Studio YouTube channel and get updates directly in your inbox by subscribing to the NVIDIA Studio newsletter.


NVIDIA Accelerates AI, Digital Twins, Quantum Computing and Edge HPC at ISC 2022

Researchers grappling with today’s grand challenges are getting traction with accelerated computing, as showcased at ISC, Europe’s annual gathering of supercomputing experts.

Some are building digital twins to simulate new energy sources. Some use AI+HPC to peer deep into the human brain.

Others are taking HPC to the edge with highly sensitive instruments or accelerating simulations on hybrid quantum systems, said Ian Buck, vice president of accelerated computing at NVIDIA, at an ISC special address in Hamburg.

Delivering 10 AI Exaflops

For example, a new supercomputer at Los Alamos National Laboratory (LANL) called Venado will deliver 10 exaflops of AI performance to advance work in areas such as materials science and renewable energy.

LANL researchers target 30x speedups in their computational multi-physics applications with NVIDIA GPUs, CPUs and DPUs in the system, named after a peak in northern New Mexico.

LANL's Venado will use NVIDIA Grace, Grace Hopper and BlueField DPUs

Venado will use NVIDIA Grace Hopper Superchips to run workloads up to 3x faster than prior GPUs. It also packs NVIDIA Grace CPU Superchips to provide twice the performance per watt of traditional CPUs on a long tail of unaccelerated applications.

BlueField Gathers Momentum

The LANL system is among the latest of many around the world to embrace NVIDIA BlueField DPUs to offload and accelerate communications and storage tasks from host CPUs.

Similarly, the Texas Advanced Computing Center is adding BlueField-2 DPUs to the NVIDIA Quantum InfiniBand network on Lonestar6. It will become a development platform for cloud-native supercomputing, hosting multiple users and applications with bare-metal performance while securely isolating workloads.

“That’s the architecture of choice for next-generation supercomputing and HPC clouds,” said Buck.

Exascale in Europe

In Europe, NVIDIA and SiPearl are collaborating to expand the ecosystem of developers building exascale computing on Arm. The work will help the region’s users port applications to systems that use SiPearl’s Rhea and future Arm-based CPUs together with NVIDIA accelerated computing and networking technologies.

Japan’s Center for Computational Sciences, at the University of Tsukuba, is pairing NVIDIA H100 Tensor Core GPUs and x86 CPUs on an NVIDIA Quantum-2 InfiniBand platform. The new supercomputer will tackle jobs in climatology, astrophysics, big data, AI and more.

The new system will join the 71% of systems on the latest TOP500 list that have adopted NVIDIA technologies. In addition, 80% of new systems on the list use NVIDIA GPUs, networks or both, and NVIDIA’s networking platform is the most popular interconnect for TOP500 systems.

HPC users adopt NVIDIA technologies because they deliver the highest application performance for established supercomputing workloads — simulation, machine learning, real-time edge processing — as well as emerging workloads like quantum simulations and digital twins.

Powering Up With Omniverse

Showing what these systems can do, Buck played a demo of a virtual fusion power plant that researchers at the U.K. Atomic Energy Authority and the University of Manchester are building in NVIDIA Omniverse. The digital twin aims to simulate in real time the entire power station, its robotic components — even the behavior of the fusion plasma at its core.

NVIDIA Omniverse, a 3D design collaboration and world simulation platform, lets distant researchers on the project work together in real time while using different 3D applications. They aim to enhance their work with NVIDIA Modulus, a framework for creating physics-informed AI models.

“It’s incredibly intricate work that’s paving the way for tomorrow’s clean renewable energy sources,” said Buck.

AI for Medical Imaging

Separately, Buck described how researchers created a library of 100,000 synthetic images of the human brain on NVIDIA Cambridge-1, a supercomputer dedicated to advances in healthcare with AI.

A team from King’s College London used MONAI, an AI framework for medical imaging, to generate lifelike images that can help researchers see how diseases like Parkinson’s develop.

“This is a great example of HPC+AI making a real contribution to the scientific and research community,” said Buck.

HPC at the Edge

Increasingly, HPC work extends beyond the supercomputer center. Observatories, satellites and new kinds of lab instruments need to stream and visualize data in real time.

For example, work in lightsheet microscopy at Lawrence Berkeley National Lab is using NVIDIA Clara Holoscan to see life in real time at nanometer scale, work that would require several days on CPUs.

To help bring supercomputing to the edge, NVIDIA is developing Holoscan for HPC, a highly scalable version of our imaging software to accelerate any scientific discovery. It will run across accelerated platforms from Jetson AGX modules and appliances to quad A100 servers.

“We can’t wait to see what researchers will do with this software,” said Buck.

Speeding Quantum Simulations

In yet another vector of supercomputing, Buck reported on the rapid adoption of NVIDIA cuQuantum, a software development kit to accelerate quantum circuit simulations on GPUs.

Dozens of organizations are already using it in research across many fields. It’s integrated into major quantum software frameworks so users can access GPU acceleration without any additional coding.

Most recently, AWS announced the availability of cuQuantum in its Braket service. And it demonstrated how cuQuantum can provide up to a 900x speedup on quantum machine learning workloads while reducing costs 3.5x.

“Quantum computing has tremendous potential, and simulating quantum computers on GPU supercomputers is essential to move us closer to valuable quantum computing,” said Buck. “We’re really excited to be at the forefront of this work,” he added.

A video of the full address will be posted here Tuesday, May 31, at 9 a.m. PT.


The Man With 100,000 Brains: AI’s Big Donation to Science

Jorge Cardoso wears many hats, and that’s appropriate given he has so many brains. A hundred thousand of them to be exact.

Cardoso is a teacher, a CTO, an entrepreneur, a founding member of the MONAI open source consortium and a researcher in AI for medical imaging. In that last role, Cardoso and his team just discovered ways to create realistic, high-resolution 3D images of human brains with AI.

The researcher at King’s College London and CTO at the London AI Centre is making 100,000 synthetic brain images available free to healthcare researchers. It’s a treasure trove that could accelerate understanding of dementia, aging or any sort of brain disease.

Accelerating AI in Healthcare

“In the past, many researchers avoided working in healthcare because they couldn’t get enough good data, but now they can,” said Cardoso.

“We want to direct the energy of AI research into healthcare,” he said.

It’s a major donation compared to the world’s largest repository of freely available brain images. The UK Biobank currently maintains multiple brain images taken from more than 50,000 participants, curated at an estimated cost of $150 million.

Synthetic Data for Science

The images represent an emerging branch in healthcare of synthetic data, something that’s already widely used in computer vision for consumer and business apps. Ironically, those fields also have access to open datasets with millions of real-world images.

By contrast, medical images are relatively scarce, typically only available to researchers connected to large hospitals, given the need to protect patient privacy. Even then, medical images tend to reflect the demographics the hospital serves, not necessarily the broader population.

A fortunate feature of the new AI approach is it can make images to order. Female brains, male brains, old ones, young ones, brains with or without disease. Plug in what you need, and it creates them.

Though they’re simulated, the images are highly useful because they preserve key biological characteristics, so they look and act like real brains would.

Scaling with MONAI on Cambridge-1

The work required a supercomputer running super software.

NVIDIA Cambridge-1, a supercomputer dedicated to breakthrough AI research in healthcare, was the engine. MONAI, an AI framework for medical imaging, provided the software fuel.

Together they created an AI factory for synthetic data that let researchers run hundreds of experiments, choose the best AI models and run inference to generate images.

“We couldn’t have done this work without Cambridge-1 and MONAI, it just wouldn’t have happened,” Cardoso said.

Massive Images, Up to 10x Speedups

An NVIDIA DGX SuperPOD, Cambridge-1 packs 640 NVIDIA A100 Tensor Core GPUs, each with enough memory to process one or two of the team’s massive images made up of 16 million 3D pixels.

MONAI’s building blocks include domain-specific data loaders, metrics, GPU-accelerated transforms and an optimized workflow engine. The software’s smart caching and multi-node scaling can accelerate jobs up to 10x, said Cardoso.
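
A minimal sketch of those building blocks shows how a cached, transform-driven MONAI pipeline is assembled; the NIfTI file names are placeholders:

```python
# Minimal MONAI pipeline: dictionary transforms feeding a cached dataset.
# The brain image file names are placeholders.
from monai.data import CacheDataset, DataLoader
from monai.transforms import (Compose, EnsureChannelFirstd, LoadImaged,
                              ScaleIntensityd)

files = [{"image": f"brain_{i:05d}.nii.gz"} for i in range(8)]
xform = Compose([
    LoadImaged(keys="image"),         # medical-format-aware loader
    EnsureChannelFirstd(keys="image"),
    ScaleIntensityd(keys="image"),    # normalize intensities
])
ds = CacheDataset(data=files, transform=xform)  # caches deterministic steps
loader = DataLoader(ds, batch_size=2, num_workers=2)
```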

He also credited cuDNN and “the whole NVIDIA AI software stack that helped us work much faster.”

Beyond the Brain

Cardoso is working with Health Data Research UK, a national repository, to host the 100,000 brain images. The AI models will be available, too, so researchers can create whatever images they need.

There’s more. The team is exploring how the models can make 3D images of any part of the human anatomy in any mode of medical imaging — MRIs, CAT or PET scans, you name it.

“In fact, this technique can be applied to any volumetric image,” he said, noting users may need to optimize the models for different types of images.

Many Directions Ahead

The work points to many directions Cardoso described enthusiastically as if unloading the contents of multiple minds.

Synthetic images will help researchers see how diseases evolve over time. Meanwhile his team is still exploring how to apply the work to body parts beyond the brain and what kinds of synthetic images (MRI, CAT, PET) are most useful.

The possibilities are exciting and, like his many roles, “it can be a bit overwhelming,” he said. “There are so many different things we can start thinking about now.”


The Road to the Hybrid Quantum-HPC Data Center Starts Here

It’s time to start building tomorrow’s hybrid quantum computers.

The motivation is compelling, the path is clear and key components for the job are available today.

Quantum computing has the potential to bust through some of today’s toughest challenges, advancing everything from drug discovery to weather forecasting. In short, quantum computing will play a huge role in HPC’s future.

Today’s Quantum Simulations

Creating that future won’t be easy, but the tools to get started are here.

Taking the first steps forward, today’s supercomputers are simulating quantum computing jobs at scale and performance levels beyond the reach of today’s relatively small, error-prone quantum systems.

Dozens of quantum organizations are already using the NVIDIA cuQuantum software development kit to accelerate their quantum circuit simulations on GPUs.

Most recently, AWS announced the availability of cuQuantum in its Braket service. It also demonstrated on Braket how cuQuantum can provide up to a 900x speedup on quantum machine learning workloads.

And cuQuantum now enables accelerated computing on the major quantum software frameworks, including Google’s qsim, IBM’s Qiskit Aer, Xanadu’s PennyLane and Classiq’s Quantum Algorithm Design platform. That means users of those frameworks can access GPU acceleration without any additional coding.
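
In PennyLane, for instance, switching a simulation onto the cuQuantum-backed backend is a one-line device change. This sketch assumes the pennylane-lightning-gpu plugin and an NVIDIA GPU are available; the circuit itself is an arbitrary illustration:

```python
# Example of GPU-accelerated simulation via PennyLane's lightning.gpu
# device, which is backed by cuQuantum. The circuit is an illustration.
import pennylane as qml

dev = qml.device("lightning.gpu", wires=20)

@qml.qnode(dev)
def circuit(theta):
    qml.RY(theta, wires=0)
    for w in range(19):
        qml.CNOT(wires=[w, w + 1])   # entangle a 20-qubit chain
    return qml.expval(qml.PauliZ(19))

print(circuit(0.54))
```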

Quantum-Powered Drug Discovery

Today, Menten AI joins companies using cuQuantum to support its quantum work.

The Bay Area drug-discovery startup will use cuQuantum’s tensor network library to simulate protein interactions and optimize new drug molecules. It aims to harness the potential of quantum computing to speed up drug design, a field that, like chemistry itself, is thought to be among the first to benefit from quantum acceleration.

Specifically, Menten AI is developing a suite of quantum computing algorithms including quantum machine learning to break through computationally demanding problems in therapeutic design.

“While quantum computing hardware capable of running these algorithms is still being developed, classical computing tools like NVIDIA cuQuantum are crucial for advancing quantum algorithm development,” said Alexey Galda, a principal scientist at Menten AI.

Forging a Quantum Link

As quantum systems evolve, the next big leap is a move to hybrid systems: quantum and classical computers that work together. Researchers share a vision of systems-level quantum processors, or QPUs, that act as a new and powerful class of accelerators.

So, one of the biggest jobs ahead is bridging classical and quantum systems into hybrid quantum computers. This work has two major components.

First, we need a fast, low-latency connection between GPUs and QPUs. That will let hybrid systems use GPUs for classical jobs where they excel, like circuit optimization, calibration and error correction.

GPUs can speed the execution time of these steps and slash communication latency between classical and quantum computers, the main bottlenecks for today’s hybrid quantum jobs.

Second, the industry needs a unified programming model with tools that are efficient and easy to use. Our experience in HPC and AI has taught us and our users the value of a solid software stack.

Right Tools for the Job

To program QPUs today, researchers are forced to use the quantum equivalent of low-level assembly code, something outside of the reach of scientists who aren’t experts in quantum computing. In addition, developers lack a unified programming model and compiler toolchain that would let them run their work on any QPU.

This needs to change, and it will. In a March blog, we discussed some of our initial work toward a better programming model.

To efficiently find ways quantum computers can accelerate their work, scientists need to easily port parts of their HPC apps first to a simulated QPU, then to a real one. That requires a compiler enabling them to work at high performance levels and in familiar ways.

With the combination of GPU-accelerated simulation tools and a programming model and compiler toolchain to tie it all together, HPC researchers will be empowered to start building tomorrow’s hybrid quantum data centers.

How to Get Started

For some, quantum computing may sound like science fiction, a future decades away. The fact is, every year researchers are building more and larger quantum systems.

NVIDIA is fully engaged in this work and we invite you to join us in building tomorrow’s hybrid quantum systems today.

To learn more, you can watch a GTC session and attend an ISC tutorial on the topic. For a deep dive into what you can do with GPUs today, read about our State Vector and Tensor Network libraries.
