AI at the Point of Care: Startup’s Portable Scanner Diagnoses Brain Stroke in Minutes

For every minute that a stroke is left untreated, the average patient loses nearly 2 million neurons. For each hour treatment is delayed, the brain loses as many neurons as it would in more than three and a half years of normal aging.

With one of the world’s first portable brain scanners for stroke diagnosis, Australia-based healthcare technology developer EMVision is on a mission to enable quicker triage and treatment to reduce such devastating impacts.

The NVIDIA Inception member’s device fits like a helmet and can be used at the point of care, including in ambulances, for prehospital stroke diagnosis. It relies on electromagnetic imaging technology and uses NVIDIA-powered AI to distinguish between ischaemic and haemorrhagic strokes — clots and bleeds — in just minutes.

A cart-based version of the device, built using the NVIDIA Jetson edge AI platform and NVIDIA DGX systems, can also support routine post-intervention monitoring to track a patient’s progress and recovery.

“With EMVision, the healthcare community can access advanced, portable solutions that will assist in making critical decisions and interventions earlier, when time is of the essence,” said Ron Weinberger, CEO of EMVision. “This means we can provide faster stroke diagnosis and treatment to ensure fewer disability outcomes and an improved quality of life for patients.”

Point-of-Care Diagnosis

Traditional neuroimaging techniques, like CT scans and MRIs, produce excellent images but require large, stationary, complex machines and specialist operators, Weinberger said. This limits point-of-care accessibility.

The EMVision device is designed to scan the brain wherever the patient may be — in an ambulance or even at home if monitoring a patient who has a history of stroke.

“Whether for a new, acute stroke or a complication of an existing stroke, urgent brain imaging is required before correct triage, treatment or intervention decisions can be made,” Weinberger said.

The startup has developed and validated novel electromagnetic brain scanner hardware and AI algorithms capable of classifying and localizing a stroke, as well as creating an anatomical reconstruction of the patient’s brain.

“NVIDIA accelerated computing has played an important role in the development of EMVision’s technology, from hardware verification and algorithm development to rapid image reconstruction and AI-powered decision making,” Weinberger said. “With NVIDIA’s support, we are set to transform stroke diagnosis and care for patients around the world.”

EMVision uses NVIDIA DGX systems for hardware verification and optimization, as well as for prototyping and training AI models. The startup has trained its AI models 10x faster on DGX than on other systems, according to Weinberger.

Each brain scanner has an NVIDIA Jetson AGX Xavier module on board for energy-efficient AI inference at the edge. And the startup is looking to use NVIDIA Jetson Orin Nano modules for next-generation edge AI.

“The interactions between low-energy electromagnetic signals and brain tissue are incredibly complex,” Weinberger said. “Making sense of these signal interactions to identify if pathologies are present and recreate quality images wouldn’t be possible without the massive power of NVIDIA GPU-accelerated computing.”
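
As a rough illustration of that kind of edge inference, here is a purely hypothetical PyTorch sketch that maps multi-antenna signal traces to a clot-versus-bleed classification. EMVision’s actual architecture, channel counts and signal shapes are not public; everything below is invented for illustration:

```python
import torch
import torch.nn as nn

class StrokeClassifier(nn.Module):
    """Toy 1D-CNN over multi-antenna electromagnetic traces (hypothetical)."""

    def __init__(self, n_antennas=16, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_antennas, 32, kernel_size=7, stride=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # collapse the time axis
        )
        self.head = nn.Linear(64, n_classes)  # ischaemic vs. haemorrhagic

    def forward(self, x):  # x: (batch, antennas, samples)
        return self.head(self.features(x).squeeze(-1))

# A Jetson module would run a loop like this on freshly acquired scans.
model = StrokeClassifier().eval()
with torch.no_grad():
    logits = model(torch.randn(1, 16, 1024))
    print("clot vs. bleed probabilities:", logits.softmax(dim=-1))
```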

As a member of NVIDIA Inception, a free, global program for cutting-edge startups, EMVision has shortened product development cycles and go-to-market time, Weinberger added.

Subscribe to NVIDIA healthcare news and learn more about NVIDIA Inception.

Speech AI Expands Global Reach With Telugu Language Breakthrough

More than 75 million people speak Telugu, predominantly in India’s southern regions, making it one of the most widely spoken languages in the country.

Despite such prevalence, Telugu is considered a low-resource language when it comes to speech AI. This means there aren’t enough hours’ worth of speech datasets to easily and accurately create AI models for automatic speech recognition (ASR) in Telugu.

And that means billions of people are left out of using ASR to improve transcription, translation and additional speech AI applications in Telugu and other low-resource languages.

To build an ASR model for Telugu, the NVIDIA speech AI team turned to the NVIDIA NeMo framework for developing and training state-of-the-art conversational AI models. The model won first place in a competition conducted in October by IIIT-Hyderabad, one of India’s most prestigious institutes for research and higher education.

NVIDIA placed first in accuracy for both tracks of the Telugu ASR Challenge, which was held in collaboration with the Technology Development for Indian Languages program and India’s Ministry of Electronics and Information Technology as a part of its National Language Translation Mission.

For the closed track, participants had to use around 2,000 hours of Telugu-only training data provided by the competition organizers. And for the open track, participants could use any datasets and pretrained AI models to build the Telugu ASR model.

NVIDIA NeMo-powered models topped the leaderboards with word error rates of approximately 13% and 12% for the closed and open tracks, respectively, outperforming all models built on popular ASR frameworks like ESPnet, Kaldi, SpeechBrain and others by a large margin.

“What sets NVIDIA NeMo apart is that we open source all of the models we have — so people can easily fine-tune the models and do transfer learning on them for their use cases,” said Nithin Koluguri, a senior research scientist on the conversational AI team at NVIDIA. “NeMo is also one of the only toolkits that supports scaling training to multi-GPU systems and multi-node clusters.”

Building the Telugu ASR Model

The first step in creating the award-winning model, Koluguri said, was to preprocess the data.

Koluguri and his colleague Megh Makwana, an applied deep learning solution architect manager at NVIDIA, removed invalid letters and punctuation marks from the speech dataset that was provided for the closed track of the competition.

“Our biggest challenge was dealing with the noisy data,” Koluguri said. “This is when the audio and the transcript don’t match — in this case you cannot guarantee the accuracy of the ground-truth transcript you’re training on.”

The team cleaned up the audio by cutting clips to less than 20 seconds, discarding clips shorter than 1 second and removing sentences with a character rate (characters spoken per second) greater than 30.
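
A minimal sketch of that filtering pass, assuming a NeMo-style JSON-lines manifest with audio_filepath, duration and text fields (segmentation of overlong clips is omitted, and the team’s exact pipeline isn’t published):

```python
import json

MIN_DUR, MAX_DUR, MAX_CHAR_RATE = 1.0, 20.0, 30.0  # thresholds from the article

def clean_manifest(src_path, dst_path):
    """Keep entries between 1 and 20 seconds whose character rate
    (characters per second) does not exceed 30."""
    kept = 0
    with open(src_path) as fin, open(dst_path, "w") as fout:
        for line in fin:
            entry = json.loads(line)
            duration = float(entry["duration"])
            if not MIN_DUR <= duration <= MAX_DUR:
                continue  # too short or too long
            if len(entry["text"]) / duration > MAX_CHAR_RATE:
                continue  # implausibly fast speech: likely noisy transcript
            fout.write(json.dumps(entry, ensure_ascii=False) + "\n")
            kept += 1
    return kept
```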

Makwana then used NeMo to train the 120-million-parameter ASR model for 160 epochs, or full cycles through the dataset.

For the competition’s open track, the team used models pretrained with 36,000 hours of data on all 40 languages spoken in India. Fine-tuning this model for the Telugu language took around three days using an NVIDIA DGX system, according to Makwana.
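
In NeMo, that fine-tuning step follows a pattern like the sketch below. The checkpoint name and manifest paths are placeholders, since the 40-language pretrained model isn’t publicly named in this post:

```python
from omegaconf import OmegaConf
import pytorch_lightning as pl
import nemo.collections.asr as nemo_asr

# Placeholder checkpoint: any Conformer-CTC model from NGC could stand in here.
model = nemo_asr.models.EncDecCTCModelBPE.from_pretrained(
    model_name="stt_hi_conformer_ctc_medium")

# Point the model at the cleaned Telugu manifests (paths are placeholders).
model.setup_training_data(OmegaConf.create({
    "manifest_filepath": "telugu_train_manifest.json",
    "sample_rate": 16000,
    "batch_size": 32,
}))
model.setup_validation_data(OmegaConf.create({
    "manifest_filepath": "telugu_dev_manifest.json",
    "sample_rate": 16000,
    "batch_size": 32,
}))

# NeMo models are PyTorch Lightning modules, so multi-GPU scaling is a flag.
trainer = pl.Trainer(accelerator="gpu", devices=-1, max_epochs=160)
trainer.fit(model)
```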

Inference test results were then shared with the competition organizers. NVIDIA won with word error rates around 2% lower than the second-place participant’s. This is a huge margin for speech AI, according to Koluguri.

“The impact of ASR model development is very high, especially for low-resource languages,” he added. “If a company comes forward and sets a baseline model, as we did for this competition, people can build on top of it with the NeMo toolkit to make transcription, translation and other ASR applications more accessible for languages where speech AI is not yet prevalent.”

NVIDIA Expands Speech AI for Low-Resource Languages

“ASR is gaining a lot of momentum in India majorly because it will allow digital platforms to onboard and engage with billions of citizens through voice-assistance services,” Makwana said.

And the process for building the Telugu model, as outlined above, is a technique that can be replicated for any language.

Of around 7,000 world languages, 90% are considered to be low resource for speech AI — representing 3 billion speakers. This doesn’t include dialects, pidgins and accents.

Open sourcing all of its models on the NeMo toolkit is one way NVIDIA is improving linguistic inclusion in the field of speech AI.

In addition, pretrained models for speech AI, as part of the NVIDIA Riva software development kit, are now available in 10 languages — with many additions planned for the future.

And NVIDIA this month hosted its inaugural Speech AI Summit, featuring speakers from Google, Meta, Mozilla Common Voice and more. Learn more about “Unlocking Speech AI Technology for Global Language Users” by watching the presentation on demand.

Get started building and training state-of-the-art conversational AI models with NVIDIA NeMo.

Meet the Omnivore: Cloud Architect Takes Infrastructure Visualization to New Heights With NVIDIA Omniverse

Editor’s note: This post is a part of our Meet the Omnivore series, which features individual creators and developers who use NVIDIA Omniverse to accelerate their 3D workflows and create virtual worlds.

As a Microsoft Certified Azure cloud specialist and DevOps automation engineer, Gavin Stevens is deeply in tune with cloud architect workflows.

He noticed an opportunity to help cloud architects better visualize their infrastructure — the combination of hardware and software necessary for cloud computing — by creating a 3D layout of it.

So Stevens set out to enable this by building an extension for NVIDIA Omniverse — a platform for connecting and building custom 3D pipelines and metaverse applications.

Dubbed Meta Cloud Explorer, the open-source extension generates digital 3D models of engineers’ cloud infrastructure components at scale, based on contextual metadata from their Azure cloud portals.

The visualization can then be organized by group, location, subscription and resource type. It also displays infrastructure layouts and costs on various planes. This can help cloud architects gain insights to optimize resources, reduce costs and improve customer experiences.

“There’s no shortage of ‘infrastructure diagram generation’ tools that can produce 2D representations of your cloud infrastructure,” Stevens said. “But most of these tools present a tightly focused exploration context, where it’s difficult to see your infrastructure at scale.”

Meta Cloud Explorer, instead, displays 3D representations that can be rearranged at scale. It’s one of the winning submissions for the inaugural #ExtendOmniverse contest, where developers were invited to create their own Omniverse extension for a chance to win an NVIDIA RTX GPU.

Omniverse extensions are core building blocks that let anyone create and extend functions of Omniverse apps using the popular Python and C++ programming languages.
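
For reference, a bare-bones Python extension follows the pattern below. This is a generic Kit extension skeleton, not Meta Cloud Explorer’s source, and the window contents are invented for illustration:

```python
import omni.ext
import omni.ui as ui


class CloudExplorerSketch(omni.ext.IExt):
    """Minimal Kit extension: a window with a label and a button."""

    def on_startup(self, ext_id):
        self._window = ui.Window("Cloud Explorer Sketch", width=300, height=120)
        with self._window.frame:
            with ui.VStack():
                self._label = ui.Label("Resources loaded: 0")
                ui.Button("Refresh", clicked_fn=self._on_refresh)

    def _on_refresh(self):
        # A real extension would query cloud metadata here (e.g., Azure APIs).
        self._label.text = "Resources loaded: 42"

    def on_shutdown(self):
        self._window = None
```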

Building Custom Workflow Tools

Stevens, who’s based in Scottsdale, Arizona, learned how to build the Omniverse extension in just a few months by attending community livestreams, learning Python and prototyping user interfaces based on sample resources.

He first transformed Microsoft Azure’s open-source 2D icons — representing storage accounts, web apps, databases and more — into 3D assets using Blender software. He easily brought these into Omniverse with Universal Scene Description (USD), an open-source, extensible file framework that serves as the common language for building virtual worlds and the metaverse.

Stevens then composed a 3D layout, arranging and visualizing the infrastructure services based on data such as location, type and cost by implementing a custom packing and layout algorithm. He also created a user interface directly in the scene to display details such as a cluster’s total cost or a service’s status.

“Omniverse takes care of the rendering and helps developers work at a higher level to easily visualize things in a 3D space,” Stevens said. “And USD makes it seamless to reference and position 3D objects within scenes.”
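
The positioning step can be approximated in a few lines of USD Python. The toy grid layout below stands in for Stevens’ packing algorithm, and the asset paths are hypothetical:

```python
from pxr import Usd, UsdGeom, Gf

def layout_grid(stage, asset_paths, spacing=200.0, per_row=8):
    """Reference each asset into the stage and place it on a simple grid."""
    for i, asset in enumerate(asset_paths):
        xform = UsdGeom.Xform.Define(stage, f"/World/Resource_{i}")
        xform.GetPrim().GetReferences().AddReference(asset)
        row, col = divmod(i, per_row)
        UsdGeom.XformCommonAPI(xform).SetTranslate(
            Gf.Vec3d(col * spacing, 0.0, row * spacing))

stage = Usd.Stage.CreateNew("cloud_layout.usda")
UsdGeom.Xform.Define(stage, "/World")
layout_grid(stage, [f"icons/resource_{i}.usd" for i in range(16)])
stage.GetRootLayer().Save()
```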

Dive deeper into Stevens’ workflow by watching this video:

Stevens is now planning to expand Meta Cloud Explorer’s capabilities to build an advanced software-as-a-service that enables users to create infrastructure from template libraries, learn about new architecture techniques and simulate design changes.

Being able to manipulate cloud infrastructure layouts in 3D, or even in virtual reality, would open up new possibilities for developers and cloud engineers to realize a customer’s vision, Stevens said.

“I’m not sure how you could even do this without Omniverse,” he added. “Omniverse Kit provides a dynamic, easy-to-use platform for building metaverse applications. And the ability to connect external application programming interfaces and data sources opens up flexibility when using Omniverse.”

Developers like Stevens can enhance their workflows with the recent Omniverse beta release, which includes major updates to core reference applications and tools for developers, creators and novices looking to build metaverse applications.

Join In on the Creation

Creators and developers across the world can download NVIDIA Omniverse for free, and enterprise teams can use the platform for their 3D projects.

Discover how to build an Omniverse extension in less than 10 minutes.

To find out how to accelerate cloud workflows, join NVIDIA at AWS re:Invent, running through Friday, Dec. 2.

For a deeper dive into developing on Omniverse, watch the on-demand NVIDIA GTC session, “How to Build Extensions and Apps for Virtual Worlds With NVIDIA Omniverse.”

Find additional documentation and tutorials in the Omniverse Resource Center, which details how developers can build custom USD-based applications and extensions for the platform.

To discover more free tools, training and a community for developers, join the NVIDIA Developer Program.

Follow NVIDIA Omniverse on Instagram, Medium, Twitter and YouTube for additional resources and inspiration. Check out the Omniverse forums, and join our Discord server and Twitch channel to chat with the community.

Cheers to AI: Monarch Tractor Launches First Commercially Available Electric, ‘Driver Optional’ Smart Tractor

Livermore, Calif., renowned for research and vineyards, is plowing in a new distinction: the birthplace of the first commercially available smart tractor.

Local startup Monarch Tractor has announced that the first of six Founder Series MK-V tractors are rolling off the production line at its headquarters. Constellation Brands, a leading wine and spirits producer and beer importer, will be the first customer to receive keys at a launch event today.

The debut caps a two-year development sprint since Monarch, founded in 2018, hatched plans to deliver its smart tractor, complete with the energy-efficient NVIDIA Jetson edge AI platform. The tractor combines electrification, automation, and data analysis to help farmers reduce their carbon footprint, improve field safety, streamline farming operations, and increase their bottom lines.

The MK-V tractor cuts energy costs and diesel emissions, while also helping reduce harmful herbicides, which are expensive and deplete the soil.

“With precision ag, autonomy and AI, data will decrease the volume of chemicals used, which is good for the soil, good for the farmer from a profitability standpoint, and good for the consumer,” said Praveen Penmetsa, CEO of Monarch Tractor.

The delivery of MK-V tractors to Constellation Brands will be followed with additional tractor shipments to family farms and large corporate customers, according to the company.

Monarch is a member of the NVIDIA Inception program, which provides startups with technology support and AI platforms guidance.

Leading Farming AI Wave of Clean Tractors

Monarch Tractor’s founders include veterans of Silicon Valley’s electric vehicle scene who worked together at the startup Zoox, now owned by Amazon. Carlo Mondavi, of the Napa Valley Mondavi winery family, is a sustainability-focused vintner and chief farming officer. Mark Schwager, former Tesla Gigafactory chief, is president; Zachary Omohundro, a robotics Ph.D. from Carnegie Mellon, is CTO; and Penmetsa is an autonomy and mobility engineer.

“The marriage of NVIDIA accelerated computing with Jetson edge AI on our Monarch MK-V has helped our customers reduce the use of unneeded herbicides with our cutting-edge, zero-emission tractor – this revolutionary technology is helping our planet’s soil, waterways and biodiversity,” said Carlo Mondavi.

Penmetsa likens the revolutionary new tractor to paradigm shifts in PCs and smartphones, enablers of world-changing applications. Monarch’s role, he said, is as the hub to enable smart implements — precision sprayers, harvesters and more — for computer vision applications to help automate farming.

In 2021, Monarch launched pilot test models for commercial use at Wente Vineyards, also based in Livermore. The trial compared the tractor’s energy usage with that of a diesel tractor, finding that Monarch saved more than $2,600 in annual expenses.

Monarch has raised more than $110 million in funding. Strategic investors include Japanese auto parts maker Musashi Seimitsu Industry Co.; agricultural equipment maker CNH Industrial; and VST Tillers Tractors, an India-based maker and dealer of tractors and implements.

It recently signed a contract manufacturing agreement with Hon Hai Technology Group (Foxconn) to build the MK-V and its battery packs at Foxconn’s plant in Mahoning Valley, Ohio.

As a wave of AI sweeps farming, developers are working to support more sustainable farming practices.

The NVIDIA Jetson platform provides energy-efficient computing to the MK-V, which offers advances in battery performance.

What a Jetson-Supported Monarch Founder Series MK-V Can Do

Tapping into six NVIDIA Jetson Xavier NX SOMs (systems on modules), Monarch’s Founder Series MK-V tractors are essentially roving robots packing supercomputing performance.

Monarch has harnessed Jetson to deliver tractors that can safely traverse rows within agriculture fields using only cameras. “This is important in certain agriculture environments because there may be no GPS signal,” said Penmetsa. “It’s also crucial for safety as the Monarch is intended for totally driverless operation.”

The Founder Series MK-V runs two 3D cameras and six standard cameras. With the six Jetson edge AI modules on board, it can run models for multiple farming tasks when paired with different implements.

To support more sustainable farming practices, computer vision applications for the Monarch platform can be fine-tuned with transfer learning to develop precision spraying and other capabilities.

Delivering Farm-Apps-as-a-Service 

Monarch offers a core of main applications to assist farms with AI, available in a software-as-a-service model on its platform.

The Founder Series MK-V has some basic functions on its platform as well, such as sending alerts when the battery is low or an unidentified object is obstructing its path. It will also stop spraying if its camera-based vision platform identifies a human.
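
That spray-gating behavior amounts to a safety interlock wrapped around the perception stack. The sketch below is hypothetical (Monarch’s software and interfaces are proprietary); the detector and sprayer objects are stand-ins for illustration:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str
    confidence: float

class MockDetector:
    """Stand-in for the tractor's camera-based perception models."""
    def detect(self):
        return [Detection("vine_row", 0.98)]  # no person in this frame

class MockSprayer:
    active = True
    def stop(self):
        self.active = False
        print("Spraying halted: human detected in path")

def spray_with_interlock(detector, sprayer, frames=100, threshold=0.5):
    """Check every camera frame; stop the implement if a person is seen."""
    for _ in range(frames):
        if not sprayer.active:
            break
        if any(d.label == "person" and d.confidence >= threshold
               for d in detector.detect()):
            sprayer.stop()  # fail safe; resume only after the area is clear

spray_with_interlock(MockDetector(), MockSprayer())
```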

The tractor collects and analyzes crop data daily and can process data from current and next-generation implements equipped with sensors and imaging. This data can be used for real-time implement adjustments, long-term yield estimates, current growth stages and other plant and crop health metrics.

Wider availability of the tractor begins a new chapter in improved farming practices.

Learn more about NVIDIA Isaac platform for robotics and apply to join NVIDIA Inception.

GFN Thursday Dashes Into December With 22 New Games, Including ‘Marvel Midnight Suns’ Streaming Soon

It’s a new month, which means GeForce NOW’s got the list of 22 new games arriving in December.

Rise up for Marvel’s Midnight Suns, from publisher 2K Games, streaming on GeForce NOW later this month.

Then get ready to move out, members. Battlefield 2042 is the latest game from the Electronic Arts catalog streaming on GeForce NOW. It arrives just in time for the free access weekend, running Dec. 1-4, and comes with a members-only reward.

These games lead the charge among the six additions streaming this week.

Time to Assemble 

From the creators of XCOM, and published by 2K Games, Marvel’s Midnight Suns is a tactical role-playing game set in the darker, supernatural side of the Marvel Universe. It launches on Steam on Friday, Dec. 2, with GeForce NOW members getting into the action later this month.

Marvel’s Midnight Suns: A new Sun must rise.

Play as “The Hunter,” a legendary demon slayer with a mysterious past and the first-ever customizable superhero in the Marvel Universe. Put together a team of legendary Marvel heroes, including Scarlet Witch, Spider-Man, Wolverine, Blade and Captain America.

These heroes must fight together to stop the Mother of Demons from completing an ancient prophecy. In revolutionary card-based tactical battles, players can use ability cards on enemies, themselves or the environment.

Stay tuned for updates on the game’s release on GeForce NOW.

Step Foot Onto the Battlefield

Prepare to charge into Battlefield 2042, the first-person shooter that marks a return to the iconic all-out warfare of the widely popular Electronic Arts franchise. Available today alongside its latest update, “Season 3: Escalation,” it’s the 19th EA title to join GeForce NOW.

Adapt and overcome in a near-future world transformed by disorder. Choose your role on the battlefield with class specialists and form a squad to bring a cutting-edge arsenal into dynamically changing battlegrounds of unprecedented scale and epic destruction.

With RTX ON, EA and DICE introduced ray-traced ambient occlusion in Battlefield 2042. This accurately adds shadows where game elements occlude light, whether between a soldier and a wall, a tank and the tarmac, or foliage and the ground. Members can use NVIDIA DLSS to get the definitive PC experience, with maxed-out graphics, high frame rates and uncompromised image quality.

The game comes with a special reward for GeForce NOW members. To opt in and receive rewards, log in to your NVIDIA account and select “GEFORCE NOW” from the header, then scroll down to “REWARDS” and click the “UPDATE REWARDS SETTINGS” button. Check the box in the dialogue window that shows up to start receiving special offers and in-game goodies.

Experience the action across compatible devices and take gaming to the max with all the perks of an RTX 3080 membership, like 4K resolution, RTX ON and maximized gaming sessions.

The More the Merrier

The Knight Witch: Cast devastating card-based spells, forge close bonds and make moral choices — all in a quest to save your home.

Members can look for the following six games available to play this week:

  • The Knight Witch (New release on Steam, Nov. 29)
  • Warhammer 40,000: Darktide (New Release on Steam, Nov. 30)
  • Fort Triumph (Free on Epic Games Store, Dec. 1-8)
  • Battlefield 2042 (Steam and Origin)
  • Alien Swarm: Reactive Drop (Steam)
  • Stormworks: Build and Rescue (Steam)

Then it’s time to unwrap the rest of the list of 22 games coming this month:

  • Marvel’s Midnight Suns (New release on Steam, coming soon)
  • Art of the Rail (New release on Steam, Dec. 4)
  • Swordship (New release on Steam, Dec. 5)
  • Knights of Honor II: Sovereign (New release on Steam, Dec. 6)
  • Chained Echoes (New release on Steam, Dec. 7)
  • IXION (New release on Steam, Dec. 7)
  • Togges (New release on Steam, Dec. 7)
  • SAMURAI MAIDEN (New release on Steam, Dec. 8)
  • Wavetale (New release on Steam, Dec. 12)
  • Master of Magic (New release on Steam, Dec. 13)
  • BRAWLHALLA (Ubisoft Connect)
  • Carrier Command 2  (Steam)
  • Cosmoteer: Starship Architect & Commander (Steam)
  • Dakar Desert Rally (Epic Game Store)
  • Dinkum (Steam)
  • Floodland (Steam)
  • Project Hospital (Steam)

Nothing Left Behind From November

On top of the 26 games announced in November, members can play the extra 10 games that were added to GeForce NOW last month.

And good things come in small packages — for the perfect stocking stuffer or last-minute gift, look no further than GeForce NOW. Physical or digital gift cards are always available, and tomorrow is the last day to get in on the “Green Thursday Black Friday” deal.

Before you start off a super weekend of gaming, there’s only one choice left to make. Let us know your pick on Twitter or in the comments below.

Qubit Pharmaceuticals Accelerates Drug Discovery With Hybrid Quantum Computing

The promise of quantum computing is to solve unsolvable problems. And companies are already making headway with hybrid approaches — those that combine classical and quantum computing — to tackle challenges like drug discovery for incurable diseases.

By accelerating drug molecule simulation and modeling with hybrid quantum computing, startup Qubit Pharmaceuticals is significantly reducing the time and investment needed to identify promising treatments in oncology, inflammatory diseases and antivirals.

Qubit is building a drug discovery platform using the NVIDIA QODA programming model for hybrid quantum-classical computers and the startup’s Atlas software suite. Atlas creates detailed simulations of physical molecules, accelerating calculations by a factor of 100,000 compared to traditional research methods.

Founded in 2020, the Paris and Boston-based company is a member of NVIDIA Inception, a program that offers go-to-market support, expertise and technology for cutting-edge startups.

Qubit has one of France’s largest GPU supercomputers for drug discovery, powered by NVIDIA DGX systems. The startup aims for pharmaceutical companies to begin testing their first drug candidates discovered through its GPU-accelerated research next year.

“By combining NVIDIA’s computational power and leading-edge software with Qubit’s simulation and molecular modeling capabilities, we are confident in our ability to dramatically reduce drug discovery time and cut its cost by a factor of 10,” said Robert Marino, president of Qubit Pharmaceuticals. “This unique collaboration should enable us to develop the first quantum physics algorithms applied to drug discovery.”

Tapping Unprecedented Computational Capabilities 

Computational drug discovery involves generating high-resolution simulations of potential drug molecules and predicting how well those molecules might bind to a target protein in the body.

For accurate results, researchers need to perform massive sampling, simulating hundreds of different conformations — possible spatial arrangements of a molecule’s atoms. They must also correctly model molecules’ force fields, the electric charges that predict affinity, or how a molecule will bind to another.
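
As a small illustration of what a force-field evaluation involves, here is a toy Lennard-Jones energy term in NumPy. The parameters are generic argon-like defaults, not Qubit’s force fields, which also include electrostatics and many other terms:

```python
import numpy as np

def lennard_jones_energy(coords, epsilon=0.238, sigma=3.4):
    """Total pairwise Lennard-Jones energy (kcal/mol, angstroms): one of the
    nonbonded terms a molecular force field evaluates for every conformation."""
    n = len(coords)
    energy = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            r = np.linalg.norm(coords[i] - coords[j])
            sr6 = (sigma / r) ** 6
            energy += 4.0 * epsilon * (sr6 * sr6 - sr6)
    return energy

atoms = np.random.default_rng(0).uniform(0.0, 10.0, size=(20, 3))
print(f"LJ energy: {lennard_jones_energy(atoms):.3f} kcal/mol")
```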

All this simulation and modeling requires high-performance computing, so Qubit built an in-house supercomputer with NVIDIA DGX systems and other NVIDIA-accelerated servers, totaling 200 NVIDIA Tensor Core GPUs. The supercomputer runs Qubit’s Atlas software, performing in just a few hours calculations that would take several years with conventional methods.

Atlas models quantum physics at the microscopic level to achieve maximum accuracy. The Qubit team is adopting NVIDIA QODA to explore the hybrid use of GPU-accelerated supercomputers and quantum computers, where QPUs, or quantum processing units, could one day speed up key software kernels for molecular modeling.

Using the NVIDIA cuQuantum SDK, Qubit’s developers can simulate quantum circuits, allowing the team to design algorithms ready to run on future quantum computers.
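
For intuition, the snippet below simulates a tiny two-qubit circuit with a plain NumPy state vector. This is the kind of math cuQuantum accelerates at far larger scale; the real SDK’s API looks nothing like this toy version:

```python
import numpy as np

# Conceptual state-vector simulation of a two-qubit Bell circuit.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)          # Hadamard gate
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)          # control = first qubit

state = np.zeros(4)
state[0] = 1.0                         # start in |00>
state = np.kron(H, np.eye(2)) @ state  # Hadamard on the first qubit
state = CNOT @ state                   # entangle the pair
print(np.round(state, 3))              # [0.707 0. 0. 0.707], a Bell state
```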

AI for Every Stage of Drug Discovery

Bringing a single drug to market with conventional research methods requires synthesizing an average of 5,000 drug compounds before preclinical testing, Qubit estimates. A simulation-based drug discovery approach could reduce that figure to about 200 — saving hundreds of millions of dollars and years of development time.

The company’s Atlas software includes AI algorithms for every stage of the drug discovery cycle. To support target characterization, where researchers analyze a protein that plays a role in disease, Atlas supports molecular dynamics simulations at microsecond timescales — helping scientists identify new pockets for drug molecules to bind with the protein.

During drug candidate screening and validation, researchers can use AI models that help narrow the field of potential molecules and generate novel compounds. Qubit is also developing additional filters that predict a candidate molecule’s druggability, safety and cross-reactivity.

Learn more about Qubit’s HPC and quantum-accelerated molecular dynamics software from company co-founders Jean-Philip Piquemal and Louis Lagardère through NVIDIA On-Demand.

Main image courtesy of Qubit Pharmaceuticals.

Siemens Taps Omniverse Replicator on AWS for Synthetic Data Generation to Accelerate Defect Detection Model Development by 5X

Industrial leader Siemens is accelerating development of defect detection models with 3D synthetic data generation from NVIDIA Omniverse, the latest manufacturing gains to emerge from an extended partnership for the industrial metaverse that aims to advance digital twins.

The Siemens Xcelerator and NVIDIA Omniverse platforms are building connections to enable full-design-fidelity, live digital twins that connect software-defined AI systems from edge to cloud.

Europe’s largest industrial manufacturer manages a lot of moving parts, so AI-driven defect detection promises to boost quality assurance and yield at massive scale.

But building AI models requires hefty amounts of data, and producing labeled datasets for training models to detect defects is a time-consuming and expensive process. In most cases, such data may not cover all the types of defects or their locations.

As a result, Siemens has begun tapping into NVIDIA Omniverse Replicator running on Amazon G5 instances for synthetic data generation, accelerating its AI model development times from months to days, according to the company.

Synthetic data is turbocharging model development, boosting datasets for everything from robotic arm work at German company Festo to efforts at Amazon Robotics to train package-identifying robots.

At Siemens, synthetic data generation is being used beyond defect detection to assist in areas including, but not limited to, robotic bin picking, safety monitoring, welding and wiring inspections, and checking kits of parts.

“The better the synthetic data you have, the less real data you need — obtaining real data is a hassle, so you want to reduce that as much as possible without sacrificing accuracy,” said Alex Greenberg, director of advanced robotics simulation at Siemens Digital Industries Software.

Inspecting Motion Control Devices

The Siemens Motion Control Business Unit produces inverters, drive controllers and motors for more than 30,000 customers worldwide. The lead electronics plant, GWE, based in Erlangen, Germany, has been working on AI-enabled computer vision for defect detection using custom methods and different modes of synthetic data generation.

Common synthetic data generation methods, however, weren’t sufficient for production-ready robustness in some use cases, leading to a need for real data acquisition and labeling, which could take months.

GWE worked with the Siemens Digital Industries Software division to find a better way to produce datasets.

“For many industrial use cases, products are changing rapidly. Materials are changing rapidly. It needs to be automated in a fast way and without a lot of know-how from the endpoint engineer,” said Zac Mann, advanced robotics simulation lead at Siemens Digital Industries Software.

Catching Printed Circuit Board Defects

The challenge at GWE is to catch defects early in the ramp-up of new products and production lines. Waiting for real errors to happen just to enhance the training datasets is not an option.

One area of focus is defects in the thermal paste that’s applied to some components on a printed circuit board (PCB) to help transfer heat quickly from those components to the attached heatsink.

To catch PCB defects, the Siemens Digital Industries Software team took another approach by relying on synthetic data driven by Omniverse Replicator.

With Omniverse, a platform for building custom 3D pipelines and simulating virtual worlds, Siemens can easily generate scenarios and far more realistic images, aided by RTX-enabled physics-based rendering and materials.

This enables Siemens to move more quickly and smoothly in closing the gap between simulation and reality, said Mann.

“Using Omniverse Replicator and Siemens SynthAI technology, we can procedurally generate sets of photorealistic images using the digital models of our products and production resources and an integrated training pipeline to train ready-to-use models. This speeds up our set-up time for AI inspection models by a factor of five and increases their robustness massively,” said Maximilian Metzner, global lead for autonomous manufacturing systems for electronics at GWE.

Tapping Into Randomization With SynthAI

GWE engineers can now take a 3D CAD model of the PCB and import that into Siemens’ SynthAI tool. SynthAI is designed to build data sets for training AI models.

Tapping into Replicator, SynthAI can access its powerful randomization features to vary the sizes and locations of defects, change lighting, color, texture and more to develop a robust dataset.

Once data is generated with Replicator, it can be run through a defect detection model for initial training. This enables GWE engineers to quickly test and iterate on models, requiring only a small set of data to begin.
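
A short Replicator script gives a feel for those randomization APIs. This sketch, which runs inside Omniverse with the Replicator extension enabled, varies dome lighting over 500 rendered frames and writes images plus bounding-box labels; the asset path and parameters are placeholders, not Siemens’ SynthAI pipeline:

```python
import omni.replicator.core as rep

with rep.new_layer():
    board = rep.create.from_usd("omniverse://localhost/Library/pcb_board.usd")
    camera = rep.create.camera(position=(0, 40, 0), look_at=board)
    render_product = rep.create.render_product(camera, (1024, 1024))

    def randomize_lighting():
        light = rep.create.light(
            light_type="dome",
            intensity=rep.distribution.uniform(400, 2000),
            color=rep.distribution.uniform((0.8, 0.8, 0.8), (1.0, 1.0, 1.0)),
        )
        return light.node

    rep.randomizer.register(randomize_lighting)

    # Re-randomize the scene before each of 500 captured frames.
    with rep.trigger.on_frame(num_frames=500):
        rep.randomizer.randomize_lighting()

    # Write RGB images plus 2D bounding-box labels for detector training.
    writer = rep.WriterRegistry.get("BasicWriter")
    writer.initialize(output_dir="_out_pcb", rgb=True,
                      bounding_box_2d_tight=True)
    writer.attach([render_product])
```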

“This gives you visibility earlier into the design phase, and it can shorten time to market, which is very important,” said Greenberg.

Get started using NVIDIA Omniverse Replicator.

3D Artist and Educator Hsin-Chien Huang Takes VR to the World Stage This Week ‘In the NVIDIA Studio’

3D artist, virtual reality expert, storyteller and educator Hsin-Chien Huang shares his unique creator journey and award-winning artwork Samsara this week In the NVIDIA Studio.

A Journey Unlike Any Other

Huang is a distinguished professor in the Department of Design at National Taiwan Normal University.

His creative journey included overcoming a number of obstacles, starting at age 4, when he lost sight in his right eye. His eyesight was impaired for over a decade before he regained it thanks to a Sri Lankan cornea donor.

This singular event proved inspirational, cementing virtual reality as his primary creative field, as it allows him to share with others the world as he uniquely sees it.

When he was getting his driver’s license, Huang registered as an organ donor, imagining that the cornea in his right eye would continue its journey with whoever receives it after his death.

Deep in the journey of ‘Samsara.’

In Samsara, one’s consciousness can travel within different individuals and animals.

Color, materials, sound and music are critical story elements that drive the narratives of his artwork, Huang said.

How did we get here?

“Discussing with musicians about adding sound to works always brings me new ideas to revise stories,” he said. “Elements may influence one another, and the process is like an upward spiral where each element develops and fosters each other simultaneously, slowly shaping the story.”

Cool, But How?

Working in VR can often be a nonlinear experience. Huang spends considerable time prototyping and iterating ideas to ensure that they’re feasible and can be shared.

He and his team program and create multiple 3D animations and interactions. This helps them examine whether a work conveys the intended concept and evokes the emotions they hoped for.

Parametric modeling allows for faster, wide-scale edits.

The team makes use of various parametric modeling tools in Autodesk Maya, Houdini, iClone and Unity. The key with parametric 3D geometry is that shapes update automatically when parameters such as dimensions or curvatures are modified — removing the need to reshape the model from scratch.

This saves artists lots of time — especially in the conceptual stage — and is critical to the team’s workflow, Huang said.

“We use Unity for integration and interaction, and Xsens and Vicon for motion capture,” he said. Unity’s light baking and Autodesk Maya’s Arnold renderer both require powerful GPUs, and his GeForce RTX 3070 GPU was equal to the task.

The team’s photogrammetry software, RealityCapture, also benefits greatly from NVIDIA CUDA acceleration.

Textures applied in Unity.

“Nowadays, a powerful GeForce RTX GPU is an indispensable tool for digital artists.” — Hsin-Chien Huang 

“Although the resolutions of these scanned models are low, it has the aesthetic of pixel art,” Huang said. He processed these models in Unity to give them a unique digital style. NVIDIA DLSS technology powered by his GeForce RTX GPU increases the interactivity of the viewport by using AI to upscale frames rendered at lower resolution while still retaining high-fidelity detail.

When it comes to creating textures, Huang recommends Adobe Substance 3D Painter, which can rapidly create quality, realistic textures for prototyping. RTX-accelerated light and ambient occlusion baking optimize his assets in mere seconds.

Photorealistic details made even more realistic with Topaz Labs Gigapixel AI.

Huang also uses Topaz Labs Gigapixel AI, which uses deep learning to improve photo quality. Yet again, his RTX GPU accelerates the AI, sharpening images while retaining high-fidelity detail.

Huang is grateful for advancements in technology and their impact on creative possibilities.

“Nowadays, a powerful GeForce RTX GPU is an indispensable tool for digital artists,” he said.

Huang’s increasing popularity and extraordinary talent led him to Hollywood. In 2018, Huang performed a VR demo on hit TV show America’s Got Talent, which left an enormous impression on the judges and audience.

It was the first real-time motion capture and VR experience to be presented on a live stage. Huang said the pressure was intense during the performance, as it was a live show and no mistakes could be tolerated.

“I could still sense the thrill and excitement on stage,” he recalled.

VR expert, storyteller and educator Hsin-Chien Huang.

Check out more of Huang’s artwork on his website.

Carry on, Carry on #WinterArtChallenge

Enter NVIDIA Studio’s #WinterArtChallenge, running through the end of the year, by sharing winter-themed art on Instagram, Twitter or Facebook for a chance to be featured on our social media channels.

Like @RippaSats and his fun celebration of penguins.

Be sure to tag #WinterArtChallenge to join.

Get creativity-inspiring updates directly to your inbox by subscribing to the NVIDIA Studio newsletter.

NVIDIA Wins NeurIPS Awards for Research on Generative AI, Generalist AI Agents

Two NVIDIA Research papers — one exploring diffusion-based generative AI models and another on training generalist AI agents — have been honored with NeurIPS 2022 Awards for their contributions to the field of AI and machine learning.

These are among more than 60 talks, posters and workshops with NVIDIA authors being presented at the NeurIPS conference, taking place this week in New Orleans and next week online.

Synthetic data generation — for images, text or video — is a key theme across several of the NVIDIA-authored papers. Other topics include reinforcement learning, data collection and augmentation, weather models and federated learning.

“AI is an incredibly important technology, and NVIDIA is making fast progress across the gamut — from generative AI to autonomous AI agents,” said Jan Kautz, vice president of learning and perception research at NVIDIA. “In generative AI, we are not only advancing our theoretical understanding of the underlying models, but are also making practical contributions that will reduce the effort of creating realistic virtual worlds and simulations.”

Reimagining the Design of Diffusion-Based Generative Models 

Diffusion-based models have emerged as a groundbreaking technique for generative AI. NVIDIA researchers won an Outstanding Main Track Paper award for work that analyzes the design of diffusion models, proposing improvements that can dramatically improve the efficiency and quality of these models.

The paper breaks down the components of a diffusion model into a modular design, helping developers identify processes that can be adjusted to improve the performance of the entire model. The researchers show that their modifications enable record scores on a metric that assesses the quality of AI-generated images.
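
To make that modular view concrete, here is a generic sketch of a diffusion sampler in which the trained network enters only through a denoise(x, sigma) call and the noise schedule is an independent, swappable choice. It illustrates the design space the paper analyzes, not the authors’ exact algorithm:

```python
import torch

@torch.no_grad()
def sample(denoise, shape, sigmas):
    """Integrate from high noise to (near) zero with simple Euler steps."""
    x = torch.randn(shape) * sigmas[0]        # start from pure noise
    for sigma, sigma_next in zip(sigmas[:-1], sigmas[1:]):
        d = (x - denoise(x, sigma)) / sigma   # probability-flow ODE slope
        x = x + (sigma_next - sigma) * d      # one Euler step
    return x

# Each piece is swappable: this schedule is one design choice among many.
sigmas = torch.cat([torch.logspace(1.9, -2, 50), torch.zeros(1)])
dummy_denoiser = lambda x, s: x / (1 + s**2)  # stand-in for a trained network
image = sample(dummy_denoiser, (1, 3, 64, 64), sigmas)
```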

Training Generalist AI Agents in a Minecraft-Based Simulation Suite

While researchers have long trained autonomous AI agents in game environments such as StarCraft, Dota and Go, these agents are usually specialists in only a few tasks. So NVIDIA researchers turned to Minecraft, the world’s most popular game, to develop a scalable training framework for a generalist agent — one that can successfully execute a wide variety of open-ended tasks.

Dubbed MineDojo, the framework enables an AI agent to learn Minecraft’s flexible gameplay using a massive online database of more than 7,000 wiki pages, millions of Reddit threads and 300,000 hours of recorded gameplay (shown in image at top). The project won an Outstanding Datasets and Benchmarks Paper Award from the NeurIPS committee.

As a proof of concept, the researchers behind MineDojo created a large-scale foundation model, called MineCLIP, that learned to associate YouTube footage of Minecraft gameplay with the video’s transcript, in which the player typically narrates the onscreen action. Using MineCLIP, the team was able to train a reinforcement learning agent capable of performing several tasks in Minecraft without human intervention.

Creating Complex 3D Shapes to Populate Virtual Worlds

Also at NeurIPS is GET3D, a generative AI model that instantly synthesizes 3D shapes based on the category of 2D images it’s trained on, such as buildings, cars or animals. The AI-generated objects have high-fidelity textures and complex geometric details — and are created in a triangle mesh format used in popular graphics software applications. This makes it easy for users to import the shapes into 3D renderers and game engines for further editing.

3D objects generated by GET3D

Named for its ability to Generate Explicit Textured 3D meshes, GET3D was trained on NVIDIA A100 Tensor Core GPUs using around 1 million 2D images of 3D shapes captured from different camera angles. The model can generate around 20 objects a second when running inference on a single NVIDIA GPU.

The AI-generated objects could be used to populate 3D representations of buildings, outdoor spaces or entire cities — digital spaces designed for industries such as gaming, robotics, architecture and social media.

Improving Inverse Rendering Pipelines With Control Over Materials, Lighting

At the most recent CVPR conference, held in New Orleans in June, NVIDIA Research introduced 3D MoMa, an inverse rendering method that enables developers to create 3D objects composed of three distinct parts: a 3D mesh model, materials overlaid on the model, and lighting.

The team has since achieved significant advancements in untangling materials and lighting from the 3D objects — which in turn improves creators’ abilities to edit the AI-generated shapes by swapping materials or adjusting lighting as the object moves around a scene.

The work, which relies on a more realistic shading model that leverages NVIDIA RTX GPU-accelerated ray tracing, is being presented as a poster at NeurIPS.

Enhancing Factual Accuracy of Language Models’ Generated Text 

Another accepted paper at NeurIPS examines a key challenge with pretrained language models: the factual accuracy of AI-generated text.

Language models trained for open-ended text generation often come up with text that includes nonfactual information, since the AI is simply making correlations between words to predict what comes next in a sentence. In the paper, NVIDIA researchers propose techniques to address this limitation, which is necessary before such models can be deployed for real-world applications.

The researchers built the first automatic benchmark to measure the factual accuracy of language models for open-ended text generation, and found that bigger language models with billions of parameters were more factual than smaller ones. The team proposed a new technique, factuality-enhanced training, along with a novel sampling algorithm that together help train language models to generate accurate text — and demonstrated a reduction in the rate of factual errors from 33% to around 15%. 

There are more than 300 NVIDIA researchers around the globe, with teams focused on topics including AI, computer graphics, computer vision, self-driving cars and robotics. Learn more about NVIDIA Research and view NVIDIA’s full list of accepted papers at NeurIPS.

MAP Once, Run Anywhere: MONAI Introduces Framework for Deploying Medical Imaging AI Apps

Delivering AI-accelerated healthcare at scale will take thousands of neural networks working together to cover the breadth of human physiology, diseases and even hospital operations — a significant challenge in today’s smart hospital environment.

MONAI, an NVIDIA-accelerated, open-source medical imaging AI framework with more than 650,000 downloads, is making it easier to integrate these models into clinical workflows with MONAI Application Packages, or MAPs.

Delivered through MONAI Deploy, a MAP is a way of packaging an AI model that makes it easy to deploy in an existing healthcare ecosystem.

“If someone wanted to deploy several AI models in an imaging department to help experts identify a dozen different conditions, or partially automate the creation of medical imaging reports, it would take an untenable amount of time and resources to get the right hardware and software infrastructure for each one,” said Dr. Ryan Moore at Cincinnati Children’s Hospital. “It used to be possible, but not feasible.”

MAPs simplify the process. When a developer packages an app using the MONAI Deploy Application software development kit, hospitals can easily run it on premises or in the cloud. The MAPs specification also integrates with healthcare IT standards such as DICOM for medical imaging interoperability.
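
At its simplest, an app built with the MONAI Deploy App SDK is a Python class whose compose() method chains the pipeline’s operators; the SDK’s packaging tooling then turns that app into a MAP container image. The skeleton below is a hedged sketch, since operator classes and exact signatures vary across SDK versions:

```python
from monai.deploy.core import Application


class SpleenSegApp(Application):
    """Skeleton MONAI Deploy app: compose() wires the pipeline's operators."""

    def compose(self):
        # Typical stages, per the SDK tutorials: load the DICOM study, select
        # the relevant series, convert it to a volume, run a segmentation
        # model, and write DICOM-compliant results. Operators omitted here.
        pass


if __name__ == "__main__":
    SpleenSegApp().run()
```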

“Until now, most AI models would remain in an R&D loop, rarely reaching patient care,” said Jorge Cardoso, chief technology officer at the London Medical Imaging & AI Centre for Value-Based Healthcare. “MONAI Deploy will help break that loop, making impactful clinical AI a more frequent reality.”

MONAI Deploy Adopted by Hospitals, Healthcare Startups

Healthcare institutions, academic medical centers and AI software developers worldwide are adopting MONAI Deploy, including:

  • Cincinnati Children’s Hospital: The academic medical center is creating a MAP for an AI model that automates total cardiac volume segmentation from CT images, aiding pediatric heart transplant patients in a project funded by the National Institutes of Health.
  • National Health Service in England: The NHS Trusts have deployed its MONAI-based AI Deployment Engine platform, known as AIDE, across four hospitals to provide AI-enabled disease-detection tools to healthcare professionals serving 5 million patients a year.
  • Qure.ai: A member of the NVIDIA Inception program for startups, Qure.ai develops medical imaging AI models for use cases including lung cancer, traumatic brain injuries and tuberculosis. The company is using MAPs to package its solutions for deployment, accelerating its time to clinical impact.
  • SimBioSys: The Chicago-based Inception startup builds 3D virtual representations of patients’ tumors and is using MAPs for precision medicine AI applications that can help predict how a patient will respond to a specific treatment.
  • University of California, San Francisco: UCSF is developing MAPs for several AI models, with applications including hip fracture detection, liver and brain tumor segmentation, and knee and breast cancer classification.

Putting Medical Imaging AI on the MAP

The MAP specification was developed by the MONAI Deploy working group, a team of experts from more than a dozen medical imaging institutions, to benefit AI app developers as well as the clinical and infrastructure platforms that run AI apps.

For developers, MAPs can help accelerate AI model evolution by helping researchers easily package and test their models in a clinical environment. This allows them to collect real-world feedback that helps improve the AI.

For cloud service providers, supporting MAPs — which were designed using cloud-native technologies — enables researchers and companies using MONAI Deploy to run AI applications on their platform, either by using containers or with native app integration. Cloud platforms integrating MONAI Deploy and MAPs include:

  • Amazon HealthLake Imaging: The MAP connector has been integrated with the HealthLake Imaging service, allowing clinicians to view, process and segment medical images in real time.
  • Google Cloud: Google Cloud’s Medical Imaging Suite, designed to make healthcare imaging data more accessible, interoperable and useful, has integrated MONAI into its platform to enable clinicians to deploy AI-assisted annotation tools that help automate the highly manual and repetitive task of labeling medical images.
  • Nuance Precision Imaging Network, powered by Microsoft Azure: Nuance and NVIDIA recently announced a partnership bringing together MONAI and the Nuance Precision Imaging Network, a cloud platform that provides more than 12,000 healthcare facilities with access to AI-powered tools and insights.
  • Oracle Cloud Infrastructure: Oracle and NVIDIA recently announced a collaboration to bring accelerated compute solutions for healthcare, including MONAI Deploy, to Oracle Cloud Infrastructure. Developers can start building MAPs with MONAI Deploy today using NVIDIA containers on the Oracle Cloud Marketplace.

Get started with MONAI and discover how NVIDIA is helping build AI-powered medical imaging ecosystems at this week’s RSNA conference.
