Techman Robot Selects NVIDIA Isaac Sim to Optimize Automated Optical Inspection

How do you help robots build better robots? By simulating even more robots.

NVIDIA founder and CEO Jensen Huang today showcased how leading electronics manufacturer Quanta is using AI-enabled robots to inspect the quality of its products.

In his keynote speech at this week’s COMPUTEX trade show in Taipei, Huang showed how electronics manufacturers are digitalizing their state-of-the-art factories.

For example, Quanta subsidiary Techman Robot tapped NVIDIA Isaac Sim — a robotics simulation application built on NVIDIA Omniverse — to develop a custom digital twin application that improves inspection on the Taiwan-based electronics provider’s manufacturing line.

The demo below shows how Techman uses Isaac Sim to optimize the inspection of robots by robots on the manufacturing line. In effect, it’s robots building robots.

Automated optical inspection, or AOI, helps manufacturers more quickly identify defects and deliver high-quality products to their customers around the globe. The NVIDIA Metropolis vision AI framework, now enabled for AOI, is also used to optimize inspection workflows for products ranging from automobiles to circuit boards.

Techman developed AOI with its factory-floor robots by using Isaac Sim to simulate, test and optimize its state-of-the-art collaborative robots, or cobots, while using NVIDIA AI and GPUs for training in the cloud and inference on the robots themselves.

Isaac Sim is built on NVIDIA Omniverse — an open development platform for building and operating industrial metaverse applications.

Unique features of Techman’s robotic AOI solutions include inspection cameras mounted directly on the articulated robotic arms and GPUs integrated into the robot controller.

This allows the bots to inspect areas of products that fixed cameras simply can’t access, as well as use AI at the edge to instantly detect defects.

“The distinctive features of Techman’s robots — compared to other robot brands — lie in their built-in vision system and AI inference engine,” said Scott Huang, chief operations officer at Techman. “NVIDIA RTX GPUs power up their AI performance.”

But programming the movement of these robots can be time-consuming.

A developer has to determine the precise arm positions, as well as the most efficient sequence, to capture potentially hundreds of images as quickly as possible.

This can involve several days of effort, exploring tens of thousands of possibilities to determine an optimal solution.
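The sequencing task is essentially a traveling-salesman-style search: order the camera poses so total arm travel is minimized. The sketch below (plain Python with made-up poses and a simple nearest-neighbor heuristic, not Techman's planner or Isaac Sim's API) shows the idea, and why exhaustive search becomes infeasible beyond a handful of poses.

```python
import math
from itertools import permutations

# Hypothetical inspection poses (x, y, z) in meters -- illustrative only,
# not Techman's actual data.
poses = [(0.3, 0.1, 0.5), (0.1, 0.4, 0.2), (0.5, 0.5, 0.4), (0.2, 0.2, 0.6)]

def dist(a, b):
    return math.dist(a, b)

def tour_length(order):
    return sum(dist(order[i], order[i + 1]) for i in range(len(order) - 1))

def nearest_neighbor(points):
    """Greedy ordering: always move to the closest unvisited pose."""
    remaining = list(points[1:])
    tour = [points[0]]
    while remaining:
        nxt = min(remaining, key=lambda p: dist(tour[-1], p))
        remaining.remove(nxt)
        tour.append(nxt)
    return tour

# Brute force works only for a handful of poses; a real line has hundreds,
# which is why exploring candidate programs in simulation on GPUs helps.
best = min((list(p) for p in permutations(poses)), key=tour_length)
greedy = nearest_neighbor(poses)
print(tour_length(greedy), tour_length(best))
```

With just four poses there are 24 orderings; with 200 poses the count exceeds 10^374, so heuristics and parallel simulated search replace enumeration.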

The solution: robot simulation.

Using Omniverse, Techman built a digital twin of the inspection robot — as well as the product to be inspected — in Isaac Sim.

Programming the robot in simulation reduced time spent on the task by over 70%, compared to programming manually on the real robot. Using an accurate 3D model of the product, the application can be developed in the digital twin even before the real product is manufactured, saving valuable time on the production line.

Then, with powerful optimization tools in Isaac Sim, Techman explored a massive number of program options in parallel on NVIDIA GPUs.

The end result was an efficient solution that reduced the cycle time of each inspection by 20%, according to Scott Huang.

Every second saved in inspection time will drop down to the bottom line of Techman’s manufacturing customers.

Gathering and labeling real-world images of defects is costly and time consuming, so Techman turned to synthetic data to improve the quality of inspections. It used the Omniverse Replicator framework to quickly generate high-quality synthetic datasets.

These perfectly labeled images are used to train the AI models in the cloud and dramatically enhance their performance.
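The labels are "perfect" because, when the scene is generated rather than photographed, the ground truth is known by construction. The sketch below is a conceptual stand-in in plain Python, not the Omniverse Replicator API; the parameter names and defect types are invented for illustration.

```python
import random

# Conceptual sketch of domain randomization for defect-inspection training
# data -- NOT the Omniverse Replicator API. Replicator renders images; here
# each "sample" is just a dict of randomized scene parameters plus a
# ground-truth label that comes free with generation.
DEFECT_TYPES = ["scratch", "solder_bridge", "missing_component", None]

def generate_sample(rng):
    defect = rng.choice(DEFECT_TYPES)
    return {
        # Randomized scene parameters (lighting, camera pose, texture)
        "light_intensity": rng.uniform(0.2, 1.0),
        "camera_angle_deg": rng.uniform(-15, 15),
        "board_texture_seed": rng.randrange(10_000),
        # The label is exact because we placed the defect ourselves
        "label": defect if defect else "no_defect",
        "bbox": [rng.uniform(0, 1) for _ in range(4)] if defect else None,
    }

def generate_dataset(n, seed=0):
    rng = random.Random(seed)
    return [generate_sample(rng) for _ in range(n)]

dataset = generate_dataset(1000)
labeled = sum(1 for s in dataset if s["label"] != "no_defect")
print(f"{len(dataset)} samples, {labeled} with defects")
```

Seeding the generator makes every dataset reproducible, and rare defect classes can be oversampled at will, which is hard to do with real-world image collection.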

And dozens of AI models can run at the edge — efficiently and with low latency thanks to NVIDIA technology — when inspecting particularly complicated products, some of which require more than 40 models to scrutinize their different aspects.

Learn more about how Isaac Sim on Omniverse, Metropolis and AI are streamlining the optical inspection process across products and industries by joining NVIDIA at COMPUTEX, where the Techman cobots will be on display.

Electronics Giants Tap Into Industrial Automation With NVIDIA Metropolis for Factories

The $46 trillion global electronics manufacturing industry spans more than 10 million factories worldwide, where much is at stake in producing defect-free products. To drive product excellence, leading electronics manufacturers are adopting NVIDIA Metropolis for Factories.

More than 50 manufacturing giants and industrial automation providers — including Foxconn Industrial Internet, Pegatron, Quanta, Siemens and Wistron — are implementing Metropolis for Factories, NVIDIA founder and CEO Jensen Huang announced during his keynote address at the COMPUTEX technology conference in Taipei.

NVIDIA Metropolis for Factories is a collection of factory automation workflows that enables industrial technology companies and manufacturers to develop, deploy and manage customized quality-control systems that offer a competitive advantage.

Manufacturers globally spend more than $6 trillion a year in pursuit of quality control, and they apply defect detection on nearly every product line. But manual inspections can’t keep up with the demands.

Many manufacturers have automated optical inspection (AOI) systems that can help, but these often have high false-detection rates, requiring labor-intensive and costly secondary manual inspections in an already challenging labor market and reducing their value.

NVIDIA Metropolis for Factories now offers a state-of-the-art AI platform and workflows for the development of incredibly accurate inspection applications such as AOI.

Pegatron Drives AOI With Metropolis for Factories 

Leading manufacturer Pegatron, based in Taipei’s Beitou district, is using NVIDIA Metropolis for Factories on its production lines.

Pegatron manufactures everything from motherboards to smartphones, laptops and game consoles. With a dozen manufacturing facilities handling more than 300 products and more than 5,000 parts per day, Pegatron has a lot of quality control to manage across its product portfolio. Further, frequent product updates require ongoing revisions to its AOI systems.

Pegatron is using the entire Metropolis for Factories workflow to support its printed circuit board (PCB) factories with simulation, robotics and automated production inspection. Metropolis for Factories enables the electronics manufacturing giant to quickly update its defect detection models and achieve 99.8% accuracy on its AOI systems, starting with small datasets.


Pegatron uses NVIDIA Isaac Sim, a robotic simulator, to program robotic arms in simulation and to model the performance of its fleets of mobile robots.

Tapping into NVIDIA Omniverse Replicator, Pegatron generates synthetic data to simulate defects, helping build massive training datasets with domain randomization and other techniques.

In Metropolis, NVIDIA TAO Toolkit allows Pegatron to access pretrained models and transfer learning to build its highly accurate defect detection models from its enhanced datasets.

The NVIDIA DeepStream software development kit can be used to develop optimized intelligent video applications that handle multiple video, image and audio streams. Using DeepStream, Pegatron was able to achieve a 10x improvement in throughput.

Moreover, Omniverse enables Pegatron to run digital twins of its inspection equipment, so it can simulate future inspection processes, promising increased efficiencies to its production workflow.

It’s also used by Quanta subsidiary Techman Robot, which taps Isaac Sim to optimize the inspection of robots by robots on its manufacturing line.

Metropolis for Factories is helping manufacturers like Pegatron to increase production line throughput, reduce costs and improve production quality.

Growing Partner Ecosystem Supports Metropolis 

Metropolis for Factories can be deployed from the enterprise industrial edge to the cloud, and a large and growing ecosystem of partners is helping bring it to market.

A host of specialists are joining forces on this effort, including sensor makers, application partners, inspection equipment makers and integration partners.

Basler, a leading maker of imaging sensors and systems, has partnered with NVIDIA to help developers build AI-enabled inspection systems faster through tighter integration with the NVIDIA DeepStream SDK.

Quantiphi, a Metropolis partner, is working with one of the world’s largest beverage producers to automate inspections of fully packed pallets with GPU-powered vision AI.

Overview and Advantech — both NVIDIA Metropolis partners — are collaborating to build a real-time AI-based inspection system to support industrial inspection, product counting and assembly verification.

Metropolis partners Siemens and Data Monsters are working together to build industrial inspection systems, bringing together Omniverse Replicator synthetic data generation, NVIDIA TAO training, DeepStream runtime and Siemens’ NVIDIA Jetson-powered industrial personal computers.

Learn more about NVIDIA Metropolis for Factories.

NVIDIA Brings New Generative AI Capabilities, Groundbreaking Performance to 100 Million Windows RTX PCs and Workstations

Generative AI is rapidly ushering in a new era of computing for productivity, content creation, gaming and more. Generative AI models and applications — like NVIDIA NeMo and DLSS 3 Frame Generation, Meta LLaMa, ChatGPT, Adobe Firefly and Stable Diffusion — use neural networks to identify patterns and structures within existing data to generate new and original content.

When optimized for GeForce RTX and NVIDIA RTX GPUs, which offer up to 1,400 Tensor TFLOPS for AI inferencing, generative AI models can run up to 5x faster than on competing devices. This is thanks to Tensor Cores — dedicated hardware in RTX GPUs built to accelerate AI calculations — and regular software improvements. Enhancements introduced last week at the Microsoft Build conference doubled performance for generative AI models, such as Stable Diffusion, that take advantage of new DirectML optimizations.

As more AI inferencing happens on local devices, PCs will need powerful yet efficient hardware to support these complex tasks. To meet this need, RTX GPUs will add Max-Q low-power inferencing for AI workloads. The GPU will operate at a fraction of the power for lighter inferencing tasks, while scaling up to unmatched levels of performance for heavy generative AI workloads.

To create new AI applications, developers can now access a complete RTX-accelerated AI development stack running on Windows 11, making it easier to develop, train and deploy advanced AI models. This starts with development and fine-tuning of models with optimized deep learning frameworks available via Windows Subsystem for Linux.

Developers can then move seamlessly to the cloud to train on the same NVIDIA AI stack, which is available from every major cloud service provider. Next, developers can optimize the trained models for fast inferencing with tools like the new Microsoft Olive. And finally, they can deploy their AI-enabled applications and features to an install base of over 100 million RTX PCs and workstations that have been optimized for AI.

“AI will be the single largest driver of innovation for Windows customers in the coming years,” said Pavan Davuluri, corporate vice president of Windows silicon and system integration at Microsoft. “By working in concert with NVIDIA on hardware and software optimizations, we’re equipping developers with a transformative, high-performance, easy-to-deploy experience.”

To date, over 400 RTX AI-accelerated apps and games have been released, with more on the way.

During his keynote address kicking off COMPUTEX 2023, NVIDIA founder and CEO Jensen Huang introduced a new generative AI to support game development, NVIDIA Avatar Cloud Engine (ACE) for Games.

This custom AI model foundry service transforms games by bringing intelligence to non-playable characters through AI-powered natural language interactions. Developers of middleware, tools and games can use ACE for Games to build and deploy customized speech, conversation and animation AI models in their software and games.

Generative AI on RTX, Anywhere

From servers to the cloud to devices, generative AI running on RTX GPUs is everywhere. NVIDIA’s accelerated AI computing is a low-latency, full-stack endeavor. We’ve been optimizing every part of our hardware and software architecture for many years for AI, including fourth-generation Tensor Cores — dedicated AI hardware on RTX GPUs.

Regular driver optimizations ensure peak performance. The most recent NVIDIA driver, combined with Olive-optimized models and updates to DirectML, delivers significant speedups for developers on Windows 11. For example, Stable Diffusion performance is improved by 2x compared to the previous inference times for developers taking advantage of DirectML optimized paths.

And with the latest generation of RTX laptops and mobile workstations built on the NVIDIA Ada Lovelace architecture, users can take generative AI anywhere. Our next-gen mobile platform brings new levels of performance and portability — in form factors as small as 14 inches and as lightweight as about three pounds. Makers like Dell, HP, Lenovo and ASUS are pushing the generative AI era forward, backed by RTX GPUs and Tensor Cores.

“As AI continues to get deployed across industries at an expected annual growth rate of over 37% now through 2030, businesses and consumers will increasingly need the right technology to develop and implement AI, including generative AI. Lenovo is uniquely positioned to empower generative AI spanning from devices to servers to the cloud, having developed products and solutions for AI workloads for years. Our NVIDIA RTX GPU-powered PCs, such as select Lenovo ThinkPad, ThinkStation, ThinkBook, Yoga, Legion and LOQ devices, are enabling the transformative wave of generative AI for better everyday user experiences in saving time, creating content, getting work done, gaming and more.” — Daryl Cromer, vice president and chief technology officer of PCs and Smart Devices at Lenovo

“Generative AI is transformative and a catalyst for future innovation across industries. Together, HP and NVIDIA equip developers with incredible performance, mobility and the reliability needed to run accelerated AI models today, while powering a new era of generative AI.” —  Jim Nottingham, senior vice president and general manager of Z by HP

“Our recent work with NVIDIA on Project Helix centers on making it easier for enterprises to build and deploy trustworthy generative AI on premises. Another step in this historic moment is bringing generative AI to PCs. Think of app developers looking to perfect neural network algorithms while keeping training data and IP under local control. This is what our powerful and scalable Precision workstations with NVIDIA RTX GPUs are designed to do. And as the global leader in workstations, Dell is uniquely positioned to help users securely accelerate AI applications from the edge to the datacenter.” — Ed Ward, president of the client product group at Dell Technologies

“The generative AI era is upon us, requiring immense processing and fully optimized hardware and software. With the NVIDIA AI platform, including NVIDIA Omniverse, which is now preinstalled on many of our products, we are excited to see the AI revolution continue to take shape on ASUS and ROG laptops.” — Galip Fu, director of global consumer marketing at ASUS

Soon, laptops and mobile workstations with RTX GPUs will get the best of both worlds. AI inference-only workloads will be optimized for Tensor Core performance while keeping power consumption of the GPU as low as possible, extending battery life and maintaining a cool, quiet system. The GPU can then dynamically scale up for maximum AI performance when the workload demands it.

Developers can also learn how to optimize their applications end-to-end to take full advantage of GPU-acceleration via the NVIDIA AI for accelerating applications developer site.

NVIDIA CEO Tells NTU Grads to Run, Not Walk — But Be Prepared to Stumble

“You are running for food, or you are running from becoming food. And often times, you can’t tell which. Either way, run.”

NVIDIA founder and CEO Jensen Huang today urged graduates of National Taiwan University to run hard to seize the unprecedented opportunities that AI will present, but embrace the inevitable failures along the way.

Whatever you pursue, he told the 10,000 graduates of the island’s premier university, do it with passion and conviction — and stay humble enough to learn the hard lessons that await.

“Whatever it is, run after it like we did. Run. Don’t walk,” Huang said, having swapped his signature black leather jacket for a black graduation robe, with the school’s plum-blossom emblem highlighting a royal blue, white and aqua collar.

“Remember, either you are running for food; or you are running from becoming food. And often times, you can’t tell which. Either way, run.”

Huang, who moved from Taiwan when he was young, recognized his parents in the audience, and shared three stories of initial failures and retreat. He called them instrumental in helping forge NVIDIA’s character during its three-decade journey from a three-person gaming-graphics startup to a global AI leader worth nearly a trillion dollars.

“I was … successful — until I started NVIDIA,” he said. “At NVIDIA, I experienced failures — great big ones. All humiliating and embarrassing. Many nearly doomed us.”

The first involved a key early contract the company won to help Sega build a gaming console. Rapid changes in the industry forced NVIDIA to give up the contract in a near-death brush with bankruptcy, which Sega’s leadership helped avert.

“Confronting our mistake and, with humility, asking for help saved NVIDIA,” he said.

The second was the decision in 2007 to put CUDA into all the company’s GPUs, enabling them to crunch data in addition to handling 3D graphics. It was an expensive, long-term investment that drew much criticism and didn’t pay off for years, until the chips started being used for machine learning.

“Our market cap hovered just above a billion dollars,” he recalled. “We suffered many years of poor performance. Our shareholders were skeptical of CUDA and preferred we improve profitability.”

The third was the decision in 2010 to charge into the promising mobile-phone market as graphics-rich capabilities were coming into reach. The market quickly commoditized, though, and NVIDIA retreated just as quickly, taking initial heat but opening the door to investing in promising new markets — robotics and self-driving cars.

“Our strategic retreat paid off,” he said. “By leaving the phone market, we opened our minds to invent a new one.”

Huang told grads of the parallels in boundless promise between the world he entered upon graduating four decades ago, on the cusp of the PC revolution, and the brave new age of AI they are entering today.

“For your journey, take along some of my learnings,” he said. Admit mistakes and ask for help; endure pain and suffering to realize your dreams; and make sacrifices to dedicate yourself to a life of purpose.

Cool It: Team Tackles the Thermal Challenge Data Centers Face

Two years after he spoke at a conference detailing his ambitious vision for cooling tomorrow’s data centers, Ali Heydari and his team won a $5 million grant to go build it.

It was the largest of 15 awards in May from the U.S. Department of Energy. The DoE program, called COOLERCHIPS, received more than 100 applications from a who’s who list of computer architects and researchers.

“This is another example of how we’re rearchitecting the data center,” said Ali Heydari, a distinguished engineer at NVIDIA who leads the project and helped deploy more than a million servers in previous roles at Baidu, Twitter and Facebook.

“We celebrated on Slack because the team is all over the U.S.,” said Jeremy Rodriguez, who once built hyperscale liquid-cooling systems and now manages NVIDIA’s data center engineering team.

A Historic Shift

The project is ambitious and comes at a critical moment in the history of computing.

Processors are expected to generate up to an order of magnitude more heat as Moore’s law hits the limits of physics, but the demands on data centers continue to soar.

Soon, today’s air-cooled systems won’t be able to keep up. And current liquid-cooling techniques won’t be able to handle the more than 40 watts per square centimeter that researchers expect future data center silicon will need to dissipate.

So, Heydari’s group defined an advanced liquid-cooling system.

Their approach promises to cool a data center packed into a mobile container, even when it’s placed in an environment up to 40 degrees Celsius and is drawing 200 kW — 25x the power of today’s server racks.

It will cost at least 5% less and run 20% more efficiently than today’s air-cooled approaches. It’s much quieter and has a smaller carbon footprint, too.

“That’s a great achievement for our engineers who are very smart folks,” he said, noting part of their mission is to make people aware of the changes ahead.

A Radical Proposal

The team’s solution combines two technologies never before deployed in tandem.

First, chips will be cooled with cold plates whose coolant evaporates like sweat on the foreheads of hard-working processors, then cools to condense and re-form as liquid. Second, entire servers, with their lower power components, will be encased in hermetically sealed containers and immersed in coolant.

Novel solution: Servers will be bathed in coolants as part of the project.

They will use a liquid common in refrigerators and car air conditioners, but not yet used in data centers.

Three Giant Steps

The three-year project sets annual milestones — component tests next year, a partial rack test a year later, and a full system tested and delivered at the end.

Icing the cake, the team will create a full digital twin of the system using NVIDIA Omniverse, an open development platform for building and operating metaverse applications.

The NVIDIA team consists of about a dozen thermal, power, mechanical and systems engineers, some dedicated to creating the digital twin. They have help from seven partners:

  • Binghamton and Villanova universities in analysis, testing and simulation
  • BOYD Corp. for the cold plates
  • Durbin Group for the pumping system
  • Honeywell to help select the refrigerant
  • Sandia National Laboratories in reliability assessment, and
  • Vertiv Corp. in heat rejection

“We’re extending relationships we’ve built for years, and each group brings an array of engineers,” said Heydari.

Of course, it’s hard work, too.

For instance, Mohammed Tradat, a former Binghamton researcher who now heads an NVIDIA data center mechanical engineering group, “had a sleepless night working on the grant application, but it’s a labor of love for all of us,” he said.

Heydari said he never imagined the team would be bringing its ideas to life when he delivered a talk on them in late 2021.

“No other company would allow us to build an organization that could do this kind of work — we’re making history and that’s amazing,” said Rodriguez.

See how digital twins, built in Omniverse, help optimize the design of a data center in the video below.

Picture at top: Gathered recently at NVIDIA headquarters are (from left) Scott Wallace (NVIDIA), Greg Strover (Vertiv), Vivien Lecoustre (DoE), Vladimir Troy (NVIDIA), Peter Debock (COOLERCHIPS program director), Rakesh Radhakrishnan (DoE), Joseph Marsala (Durbin Group), Nigel Gore (Vertiv), and Jeremy Rodriguez, Bahareh Eslami, Manthos Economou, Harold Miyamura and Ali Heydari (all of NVIDIA).

Butterfly Effects: Digital Artist Uses AI to Engage Exhibit Goers

For about six years, AI has been an integral part of the artwork of Dominic Harris, a London-based digital artist who’s about to launch his biggest exhibition to date.

“I use it for things like giving butterflies a natural sense of movement,” said Harris, whose typical canvas is an interactive computer display.

Using a rack of NVIDIA’s latest GPUs in his studio, Harris works with his team of more than 20 designers, developers and other specialists to create artworks like Unseen. It renders a real-time collage of 13,000 butterflies — some fanciful, each unique, but none real. Exhibit-goers can make them flutter or change color with a gesture.

The Unseen exhibit includes a library of 13,000 digital butterflies.

The work attracted experts from natural history museums worldwide. Many were fascinated by the way it helps people appreciate the beauty and fragility of nature by inviting them to interact with creatures not yet discovered or yet to be born.

“AI is a tool in my palette that supports the ways I try to create a poignant human connection,” he said.

An Artist’s View of AI

Harris welcomes the public fascination with generative AI that sprang up in the past year, though it took him by surprise.

“It’s funny that AI in art has become such a huge topic because, even a year ago, if I told someone there’s AI in my art, they would’ve had a blank face,” he said.

Looking forward, AI will assist, not replace, creative people, Harris said.

“With each performance increase from NVIDIA’s products, I’m able to augment what I can express in a way that lets me create increasingly incredible original artworks,” he said.

A Living Stock Exchange

Combining touchscreens, cameras and other sensors, he aims to create connections between his artworks and the people who view and interact with them.

For instance, Limitless is an eight-foot interactive tower made up of gold blocks animated by a live data feed from the London Stock Exchange. Each block represents a company, shining or tarnished by its current rising or falling valuation. Touching a tile reveals the face of the company’s CEO, a reminder that human beings drive the economy.

Harris with “Limitless,” a living artwork animated in part with financial market data.
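As a rough illustration of that data-to-visual mapping (hypothetical values and function names, not the artwork's actual feed or rendering code), a tile's shade can be a clamped, normalized function of the day's percentage price change:

```python
# Illustrative sketch of the mapping described above -- hypothetical,
# not Dominic Harris's actual code.
def tile_shade(pct_change, max_abs=5.0):
    """Map a daily % price change to a shade from 0.0 (fully tarnished)
    to 1.0 (brightly shining), clamped at +/- max_abs percent."""
    clamped = max(-max_abs, min(max_abs, pct_change))
    return (clamped + max_abs) / (2 * max_abs)

# Made-up ticker symbols and daily moves
quotes = {"ACME": +2.5, "GLOBEX": -5.8, "INITECH": 0.0}
shades = {sym: round(tile_shade(chg), 3) for sym, chg in quotes.items()}
print(shades)  # flat price maps to a mid shade; losses darken, gains brighten
```

Clamping keeps one extreme mover from washing out the rest of the tower, so the piece stays legible across calm and volatile trading days.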

It’s one work in Feeding Consciousness, Harris’ largest exhibition to date, opening Thursday, May 25, at London’s Halcyon Gallery.

Booting Up Invitations

“Before the show even opened, it got extended,” he said, showing invitations that went out on small tablets loaded with video previews.

The NVIDIA Jetson platform for edge AI and robotics “features prominently in the event and has become a bit of a workhorse for me in many of my artworks,” he said.

An immersive space in the “Feeding Consciousness” exhibit relies on NVIDIA’s state-of-the-art graphics.

Three years in the making, the new exhibit includes one work that uses 180 displays. It also sports an immersive space created with eight cameras, four laser rangefinders and four 4K video projectors.

“I like building unique canvases to tell stories,” he said.

Harris puts the viewer in control of Antarctic landscapes in “Endurance.”

For example, Endurance depicts polar scenes Sir Ernest Shackleton’s expedition trekked through when their ship got trapped in the ice pack off Antarctica in 1915. All 28 men survived, and the sunken ship was discovered last year while Harris was working on his piece.

Harris encounters a baby polar bear from an artwork.

“I was inspired by men who must have felt minuscule before the forces of nature, and the role reversal, 110 years later, now that we know how fragile these environments really are,” he said.

Writing Software at Six

Harris started coding at age six. When his final project in architecture school — an immersive installation with virtual sound — won awards at University College London, it set the stage for his career as a digital artist.

Along the way, “NVIDIA was a name I grew up with, and graphics cards became a part of my palette that I’ve come to lean on more and more — I use a phenomenal amount of processing power rendering some of my works,” he said.

For example, next month he’ll install Every Wing Has a Silver Lining, a 16-meter-long work that displays 30,000 x 2,000 pixels, created in part with GeForce RTX 4090 GPUs.

“We use the highest-end hardware to achieve an unbelievable level of detail,” he said.

He shares his passion in school programs, giving children a template they can use to draw butterflies, which he later brings to life on a website.

“It’s a way to get them to see and embrace art in the technology they’re growing up with,” he said, comparing it to NVIDIA Canvas, a digital drawing tool his six- and 12-year-old daughters love to use.

The Feeding Consciousness exhibition, previewed in the video below, runs from May 25 to August 13 at London’s Halcyon Gallery.


Three More Xbox PC Games Hit GeForce NOW

Keep the NVIDIA and Microsoft party going this GFN Thursday with Grounded, Deathloop and Pentiment, all available for GeForce NOW members to stream this week.

These three Xbox Game Studios titles are part of the dozen additions to the GeForce NOW library.

Triple Threat

NVIDIA and Microsoft’s partnership continues to flourish with this week’s game additions.

What is this, a game for ants?!

Who shrunk the kids? Grounded from Obsidian Entertainment is an exhilarating, cooperative survival-adventure. The world of Grounded is a vast, beautiful and dangerous place — especially when you’ve been shrunken to the size of an ant. Explore, build and thrive together alongside the hordes of giant insects, fighting to survive the perils of a vast and treacherous backyard.

Unravel a web of deceit.

Also from Obsidian is historical narrative-focused Pentiment, the critically acclaimed role-playing game featured on multiple Game of the Year lists in 2022. Step into a living illustrated world inspired by illuminated manuscripts — when Europe is at a crossroads of great religious and political change. Walk in the footsteps of Andreas Maler, a master artist amidst murders, scandals and intrigue in the Bavarian Alps. Impact a changing world and see the consequences of your decisions in this narrative adventure.

If at first you don’t succeed, die, die and die again.

DEATHLOOP is a next-gen first-person shooter from Arkane Lyon, the award-winning studio behind the Dishonored franchise. In DEATHLOOP, two rival assassins are trapped in a time loop on the island of Blackreef, doomed to repeat the same day for eternity. The only chance for escape is to end the cycle by assassinating eight key targets before the day resets. Learn from each cycle, try new approaches and break the loop. The game also includes support for RTX ray tracing for Ultimate and Priority members.

These three Xbox titles join Gears 5 as supported games on GeForce NOW. Members can stream them, along with more than 1,600 other titles, in the GeForce NOW library.

Priority members can play at up to 1080p 60 frames per second and skip the waiting lines, and Ultimate members can play at up to 4K 120 fps on PC and Mac.

Play across nearly any device — including Chromebooks, mobile devices, SHIELD TVs and supported smart TVs. Learn more about support for Xbox PC games on GeForce NOW.

More Adventures

Lord of the Rings Gollum on GeForce NOW
Start your Middle-earth journey in the cloud.

Middle-earth calls, as The Lord of the Rings: Gollum comes to GeForce NOW. Embark on a captivating interactive experience in this action-adventure game that unfolds parallel to the events of The Fellowship of the Ring. Assume the role of the enigmatic Gollum on a treacherous journey, discovering how he outsmarted the most formidable characters in Middle-earth. Priority and Ultimate members can experience the epic story with support for RTX ray tracing and DLSS technology.

In addition, members can look for the following:

  • Blooming Business: Casino (New release on Steam, May 23)
  • Plane of Lana (New release on Steam, May 23)
  • Warhammer 40,000: Boltgun (New release on Steam, May 23)
  • Above Snakes (New release on Steam, May 25)
  • Railway Empire 2 (New release on Steam, May 25)
  • The Lord of the Rings: Gollum (New release on Steam, May 25)
  • Deathloop (Steam)
  • Grounded (Steam)
  • Lawn Mowing Simulator (Steam)
  • Pentiment (Steam)
  • The Ascent (Steam)
  • Patch Quest (Steam)

Warhammer Skulls Festival on GeForce NOW

The Warhammer Skulls Festival is live today. Check it out for information about upcoming games in the Warhammer franchise, plus discounts on Warhammer titles on Steam and Epic Games Store. Stay up to date on these and other discounts through the GeForce NOW app.

Finally, we’ve got a question for you this week. Let us know what mischief you’d be up to on Twitter or in the comments below.


Livestreaming Bliss: Wander Warwick’s World This Week ‘In the NVIDIA Studio’

Editor’s note: This post is part of our weekly In the NVIDIA Studio series, which celebrates featured artists, offers creative tips and tricks, and demonstrates how NVIDIA Studio technology improves creative workflows. We’re also deep diving on new GeForce RTX 40 Series GPU features, technologies and resources, and how they dramatically accelerate content creation.

The GeForce RTX 4060 Ti 8GB GPU — part of the GeForce RTX 4060 family announced last week — is now available, starting at $399, from top add-in card providers including ASUS, Colorful, Galax, GIGABYTE, INNO3D, MSI, Palit, PNY and ZOTAC, as well as from system integrators and builders worldwide.

GeForce RTX 4060 Ti 8GB is available now from a range of providers.

GeForce RTX 40 Series GPUs come backed by NVIDIA Studio technologies, including hardware acceleration for 3D, video and AI workflows; optimizations for RTX hardware in over 110 of the most popular creative apps; and exclusive Studio apps like Omniverse, Broadcast and Canvas.

Plus, enhancements for NVIDIA Studio-powered creator apps keep coming in. MAGIX VEGAS Pro software for video editing is receiving a major AI overhaul that will boost performance for all GeForce RTX users.

And prepare to be inspired by U.K.-based livestreamer Warwick, equal parts insightful and inspirational, as they share their AI-based workflow powered by a GeForce RTX GPU and the NVIDIA Broadcast app, this week In the NVIDIA Studio.

At the Microsoft Build conference today, NVIDIA unveiled new tools for developers that will make it easier and faster to train and deploy advanced AI on Windows 11 PCs with RTX GPUs.

In addition, the Studio team wants to see how creators #SetTheScene, whether for an uncharted virtual world or a small interior diorama of a room.

Enter the #SetTheScene Studio community challenge. Post original environment art on Facebook, Twitter or Instagram, and use the hashtag #SetTheScene for a chance to be featured on the @NVIDIAStudio or @NVIDIAOmniverse social channels.

VEGAS Pro Gets an AI Assist Powered by RTX

NVIDIA Studio collaborated with MAGIX VEGAS Pro to accelerate AI model performance on Windows PCs with extraordinary results.

VEGAS Pro 20 update 3, released this month, increases the speed of AI effects — such as style transfer, AI upscaling and colorization — with NVIDIA RTX GPUs.

Shorter times are better. Tested on GeForce RTX 4090 GPU, Intel Core i9-12900K with UHD 770.

Style transfer, for example, uses AI to instantly apply the styles of famous artists such as Picasso or van Gogh to a piece, with a staggering 219% performance increase over the previous version.

Warwick’s World

As this week’s featured In the NVIDIA Studio artist would say, “Welcome to the channnnnnnel!” Warwick is a U.K.-based content streamer who enjoys coffee, Daft Punk, tabletop role-playing games and cats. Alongside their immense talent and wildly entertaining persona lies an extraordinary superpower: empathy.

 

Warwick, like the rest of the world, had to find new ways to connect with people during the pandemic. They decided to pursue streaming as a way to build a community. Their vision was to create a channel that provides laughter and joy, escapism during stressful times and a safe haven for love and expression.

“It’s okay not to be okay,” stressed Warwick. “I’ve lived a lot of my life being told I couldn’t feel a certain way, show emotion or let things get me down. I was told that those were weaknesses that I needed to fight, when in reality they’re our truest strengths: being true to ourselves, feeling and being honest with our emotions.”

Warwick finds inspiration in making a positive contribution to other people’s lives. The thousands of subs speak for themselves.

 

But there are always ways to improve the quality of streams — plus, working and streaming full time can be challenging, as “it can be tough to get all your ideas to completion,” Warwick said.

For maximum efficiency, Warwick deploys their GeForce RTX 3080 GPU, taking advantage of the seventh-generation NVIDIA encoder (NVENC) to handle video encoding independently, freeing up the rest of the GPU to focus on gameplay and other tasks while livestreaming.

“NVIDIA is highly regarded in content-creation circles. Using OBS, Adobe Photoshop and Premiere Pro is made better by GeForce GPUs!” — Warwick

“I honestly can’t get enough of it!” said the streamer. “Being able to stream with OBS Studio software using NVENC lets me play the games I want at the quality I want, with other programs running to offer quality content to my community.”

Warwick has also experimented with the NVIDIA Broadcast app, which magically transforms dorms, home offices and more into home studios. They said the Eye Contact effect had “near-magical” results.

“Whenever I need to do ad reads, I find it incredible how well Eye Contact works, considering it’s in beta!” said Warwick. “I love the other Broadcast features that are offered for content creators and beyond.”

Warwick will be a panelist at an event hosted by Top Tier Queer (TTQ), an initiative that celebrates queer advocates in the creator space.

Sponsored by NVIDIA Studio and organized by In the NVIDIA Studio artist WATCHOLLIE, the TTQ event in June will serve as an avenue for queer visibility and advocacy, as well as an opportunity to award one participant with prizes, including a GeForce RTX 3090 GPU, to help amplify their voice even further. Apply for the TTQ initiative now.

Streaming is deeply personal for Warwick. “In my streams and everything I create, I aim to inspire others to know their feelings are valid,” they said. “And because of that, I feel the community that I have really appreciates me and the space that I give them.”

Livestreamer Warwick.

Subscribe to Warwick’s Twitch channel for more content.

Follow NVIDIA Studio on Instagram, Twitter and Facebook. Access tutorials on the Studio YouTube channel and get updates directly in your inbox by subscribing to the Studio newsletter.


NVIDIA and Microsoft Drive Innovation for Windows PCs in New Era of Generative AI

Generative AI — in the form of large language model (LLM) applications like ChatGPT, image generators such as Stable Diffusion and Adobe Firefly, and game rendering techniques like NVIDIA DLSS 3 Frame Generation — is rapidly ushering in a new era of computing for productivity, content creation, gaming and more.

At the Microsoft Build developer conference, NVIDIA and Microsoft today showcased a suite of advancements in Windows 11 PCs and workstations with NVIDIA RTX GPUs to meet the demands of generative AI.

More than 400 Windows apps and games already employ AI technology, accelerated by dedicated processors on RTX GPUs called Tensor Cores. Today’s announcements, which include tools to develop AI on Windows PCs, frameworks to optimize and deploy AI, and driver performance and efficiency improvements, will empower developers to build the next generation of Windows apps with generative AI at their core.

“AI will be the single largest driver of innovation for Windows customers in the coming years,” said Pavan Davuluri, corporate vice president of Windows silicon and system integration at Microsoft. “By working in concert with NVIDIA on hardware and software optimizations, we’re equipping developers with a transformative, high-performance, easy-to-deploy experience.”

Develop Models With Windows Subsystem for Linux

AI development has traditionally taken place on Linux, requiring developers to either dual-boot their systems or use multiple PCs to work in their AI development OS while still accessing the breadth and depth of the Windows ecosystem.

Over the past few years, Microsoft has been building a powerful capability to run Linux directly within the Windows OS, called Windows Subsystem for Linux (WSL). NVIDIA has been working closely with Microsoft to deliver GPU acceleration and support for the entire NVIDIA AI software stack inside WSL. Now developers can use Windows PC for all their local AI development needs with support for GPU-accelerated deep learning frameworks on WSL.

With NVIDIA RTX GPUs delivering up to 48GB of RAM in desktop workstations, developers can now work with models on Windows that were previously only available on servers. The large memory also improves the performance and quality for local fine-tuning of AI models, enabling designers to customize them to their own style or content. And because the same NVIDIA AI software stack runs on NVIDIA data center GPUs, it’s easy for developers to push their models to Microsoft Azure Cloud for large training runs.

Rapidly Optimize and Deploy Models

With trained models in hand, developers need to optimize and deploy AI for target devices.

Microsoft released the Microsoft Olive toolchain for optimization and conversion of PyTorch models to ONNX, enabling developers to automatically tap into GPU hardware acceleration such as RTX Tensor Cores. Developers can optimize models via Olive and ONNX, and deploy Tensor Core-accelerated models to PC or cloud. Microsoft continues to invest in making PyTorch and related tools and frameworks work seamlessly with WSL to provide the best AI model development experience.

Improved AI Performance, Power Efficiency

Once deployed, generative AI models demand incredible inference performance. RTX Tensor Cores deliver up to 1,400 Tensor TFLOPS for AI inferencing. Over the last year, NVIDIA has worked to improve DirectML performance to take full advantage of RTX hardware.

On May 24, we’ll release our latest optimizations in Release 532.03 drivers, which combine with Olive-optimized models to deliver big boosts in AI performance. Using an Olive-optimized version of the Stable Diffusion text-to-image generator with the popular Automatic1111 distribution, performance more than doubles with the new driver.

Stable Diffusion performance tested on GeForce RTX 4090 using Automatic1111 and Text-to-Image function.

With AI coming to nearly every Windows application, efficiently delivering inference performance is critical — especially for laptops. Coming soon, NVIDIA will introduce new Max-Q low-power inferencing for AI-only workloads on RTX GPUs. It optimizes Tensor Core performance while keeping power consumption of the GPU as low as possible, extending battery life and maintaining a cool, quiet system. The GPU can then dynamically scale up for maximum AI performance when the workload demands it.

Join the PC AI Revolution Now

Top software developers — like Adobe, DxO, ON1 and Topaz — have already incorporated NVIDIA AI technology, with more than 400 Windows applications and games optimized for RTX Tensor Cores.

“AI, machine learning and deep learning power all Adobe applications and drive the future of creativity. Working with NVIDIA we continuously optimize AI model performance to deliver the best possible experience for our Windows users on RTX GPUs.” — Ely Greenfield, CTO of digital media at Adobe

“NVIDIA is helping to optimize our WinML model performance on RTX GPUs, which is accelerating the AI in DxO DeepPRIME, as well as providing better denoising and demosaicing, faster.” — Renaud Capolunghi, senior vice president of engineering at DxO

“Working with NVIDIA and Microsoft to accelerate our AI models running in Windows on RTX GPUs is providing a huge benefit to our audience. We’re already seeing 1.5x performance gains in our suite of AI-powered photography editing software.” — Dan Harlacher, vice president of products at ON1

“Our extensive work with NVIDIA has led to improvements across our suite of photo- and video-editing applications. With RTX GPUs, AI performance has improved drastically, enhancing the experience for users on Windows PCs.” — Suraj Raghuraman, head of AI engine development at Topaz Labs

NVIDIA and Microsoft are making several resources available for developers to test drive top generative AI models on Windows PCs. An Olive-optimized version of the Dolly 2.0 large language model is available on Hugging Face. And a PC-optimized version of the NVIDIA NeMo large language model for conversational AI is coming soon to Hugging Face.

Developers can also learn how to optimize their applications end-to-end to take full advantage of GPU-acceleration via the NVIDIA AI for accelerating applications developer site.

The complementary technologies behind Microsoft’s Windows platform and NVIDIA’s dynamic AI hardware and software stack will help developers quickly and easily develop and deploy generative AI on Windows 11.

Microsoft Build runs through Thursday, May 25. Tune in to learn more about shaping the future of work with AI.


No Programmers? No Problem: READY Robotics Simplifies Robot Coding, Rollouts

Robotics hardware traditionally requires programmers to deploy it. READY Robotics wants to change that with its “no code” software, aimed at people working in manufacturing who don’t have programming skills.

The Columbus, Ohio, startup is a spinout of robotics research from Johns Hopkins University. Kel Guerin was a PhD candidate there leading this research when he partnered with Benjamin Gibbs, then at Johns Hopkins Technology Ventures, to secure funding and launch the company, now led by Gibbs as CEO.

“There was this a-ha moment where we figured out that we could take these types of visual languages that are very easy to understand and use them for robotics,” said Guerin, who’s now chief innovation officer at the startup.

READY’s “no code” ForgeOS operating system is designed to enable anyone to program any type of robot hardware or automation device. ForgeOS works seamlessly with plug-ins for most major robot hardware, and, like other operating systems such as Android, it can run third-party apps and plug-ins, providing a robust ecosystem of partners and developers working to make robots more capable, said Guerin.

Implementing apps in robotics allows new capabilities to be added to a robotic system in a few clicks, improving user experience and usability. Users can install their own apps, such as Task Canvas, which provides an intuitive building-block programming interface similar to Scratch, the simple block-based visual language for kids developed at the MIT Media Lab that influenced its design.

Task Canvas allows users to show the actions of the robot, as well as all the other devices in an automation cell (such as grippers, programmable logic controllers and machine tools), as blocks in a flow chart. The user can easily create powerful logic by tying these blocks together — without writing a single line of code. The interface offers nonprogrammers a more “drag-and-drop” experience for programming and deploying robots, whether working directly on the factory floor with real robots on a tablet device or with access to simulation from Isaac Sim, powered by NVIDIA Omniverse.
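Task Canvas itself is proprietary, but the underlying idea — tying named action blocks into a flow chart that a nonprogrammer can assemble — can be sketched in a few lines of Python. All names below are illustrative and not part of READY’s ForgeOS or Task Canvas API:

```python
from dataclasses import dataclass, field
from typing import Callable, List

# Hypothetical sketch of a block-based task chain, loosely modeled on
# the flow-chart-of-blocks idea; not READY's actual implementation.
@dataclass
class Block:
    name: str
    action: Callable[[dict], None]  # mutates a shared automation-cell state

@dataclass
class TaskChain:
    blocks: List[Block] = field(default_factory=list)

    def add(self, block: Block) -> "TaskChain":
        self.blocks.append(block)
        return self  # fluent chaining mirrors tying blocks together

    def run(self) -> dict:
        state: dict = {"log": []}
        for block in self.blocks:
            block.action(state)          # perform the block's action
            state["log"].append(block.name)  # record execution order
        return state

# "Program" a pick sequence without writing control-level code:
chain = (
    TaskChain()
    .add(Block("open_gripper", lambda s: s.update(gripper="open")))
    .add(Block("move_to_part", lambda s: s.update(position="part")))
    .add(Block("close_gripper", lambda s: s.update(gripper="closed")))
)
result = chain.run()
print(result["log"])  # the blocks, in the order they ran
```

A graphical front end would render each `Block` as a draggable tile and each `add` call as a connecting line — the logic the user builds is the same chain of actions.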

 

Robot System Design in Simulation for Real-World Deployments 

READY is making robotics system design easier for nonprogrammers, helping to validate robots and systems for accelerated deployments.

The company is developing Omniverse Extensions — Omniverse Kit applications based on Isaac Sim — and can deploy them on the cloud. It uses Omniverse Nucleus — the platform’s database and collaboration engine — in the cloud as well.

Isaac Sim is an application framework that enables simulation training for testing out robots in virtual manufacturing lines before deployment into the real world.

“Bigger companies are moving to a sim-first approach to automation because these systems cost a lot of money to install. They want to simulate them first to make sure it’s worth the investment,” said Guerin.

The startup charges per-seat software licensing for its platform and also offers support services to help develop and roll out systems.

It’s a huge opportunity: roughly 90 percent of the world’s factories haven’t yet embraced automation, making it a trillion-dollar market.

READY is a member of NVIDIA Inception, a free program that provides startups with technical training, go-to-market support and AI platform guidance.

From Industrial Automation Giants to Stanley Black & Decker

The startup operates in an ecosystem of world-leading industrial automation providers, and these global partners are actively developing integrations with platforms like NVIDIA Omniverse and are investing in READY, said Guerin.

“Right now we are starting to work with large enterprise customers who want to automate but they can’t find the expertise to do it,” he said.

Stanley Black & Decker, a global supplier of tools, is relying on READY to automate machines, including CNC lathes and mills.

Robotic automation had been hard to deploy in its factories until Stanley Black & Decker started using READY’s ForgeOS with its Station setup, which makes it possible to deploy robots in a day.

Creating Drag-and-Drop Robotic Systems in Simulation 

READY is putting simulation capabilities into the hands of nonprogrammers, who can learn its Task Canvas interface for drag-and-drop programming of industrial robots in about an hour, according to the company.

The company also runs READY Academy, which offers a catalog of free training for manufacturing professionals to learn the skills to design, deploy, manage and troubleshoot robotic automation systems.

“For potential customers interested in our technology, being able to try it out with a robot simulated in Omniverse before they get their hands on the real thing — that’s something we’re really excited about,” said Guerin.

Learn more about NVIDIA Isaac Sim, Jetson Orin and Omniverse Enterprise.

 
