Smart Devices, Smart Manufacturing: Pegatron Taps AI, Digital Twins

In the fast-paced field of making the world’s tech devices, Pegatron Corp. initially harnessed AI to gain an edge. Now, it’s on the cusp of creating digital twins to further streamline its operations.

Whether or not they’re familiar with the name, most people have probably used smartphones, tablets, Wi-Fi routers or other products that Taiwan-based Pegatron makes in nearly a dozen factories across seven countries. Last year, it made more than 10 million notebook computers.

Andrew Hsiao, associate vice president of Pegatron’s software R&D division, is leading the company’s move into machine learning and the 3D internet known as the metaverse.

Building an AI Platform

“We’ve been collecting factory data since 2012 to find patterns and insights that enhance operations,” said Hsiao, a veteran tech manager who’s been with the company for 14 years, since it spun out of ASUS, one of the world’s largest PC makers.

In 2016, Pegatron’s COO, Denese Yao, launched a task force to apply new technology to improve operations. Hsiao’s team of AI experts collaborated with factory workers to find use cases for AI. One of their first pilot projects used deep learning to detect anomalies in products as they came down the line.

The pilot got solid results using modified versions of neural network models like ResNet, so the team stepped on the gas.

Today, Pegatron uses Cambrian, an AI platform it built for automated inspection, deployed in most of its factories. It maintains hundreds of AI models, trained and running in production on NVIDIA GPUs.

Fewer Defects, More Consistency

The new platform catches up to 60% more defects with 30% fewer variations than human inspectors, and factory employees appreciate it.

“Manual inspection is a boring, repetitive job, so it’s not surprising employees don’t like it,” he said. “Now, we’re seeing employees motivated to learn about the new technology, so it’s empowering people to do more value-added work.”

The system may also improve throughput as factories adjust workflows on assembly and packing stations to account for faster inspection lines.

Models Deployed 50x Faster

Pegatron’s system uses NVIDIA A100 Tensor Core GPUs to deploy AI models up to 50x faster than it could when training them on workstations, cutting weeks of work down to a few hours.

“With our unified platform based on DGX, we have our data lake, datasets and training all in one place, so we can deploy a model in one click,” Hsiao said.

Using the Multi-Instance GPU capability in A100 GPUs, Pegatron cut developers’ wait time for access to an accelerator from nearly an hour to 30 seconds. “That lets us dynamically schedule jobs like AI inference and lightweight model training,” he said.

As part of its AI inference work, the system analyzes more than 10 million images a day using NVIDIA A40 and other GPUs.

Triton, NGC Simplify AI Jobs

Pegatron uses NVIDIA Triton Inference Server, open-source software that helps deploy, run and scale AI models across all types of processors, and frameworks. It works hand-in-hand with NVIDIA TensorRT, software that simplifies neural networks to reduce latency.

“Triton and TensorRT make it easy to serve multiple clients and convert jobs to the most cost-effective precision levels,” he said.
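
For a sense of what serving a model through Triton looks like in practice, here is a minimal sketch using Triton’s standard Python HTTP client. The model name ("defect_classifier"), tensor names and image shape are hypothetical placeholders, not Pegatron’s actual configuration.

```python
# Minimal sketch: query a running Triton server with its Python HTTP client.
# Model name, tensor names and shape below are illustrative placeholders.
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

# One normalized RGB image, NCHW layout, batch size 1.
image = np.random.rand(1, 3, 224, 224).astype(np.float32)

inputs = [httpclient.InferInput("input__0", list(image.shape), "FP32")]
inputs[0].set_data_from_numpy(image)
outputs = [httpclient.InferRequestedOutput("output__0")]

result = client.infer(model_name="defect_classifier", inputs=inputs, outputs=outputs)
scores = result.as_numpy("output__0")
print("predicted class:", scores.argmax())
```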

Hsiao’s team optimizes pretrained AI models it downloads from NGC, NVIDIA’s hub for GPU-optimized software, delivered in containers that integrate with Kubernetes.

“NGC is very helpful because we get with one click the deep learning frameworks and all the other software components we need, stuff that used to take us a lot of time to pull together,” he said.

Next Step: Digital Twins

Taking another step in smarter manufacturing, Pegatron is piloting NVIDIA Omniverse, a platform for developing digital twins.

It has two use cases so far. First, it’s testing Omniverse Replicator to generate synthetic data showing what products coming down the inspection line might look like under different lighting conditions or orientations, as sketched below. This data will make its perception models smarter.
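
As a rough illustration of the Replicator idea, the snippet below randomizes a product’s orientation and the scene lighting, then writes labeled images to disk. It follows Replicator’s documented Python scripting style, but exact calls can vary by Omniverse version, and the asset path and parameter ranges are made-up placeholders rather than Pegatron’s setup.

```python
# Illustrative Omniverse Replicator script (runs inside an Omniverse app, not standalone).
# Asset path and randomization ranges are placeholders, not Pegatron's actual pipeline.
import omni.replicator.core as rep

with rep.new_layer():
    product = rep.create.from_usd("omniverse://localhost/Library/product.usd")  # placeholder asset
    light = rep.create.light(light_type="Sphere", intensity=30000)
    camera = rep.create.camera(position=(0, 50, 100), look_at=product)
    render_product = rep.create.render_product(camera, (1024, 1024))

    # Every frame: new orientation for the product, new intensity and position for the light.
    with rep.trigger.on_frame(num_frames=500):
        with product:
            rep.modify.pose(rotation=rep.distribution.uniform((0, -180, 0), (0, 180, 0)))
        with light:
            rep.modify.attribute("intensity", rep.distribution.uniform(10000, 60000))
            rep.modify.pose(position=rep.distribution.uniform((-100, 50, -100), (100, 150, 100)))

    # Write RGB images plus 2D bounding-box labels for training.
    writer = rep.WriterRegistry.get("BasicWriter")
    writer.initialize(output_dir="_synthetic_products", rgb=True, bounding_box_2d_tight=True)
    writer.attach([render_product])
```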

Second, it’s creating digital twins of inspection machines. That lets workers manage the machines remotely, gain better insight into predictive maintenance and simulate software updates before deploying them to a physical machine.

“Today, when a system goes down, we can only check logs that might be incomplete, but with Omniverse, we can replay what happened to understand how to fix it, or, run simulations to predict how it will behave in the future,” he said.

Pegatron engineer monitors factory remotely with Omniverse
A Pegatron engineer monitors an inspection machine remotely with Omniverse.

What’s more, industrial engineers who care about throughput, automation engineers responsible for downtime, and equipment engineers who handle maintenance can work together on the same virtual system at the same time, even when logging in from different countries.

Vision of a Virtual Factory

If all goes well, Pegatron could have Omniverse available on its inspection machines before the end of the year.

Meanwhile, Hsiao is looking for partners who can help build virtual versions of a whole production line in Omniverse. Longer term, his vision is to create a digital twin of an entire factory.

“In my opinion, the greatest impact will come from building a full virtual factory so we can try out things like new ways to route products through the plant,” he said. “When you just build it out without a simulation first, your mistakes are very costly.”


AI Shows the Way: Seoul Robotics Helps Cars Move, Park on Their Own

Imagine driving a car — one without self-driving capabilities — to a mall, airport or parking garage, and using an app to have the car drive off to park itself.

Software company Seoul Robotics is using NVIDIA technology to make this possible — turning non-autonomous cars into self-driving vehicles.

Headquartered in Korea, the company’s initial focus is on improving first- and last-mile logistics such as parking. Its Level 5 Control Tower is a mesh network of sensors and computers placed on infrastructure around a facility, like buildings or light poles — rather than on individual cars — to capture an unobstructed view of the environment.

The system enables cars to move autonomously by directing their vehicle-to-everything, or so-called V2X, communication systems. These systems pass information from a vehicle to infrastructure, other vehicles, any surrounding entities — and vice versa. V2X technology, which comes standard in many modern cars, is used to improve road safety, traffic efficiency and energy savings.

Seoul Robotics’ platform, dubbed LV5 CTRL TWR, collects 3D data from the environment using cameras and lidar. Computer vision and deep learning-based AI analyze the data, determining the most efficient and safest paths for vehicles within the covered area.
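
Seoul Robotics hasn’t published its planning algorithms, but the core idea — finding a collision-free route over a map built from infrastructure sensors — can be pictured with a textbook grid search. The sketch below is purely illustrative: a breadth-first search over a toy occupancy grid, not the company’s actual planner, which works in continuous space with real vehicle dynamics.

```python
# Toy illustration: shortest route over an occupancy grid built from infrastructure
# sensors (1 = blocked by an obstacle or another car, 0 = free). Conceptual only.
from collections import deque

def plan_path(grid, start, goal):
    """Breadth-first search from start to goal; returns a list of (row, col) cells."""
    rows, cols = len(grid), len(grid[0])
    parents = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = parents[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 and (nr, nc) not in parents:
                parents[(nr, nc)] = cell
                queue.append((nr, nc))
    return None  # no collision-free route found

grid = [
    [0, 0, 0, 0],
    [1, 1, 0, 1],
    [0, 0, 0, 0],
]
print(plan_path(grid, start=(0, 0), goal=(2, 0)))
```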

Then, through V2X, the platform can manage a car’s existing features, such as adaptive-cruise-control, lane-keeping and brake-assist functions, to safely get it from place to place.

LV5 CTRL TWR is built using NVIDIA CUDA libraries for creating GPU-accelerated applications, as well as the Jetson AGX Orin module for high-performance AI at the edge. NVIDIA GPUs are used in the cloud for global fleet path planning.

Seoul Robotics is a member of NVIDIA Metropolis — a partner program centered on an application framework and set of developer tools that supercharge vision AI applications — and NVIDIA Inception, a free, global program that nurtures cutting-edge startups.

Autonomy Through Infrastructure

Seoul Robotics is pioneering a new path to level 5 autonomy, or full driving automation, with what’s known as “autonomy through infrastructure.”

“Instead of outfitting the vehicles themselves with sensors, we’re outfitting the surrounding infrastructure with sensors,” said Jerone Floor, vice president of product and solutions at Seoul Robotics.

Using V2X capabilities, LV5 CTRL TWR sends commands from infrastructure to cars, making vehicles turn right or left, move from point A to B, brake and more. It can position a car with an accuracy of plus or minus four centimeters.

“No matter how smart a vehicle is, if another car is coming from around a corner, for example, it won’t be able to see it,” Floor said. “LV5 CTRL TWR provides vehicles with the last bits of information gathered from having a holistic view of the environment, so they’re never ‘blind.’”

These communication protocols already exist in most vehicles, he added. LV5 CTRL TWR acts as the AI-powered brain of the instructive mechanisms, requiring nothing more than a firmware update in cars.

“From the beginning, we knew we needed deep learning in the system in order to achieve the really high performance required to reach safety goals — and for that, we needed GPU acceleration,” Floor said. “So, we designed the system from the ground up based on NVIDIA GPUs and CUDA.”

NVIDIA CUDA libraries help the Seoul Robotics team render massive amounts of data from the 3D sensors in real time, as well as accelerate training and inference for its deep learning models.

As a Metropolis member, Seoul Robotics received early access to software development kits and the NVIDIA Jetson AGX Orin for edge AI.

“The compute capabilities of Jetson AGX Orin allow us to have the LV5 CTRL TWR cover more area with a single module,” Floor added. “Plus, it handles a wide temperature range, enabling our system to work in both indoor and outdoor units, rain or shine.”

Deployment Across the Globe

LV5 CTRL TWR is in early commercial deployment at a BMW manufacturing facility in Munich.

According to Floor, cars must often change locations once they’re manufactured, from electrical repair stations to parking lots for test driving and more.

Equipped with LV5 CTRL TWR, the BMW facility has automated such movement of cars — resulting in time and cost savings. Automating car transfers also enhances safety for employees and frees them up to focus on other tasks, like headlight alignment and more, Floor said.

And from the moment a vehicle is fully manufactured until it’s delivered to the customer, it moves through up to seven parking lots. Moving cars manually costs manufacturers anywhere from $30 to $60 per car, per lot — meaning LV5 CTRL TWR can address a $30 billion market.

The technology behind LV5 CTRL TWR can be used across industries, Floor highlighted. Beyond automotive factories, Seoul Robotics envisions its platform being deployed across the globe — at retail stores, airports, traffic intersections and more.

NVIDIA Jetson AGX Orin 32GB production modules are now available.

Learn more about NVIDIA Metropolis and apply to join NVIDIA Inception.

Feature image courtesy of BMW Group.


Digital Art Professor Kate Parsons Inspires Next Generation of Creators This Week ‘In the NVIDIA Studio’

Editor’s note: This post is part of our weekly In the NVIDIA Studio series, which celebrates featured artists, offers creative tips and tricks, and demonstrates how NVIDIA Studio technology accelerates creative workflows.

Many artists can edit a video, paint a picture or build a model — but transforming one’s imagination into stunning creations can now involve breakthrough design technologies.

Kate Parsons, a digital art professor at Pepperdine University and this week’s featured In the NVIDIA Studio artist, helped bring a music video for How Do I Get to Invincible to life using virtual reality and NVIDIA GeForce RTX GPUs.

The project, a part of electronic music trio The Glitch Mob’s visual album See Without Eyes, quickly and seamlessly moved from 3D to VR, thanks to NVIDIA GPU acceleration.

We All FLOAT On

Parsons has blended her passions for art and technology as co-founder of FLOAT LAND, a pioneering studio that innovates across digital media, including video, as well as virtual and augmented reality.

She and co-founder Ben Vance collaborate on projects at the intersection of art and interactivity. They design engaging animations, VR art exhibits and futuristic interactive AR displays.

FLOAT LAND embraces the latest advances in GPU technology to push creative boundaries. Photo courtesy of United Nude.

When The Glitch Mob set out to turn See Without Eyes into a state-of-the-art visual album, the group tapped long-term collaborators Strangeloop Studios, who in turn reached out to FLOAT LAND to create art for the song How Do I Get to Invincible. Parsons and her team brought a dreamlike feel to the project.

Working with the team at Strangeloop Studios, FLOAT LAND created the otherworldly landscapes for How Do I Get to Invincible, harnessing the power of NVIDIA RTX GPUs.

FLOAT LAND is a collaborative studio focused on the intersection of art and interactivity, founded by Kate Parsons and Ben Vance. Photo by Nicole Gawalis.

“We have a long history of using NVIDIA GPUs due to our early work in the VR space,” Parsons said. “The early days of VR were a bit like the Wild West, and it was really important for us to have reliable systems — we consider NVIDIA GPUs to be a key part of our rigs.”

Where Dreams Become (Virtual) Reality

The FLOAT LAND team used several creative applications for the visual album. They began by researching techniques in real-time visual effects to work within Unity software. This included using custom shaders inspired by the Shadertoy computer graphics tool and exploring different looks to create a surreal mix of dark and moody.

Then, the artists built test terrains using Cinema 4D, a professional 3D animation, simulation and rendering solution, and Unity, a leading platform for creating and operating interactive, real-time 3D content, to explore post-effects like tilt shift, ambient occlusion and chromatic aberration. They also used the Unity plugin Fog Volume 3 to create rich, dynamic clouds and quickly explore many options.

 

Using NVIDIA RTX GPUs in Unity accelerated the work of Parsons’s team through advanced shading techniques. Plus, NVIDIA DLSS increased the interactivity of the viewport.

“Unity was central to our production process, and we iterated both in editor and in real time to get the look we wanted,” Parsons said. “Some of the effects really pushed the limits of our GPUs. It wouldn’t have been possible to work in real time without GPU acceleration – we would’ve had to render out clips, which takes anywhere from 10 to thousands of times longer.”

And like all great projects, even once it was done, the visual album wasn’t really done, Parsons said. Working with virtual entertainment company Wave, FLOAT LAND used its work on the visual album to turn the entire piece into a VR experience. The Unity and GPU-native groundwork greatly accelerated this process, Parsons added.

The Glitch Mob called it “a completely new way to experience music.”

Best in Class

When she isn’t making her own breathtaking creations, Parsons helps her students grow as creators. She teaches basic and advanced digital art at Pepperdine — including how to use emerging technologies to transform creative workflows.

“Many of my students get really obsessed with learning certain kinds of software — as if learning the software will automatically bypass the need to think creatively,” she said. “In this sense, software is just a tool.”

Parsons advises her students to try a bit of everything and see what sticks. “If there’s something you want to learn, spend about three weeks with it and see if it’s a tool that will be useful for you,” she said.

Vibrant Matter courtesy of FLOAT LAND.

While many of her projects dip into new, immersive fields like AR and VR, Parsons highlighted the importance of understanding the fundamentals, like workflows in Adobe Photoshop and Illustrator. “Students should learn the difference between bitmap and vector images early on,” she said.

Parsons works across multiple systems powered by NVIDIA GPUs — a Dell Alienware PC with a GeForce RTX 2070 GPU in her classroom; a custom PC with a GeForce RTX 2080 in her home office; and a Razer Blade 15 with a GeForce RTX 3070 Laptop GPU for projects on the go. When students ask which laptop they should use for their creative education, Parsons points them to NVIDIA Studio-validated PCs.

Parsons and Vance’s creative workspace, powered by an NVIDIA GeForce RTX 2070 GPU.

Creatives going back to school can start off on the right foot with an NVIDIA Studio-validated laptop. Whether for 3D modeling, VR, video and photo editing or any other creative endeavor, a powerful laptop is ready to be the backbone of creativity. Explore these task-specific recommendations for NVIDIA Studio laptops.

#CreatorJourney Challenge

In the spirit of learning, the NVIDIA Studio team is posing a challenge for the community to show off personal growth. Participate in the #CreatorJourney challenge for a chance to be showcased on NVIDIA Studio social media channels.

Entering is easy. Post an older piece of artwork alongside a more recent one to showcase your growth as an artist. Follow and tag NVIDIA Studio on Instagram, Twitter or Facebook, and use the #CreatorJourney tag to join.

Learn something new today: Access tutorials on the Studio YouTube channel and get creativity-inspiring updates directly to your inbox by subscribing to the NVIDIA Studio newsletter.


From Sapling to Forest: Five Sustainability and Employment Initiatives We’re Nurturing in India

For over a decade, NVIDIA has invested in social causes and communities in India as part of our commitment to corporate social responsibility.

Bolstering those efforts, we’re unveiling this year’s investments in five projects that have been selected by the NVIDIA Foundation team, focused on the areas of environmental conservation, ecological restoration, social innovation and job creation.

The projects we’re supporting include:

Energy Harvest Charitable Trust

This project aims to reduce air pollution by preventing open-field burning and instead encouraging sustainable energy alternatives such as turning waste biomass into green fertilizer. The trust also promotes village-level entrepreneurship by integrating local farmers into the paddy straw value chain and supporting straw-based constructions in Rajpura, Punjab.

Sustainable Environment and Ecological Development Society

This initiative will restore the ecological conditions of 12.3 acres of mangroves and train 1,125 community members on disaster response to build community resilience in Sunderbans, West Bengal. The nonprofit partner will focus on bolstering local infrastructure and maintaining sanitation — all with an ecosystem-based approach.

Foundation for Ecological Security

We’re building on our existing partnership with the foundation by funding the construction of irrigation and water-harvesting structures in the Koraput district of Odisha. Last year’s work benefited 2,500 tribal households by promoting natural farming, which ensured food availability and increased vegetative cover for nearly 500 acres of land.

Naandi Foundation

Following a previous investment, our partnership with the Naandi Foundation will continue to build resilient farming communities in the Araku region of Andhra Pradesh. The project will train 3,000 farmers in Naandi’s Farmer Field Schools to earn sustained income by cultivating coffee and pepper plants using organic regenerative practices. Previous efforts trained over 3,000 farmers across 115 villages and resulted in the production and distribution of nearly 34,000 kilograms of coffee fruit.

Udyogini

This nonprofit strives for environmental conservation and women’s economic empowerment. Our funding will go toward conserving and restoring endangered medicinal and aromatic plants by training 600 Himalayan villagers — especially women — in sustainable cultivation, harvest and plant monitoring in Uttarakhand.

NVIDIA’s corporate social responsibility initiatives span the globe. In the last fiscal year, our joint efforts with employees led to a contribution of over $22.3 million to 5,700 nonprofits in 50+ countries around the world.

Read more about previous projects we’ve funded in India and corporate social responsibility at NVIDIA.


Top Israel Medical Center Partners with AI Startups to Help Detect Brain Bleeds, Other Critical Cases

Israel’s largest private medical center is working with startups and researchers to bring potentially life-saving AI solutions to real-world healthcare workflows.

With more than 1.5 million patients across eight medical centers, Assuta Medical Centers conduct over 100,000 surgeries, 800,000 imaging tests and hundreds of thousands of other health diagnostics and treatments each year. These create huge amounts of de-identified data that Assuta is securely sharing with more than 20 startups through its innovation arm, RISE, launched last year in collaboration with NVIDIA.

One of the startups, Aidoc, is helping Assuta alert imaging technicians within minutes to possible bleeding in the brain and other critical conditions in a patient’s scan, using AI-based insights. Another, Rhino Health, is using federated learning powered by NVIDIA FLARE to make AI development on diverse medical datasets from hospitals across the globe more accessible to Assuta’s collaborators.

Both companies are members of NVIDIA Inception, a global program designed to support cutting-edge startups with go-to-market support, expertise and technology.

“We’re building a hub to serve innovators with the infrastructure they need to develop, test and deploy new AI technology for image analysis and other data-heavy computations in radiology, pathology, genomics and more,” said Daniel Rabina, director of innovation at RISE. “We want to make collaboration with companies, research institutes, hospitals and universities possible while maintaining patient data privacy.”

To support AI development, testing and deployment, Assuta has installed NVIDIA DGX A100 systems on premises and adopted the NVIDIA Clara Holoscan platform, plus software libraries including MONAI for healthcare imaging and NVIDIA FLARE for federated learning.

NVIDIA and RISE are collaborating on RISE with US, a program built to introduce selected Israeli entrepreneurs and early-stage startups working on digital and computational health solutions to the U.S. market. Applications to join the program are open until August 28.

Aidoc Flags Urgent Cases for Radiologist Review

Aidoc, which is New York-based with a research branch in Israel, has developed FDA-cleared AI solutions to flag acute conditions including brain hemorrhages, pulmonary embolisms and strokes from imaging scans.

Aidoc desktop and mobile interface

Founded in 2016 by a group of veterans from the Israel Defense Forces, the startup has deployed its AI to analyze millions of cases across more than 1,000 medical facilities, primarily in the U.S., Europe and Israel.

Its algorithms integrate seamlessly with the PACS imaging workflow used by radiologists worldwide, working behind the scenes to analyze each imaging study and flag urgent findings — bringing potentially critical cases to the radiologist’s attention for review.

Aidoc’s tools can help address the growing shortage of radiologists globally by reducing the time a radiologist needs to spend on each case, enabling care for more patients. And by pushing potentially critical cases to the top of a radiologist’s pile, the AI can help clinicians catch important findings sooner, improving patient outcomes.

The startup uses NVIDIA Tensor Core GPUs in the cloud through AWS for AI training and inference. Adopting NVIDIA GPUs helped reduce model training time from days to a couple hours.

Immediate Impact at Assuta Medical Centers 

Assuta is a private chain of hospitals that provides elective care — typically dealing with routine screenings rather than emergency room patients — but it adopted Aidoc’s solution to help imaging technicians spot critical cases that need urgent attention among its roughly 200,000 CT tests conducted annually.

When a radiology scan isn’t urgent, it may take a couple days for a doctor to review the case. Aidoc can shrink this time to minutes by identifying concerning cases as soon as the scans are captured by radiology staff.

At Assuta, urgent findings are typically found among cancer patients, or people who have recently undergone surgery and need follow-up scans. The healthcare organization is using Aidoc’s AI tools to detect intracranial hemorrhages and two kinds of pulmonary embolism.

“We saw the impact right away,” said Dr. Michal Guindy, head of medical imaging and head of RISE at Assuta. “Just a couple days after installing Aidoc at Assuta, a patient came in for a follow-up scan after a brain procedure and had an intracranial hemorrhage. Because Aidoc alerted the imaging technician to flag it for further review, our doctors were able to call the patient while they were on their way home and immediately redirect them to the hospital for treatment.”

Rhino Health Fosters Collaboration With Federated Learning

In addition to deploying AI models in full-scale, real-world settings, Assuta is supporting innovators who are developing, testing or validating new medical AI solutions by sharing the healthcare organization’s data, while also using federated learning through Rhino Health.

Assuta has millions of radiology cases digitized — a desirable resource for researchers and startups looking for robust, diverse datasets to train or validate their AI models. But because of data privacy protection, it’s important that patient information stays safely within the firewall of medical centers like Assuta.

“Data diversity is necessary to develop AI models meant for the use of medical teams. Without optimal computing resources, it would be extremely difficult to use our data and make the magic happen,” said Rabina. “That’s why we need federated learning enabled by both NVIDIA and Rhino Health.”

Federated learning allows companies, healthcare institutions and universities to work together by training and validating AI models across multiple organizations’ datasets while maintaining each organization’s data privacy. Rhino Health provides a neutral platform — available through the NVIDIA AI Enterprise software suite — that enables secure collaboration, powered by NVIDIA A100 GPUs in the cloud and the NVIDIA FLARE federated learning framework.
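
Federated averaging is the pattern underneath most such setups: each site trains on its own data, only model weights leave the firewall, and a server averages them into a shared global model. The sketch below shows that idea in plain NumPy; it is a conceptual illustration, not the NVIDIA FLARE or Rhino Health API, and the “training” step is a stand-in for real on-premises model training.

```python
# Conceptual federated averaging: hospitals share model weights, never patient data.
# Plain NumPy illustration -- not the NVIDIA FLARE or Rhino Health API.
import numpy as np

def local_update(weights, local_data, lr=0.1):
    """Stand-in for one round of on-premises training at a hospital.
    Here we simply nudge the weights toward the mean of the site's (synthetic) data."""
    return weights + lr * (local_data.mean(axis=0) - weights)

def federated_round(global_weights, sites):
    """Each site trains locally; the server averages the returned weights."""
    updates = [local_update(global_weights.copy(), data) for data in sites]
    return np.mean(updates, axis=0)

# Three hospitals with differently distributed (synthetic) data.
rng = np.random.default_rng(0)
sites = [rng.normal(loc=mu, size=(100, 4)) for mu in (0.0, 1.0, 2.0)]

weights = np.zeros(4)
for round_num in range(10):
    weights = federated_round(weights, sites)
print("global weights after 10 rounds:", weights)
```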

With Rhino Health, Assuta aims to help its collaborators develop AI models across hospitals internationally, resulting in more generalizable algorithms that perform more accurately across different patient populations.

Register for NVIDIA GTC, running online Sept. 19-22, to hear more from leaders in healthcare AI.

Subscribe to NVIDIA healthcare news and watch on demand as Assuta, Aidoc and Rhino Health speak at a GTC panel.


GFN Thursday Brings Thunder to the Cloud With ‘Rumbleverse’ Arriving on GeForce NOW

It’s time to rumble in Grapital City with Rumbleverse launching today on GeForce NOW.

Punch your way into the all-new, free-to-play Brawler Royale from Iron Galaxy Studios and Epic Games Publishing, streaming from the cloud to nearly all devices.

That means gamers can tackle, uppercut, body slam and more from any GeForce NOW-compatible device, including mobile, at full PC quality. And GeForce NOW is the only way for Mac gamers to join the fray.

Plus, jump over to the list of seven new titles in the GeForce NOW library. Members will also notice a new “GET” button that makes accessing titles they’re interested in more seamless, so they can get right into gaming.

Drop in, Throw Down!

Drop into the chaotic world of Grapital City, where players must brawl it out to become the champion. Rumblers can create their own fighters using hundreds of unique items to stand out in the crowd — a 40-person melee crowd, to be exact.

Or, maybe, the strategy isn’t to stand out. With a massive city to run around in — including skyscrapers and an urban landscape — there are plenty of places to hide, duke it out and find crates full of weapons, like baseball bats or a stop sign, as well as upgrades to level up with.

Players can explore a ton of moves to take down other rumblers and discover perks with each round to come up with devious new ways to be the last person standing.

To learn the ways of the Rumble, Playground mode is available to explore Grapital City, in addition to various training modules scattered across the map. Players can also form a tag team and fight back to back with a friend.

Rumbleverse is free to play, so getting started is easy when paired with a free GeForce NOW membership. Thanks to the cloud, members don’t even have to wait for the game to download.

Level up to an RTX 3080 membership to stream at up to 1440p and 120 frames per second with ultra-low latency, plus dedicated access to RTX 3080 servers and eight-hour gaming sessions. It’s the best way to get the upper hand when duking it out with fellow rumblers.

Smash the ‘GET’ Button

The GeForce NOW apps on PC, Mac, iOS and browser now feature a “GET” button to link members directly to the digital store of their choice, making it even easier to quickly purchase titles or access free-to-play ones. And without having to wait for game downloads due to cloud streaming, members can dive into their new games as quickly as possible.

GeForce NOW GET Button
Get what you want, when you want it.

Hunted, the newest season of Apex Legends, is also available for members to stream. Hunt or be hunted in the cloud with a new Legend, Vantage, an updated Kings Canyon map, an increased level cap and more in Apex Legends: Hunted.

Apex Legends Season 14 on GeForce NOW
Hunt or be hunted in the cloud in the newest season of “Apex Legends.”

Plus, make sure to check out the seven new titles being added this week:

How do you plan to be a champion this week? We’ve got a couple of options for you to choose from. Let us know your answer on Twitter or in the comments below.


Design in the Age of Digital Twins: A Conversation With Graphics Pioneer Donald Greenberg

Asked about the future of design, Donald Greenberg holds up a model of a human aorta.

“After my son became an intravascular heart surgeon at the Cleveland Clinic, he hired one of my students to use CAT scans and create digital 3D models of an aortic aneurysm,” said the computer graphics pioneer in a video interview from his office at Cornell University.

The models enabled custom stents that fit so well that patients could leave the hospital soon after they were inserted. It’s one example Greenberg gives of how computer graphics are becoming part of every human enterprise.

A Whole New Chapter

Expanding the frontier, he’s creating new tools for an architecture design course based on today’s capabilities for building realistic 3D worlds and digital twins. It will define a holistic process so everyone from engineers to city planners can participate in a design.

The courseware is still at the concept stage, but his passion for it is palpable. “This is my next big project, and I’m very excited about it,” said the computer graphics professor of the work, which is sponsored by NVIDIA.

“NVIDIA is superb at the hardware and the software algorithms, and for a long time its biggest advantage has been in how it fits them together,” he said.

Greenberg imagines a design process open enough to include urban planners concerned with affordable housing, environmental activists mindful of sustainable living and neighbors who want to know the impact a new structure might have on their access to sunlight.

“I want to put people from different disciplines in the same foxhole so they can see things from different points of view at the same time,” said Greenberg, whose courses have spanned Cornell’s architecture, art, computer science, engineering and business departments.

Teaching With Omniverse

A multidisciplinary approach has fueled Greenberg’s work since 1968, when he started teaching at both Cornell’s colleges of engineering and architecture. And he’s always been rooted in the latest technology.

Today, that means inspiring designers and construction experts to enter the virtual worlds built with photorealistic graphics, simulations and AI in NVIDIA Omniverse.

“Omniverse expands, to multiple domains, the work done with Universal Scene Description, developed by some of the brightest graphics people at places like Pixar — it’s a superb environment for modern collaboration,” he said.

It’s a capability that couldn’t have existed without the million-X advances in computing Greenberg has witnessed in his 54-year career.

He recalls his excitement in 1979 when he bought a VAX-11/780 minicomputer, his first system capable of a million instructions per second. In one of his many SIGGRAPH talks, he said designers would someday have personal workstations capable of 100 MIPS.

Seeing Million-X Advances

The prediction proved almost embarrassingly conservative.

“Now I have a machine that’s 10^12 times more powerful than my first computer — I feel like a surfer riding a tidal wave, and that’s one reason I’m still teaching,” he said.

Don Greenberg with students
Greenberg with some of his students trying out the latest design tools.

It’s a long way from the system at General Electric’s Visual Simulation Laboratory in Syracuse, New York, where in the late 1960s he programmed on punch cards to help create one of the first videos generated solely with computer graphics. The 18-minute animation wowed audiences and took him and 14 of his architecture students two years to create.

NASA used the same GE system to train astronauts how to dock the Apollo module with the lunar lander. And the space agency was one of the early adopters of digital twins, he notes, a fact that saved the lives of the Apollo 13 crew after a system malfunction two days into their trip to the moon.

From Sketches to Digital Twins

For Greenberg, it all comes down to the power of computer graphics.

“I love to draw; 99% of intellectual intake comes through our eyes, and my recent projects are about how to go from a sketch or idea to a digital twin,” he said.

Among his few regrets, he said he’ll miss attending SIGGRAPH in person this year.

“It became an academic home for my closest friends and collaborators, a community of mavericks and the only place I found creative people with both huge imaginations and great technical skills, but it’s hard to travel at my age,” said the 88-year-old, whose pioneering continues in front of his computer screen.

“I have a whole bunch of stuff I’m working on that I call techniques in search of a problem, like trying to model how the retina sees an image — I’m just getting started on that one,” he said.

—————————————————————————————————————————————————

Learn More About Omniverse at SIGGRAPH

Anyone can get started working on digital twins with Omniverse by taking a free, self-paced online course at the NVIDIA Deep Learning Institute. And individuals can download Omniverse free.

Educators can request early access to the “Graphics & Omniverse” teaching kit. SIGGRAPH attendees can join a session on “The Metaverse for International Educators” or one of four hands-on training labs on Omniverse.

To learn more, watch NVIDIA CEO Jensen Huang and others in a special address at SIGGRAPH, available on demand.


AI Flying Off the Shelves: Restocking Robot Rolls Out to Hundreds of Japanese Convenience Stores

Tokyo-based startup Telexistence this week announced it will deploy NVIDIA AI-powered robots to restock shelves at hundreds of FamilyMart convenience stores in Japan.

There are 56,000 convenience stores in Japan — the third-highest density worldwide. Around 16,000 of them are run by FamilyMart. Telexistence aims to save time for these stores by offloading repetitive tasks like refilling shelves of beverages to a robot, allowing retail staff to tackle more complex tasks like interacting with customers.

It’s just one example of what can be done by Telexistence’s robots, which run on the NVIDIA Jetson edge AI and robotics platform. The company is also developing AI-based systems for warehouse logistics with robots that sort and pick packages.

“We want to deploy robots to industries that support humans’ everyday life,” said Jin Tomioka, CEO of Telexistence. “The first space we’re tackling this is through convenience stores — a huge network that supports daily life, especially in Japan, but is facing a labor shortage.”

The company, founded in 2017, next plans to expand to convenience stores in the U.S., which is also plagued with a labor shortage in the retail industry — and where more than half of consumers say they visit one of the country’s 150,000 convenience stores at least once a month.

Telexistence Robots Stock Up at FamilyMart

Telexistence will begin deploying its restocking robots, called TX SCARA, to 300 FamilyMart stores in August — and aims to bring the autonomous machines to additional FamilyMart locations, as well as other major convenience store chains, in the coming years.

“Staff members spend a lot of time in the back room of the store, restocking shelves, instead of out with customers,” said Tomioka. “Robotics-as-a-service can allow staff to spend more time with customers.”

TX SCARA runs on a track and includes multiple cameras to scan each shelf, using AI to identify drinks that are running low and plan a path to restock them. The AI system can successfully restock beverages automatically more than 98% of the time.

In the rare cases that the robot misjudges the placement of the beverage or a drink topples over, there’s no need for the retail staff to drop their task to get the robot back up and running. Instead, Telexistence has remote operators on standby, who can quickly address the situation by taking manual control through a VR system that uses NVIDIA GPUs for video streaming.

Telexistence estimates that a busy convenience store needs to restock more than 1,000 beverages a day. TX SCARA’s cloud system maintains a database of product sales based on the name, date, time and number of items stocked by the robots during operation. This allows the AI to prioritize which items to restock first based on past sales data.
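
The prioritization itself can be as simple as ranking display slots by how soon they will run empty, given recent sales rates and what’s currently on the shelf. The snippet below is a hypothetical illustration of that logic, not Telexistence’s actual scheduler; the product names and numbers are invented.

```python
# Illustrative only: rank shelf slots by estimated time until they run empty,
# using recent sales rates. Not Telexistence's actual prioritization logic.
from dataclasses import dataclass

@dataclass
class Slot:
    name: str
    on_shelf: int            # drinks currently in the display slot
    sold_per_hour: float     # recent sales rate from the cloud database

def restock_order(slots):
    """Sort slots so the ones that will empty soonest get refilled first."""
    def hours_until_empty(slot):
        if slot.sold_per_hour <= 0:
            return float("inf")
        return slot.on_shelf / slot.sold_per_hour
    return sorted(slots, key=hours_until_empty)

slots = [
    Slot("green tea", on_shelf=12, sold_per_hour=3.0),
    Slot("cola", on_shelf=2, sold_per_hour=4.5),
    Slot("water", on_shelf=20, sold_per_hour=6.0),
]
for slot in restock_order(slots):
    print(slot.name)
# -> cola, water, green tea
```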

Telexistence robot restocks beverages at a FamilyMart store

Achieving Edge AI With NVIDIA Jetson 

TX SCARA has multiple AI models under the hood. An object-detection model identifies the types of drinks in a store to determine which one belongs on which shelf. It’s combined with another model that helps detect the movement of the robot’s arm, so it can pick up a drink and accurately place it on the shelf between other products. A third is for anomaly detection: recognizing if a drink has fallen over or off the shelf. One more detects which drinks are running low in each display area.

The Telexistence team used custom pre-trained neural networks as their base models, adding synthetic and annotated real-world data to fine-tune the neural networks for their application. Using a simulation environment to create more than 80,000 synthetic images helped the team augment their dataset so the robot could learn to detect drinks in any color, texture or lighting environment.

For AI model training, the team relied on an NVIDIA DGX Station. The robot itself uses two NVIDIA Jetson embedded modules: the NVIDIA Jetson AGX Xavier for AI processing at the edge, and the NVIDIA Jetson TX2 module to transmit video streaming data.

On the software side, the team uses the NVIDIA JetPack SDK for edge AI and the NVIDIA TensorRT SDK for high-performance inference.

“Without TensorRT, our models wouldn’t run fast enough to detect objects in the store efficiently,” said Pavel Savkin, chief robotics automation officer at Telexistence.

Telexistence further optimized its AI models using half-precision (FP16) instead of single-precision floating-point format (FP32).
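
As a generic illustration of that kind of optimization — not Telexistence’s pipeline — this is roughly how an ONNX model gets built into an FP16 TensorRT engine with the TensorRT Python bindings. The file names are placeholders, and API details vary slightly between TensorRT versions; the calls shown follow the TensorRT 8.x API.

```python
# Generic sketch: build an FP16 TensorRT engine from an ONNX model.
# File names are placeholders; calls follow the TensorRT 8.x Python bindings.
import tensorrt as trt

logger = trt.Logger(trt.Logger.WARNING)
builder = trt.Builder(logger)
network = builder.create_network(1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
parser = trt.OnnxParser(network, logger)

with open("detector.onnx", "rb") as f:
    if not parser.parse(f.read()):
        raise RuntimeError(parser.get_error(0))

config = builder.create_builder_config()
config.set_flag(trt.BuilderFlag.FP16)   # run layers in half precision where supported

engine_bytes = builder.build_serialized_network(network, config)
with open("detector_fp16.engine", "wb") as f:
    f.write(engine_bytes)
```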

Learn more about the latest in AI and robotics at NVIDIA GTC, running online Sept. 19-22. Registration is free.


Future of Creativity on Display ‘In the NVIDIA Studio’ During SIGGRAPH Special Address

Editor’s note: This post is part of our weekly In the NVIDIA Studio series, which celebrates featured artists, offers creative tips and tricks, and demonstrates how NVIDIA Studio technology accelerates creative workflows. 

A glimpse into the future of AI-infused virtual worlds was on display at SIGGRAPH — the world’s largest gathering of computer graphics experts — as NVIDIA founder and CEO Jensen Huang put the finishing touches on the company’s special address.

Announcements included a host of updates to a pillar of the NVIDIA Studio software suite: NVIDIA Omniverse, a platform for 3D design collaboration and world simulation. New features and improvements to apps including Create, Machinima, Audio2Face and Nucleus will help 3D artists build virtual worlds, digital twins and avatars for the metaverse.

Each month, NVIDIA Studio Driver releases provide artists, creators and 3D developers with the best performance and reliability when working with creative applications. Available now, the August NVIDIA Studio Driver gives creators peak reliability for using Omniverse and their favorite creative apps.

Plus, this week’s featured In the NVIDIA Studio artist, Simon Lavit, exhibits his mastery of Omniverse as the winner of the #MadeInMachinima contest. The 3D artist showcases the creative workflow for his victorious short film, Painting the Astronaut.

Omniverse Expands

NVIDIA Omniverse — an open platform based on Universal Scene Description (USD) for building and connecting virtual worlds — just received a significant upgrade.

Omniverse Apps — including Create 2022.2 — received a major PhysX update with soft-body simulation, particle-cloth simulation and soft-contact models, delivering more realism to physically accurate virtual worlds. New OmniLive workflows give artists more freedom through a collaboration interface for non-destructive USD editing.

Omniverse users can now add animations and emotions with the Audio2Face app.

Audio2Face 2022.1 is now available in beta, including major updates that enable AI-powered emotion control and full facial animation, delivering more realism than ever. Users can now direct emotion over time, as well as mix key emotions like joy, amazement, anger and sadness. The AI can also direct eye, teeth and tongue motion, in addition to the avatar’s skin, providing an even more complete facial-animation solution.

Learn additional details on these updates and more.

Winning the #MadeInMachinima Contest

Since he first held a pen, Simon Lavit has been an artist. Now, Lavit adds Omniverse Machinima to the list of creative tools he’s mastered, as the winner of the #MadeInMachinima contest.

His entry, Painting the Astronaut, was selected by an esteemed panel of judges that included numerous creative experts.

Powered by a GeForce RTX 3090 GPU, Lavit’s creative workflow showcases the breadth and interoperability of Omniverse, its Apps and Connectors. He used lighting and scene setting to establish the short film’s changing mood, helping audiences understand the story’s progression. Its introduction, for example, is bright and clear. The film then gets darker, conveying the idea of the unknown as the character starts his journey.

The lighting for “Painting the Astronaut” helps guide the story, with 3D assets from the Omniverse library.

Lavit storyboarded on paper before starting his digital process with the Machinima and Omniverse Create apps. He quickly turned to NVIDIA’s built-in 3D asset library, filled with free content from Mount & Blade II: Bannerlord, MechWarrior 5: Mercenaries, Squad and more, to populate the scene.

The 3D model for the spaceship was created in Autodesk Maya within Omniverse.

Then, Lavit used Autodesk Maya to create 3D models for some of his hero assets — like the protagonist Sol’s spaceship. The Maya Omniverse Connector allowed him to visualize scenes within Omniverse Create. He also benefited from RTX-accelerated ray tracing and AI denoising in Maya, resulting in highly interactive and photorealistic renders.

Next, Lavit textured the models in Adobe Substance 3D, which also has an Omniverse Connector. Substance 3D uses NVIDIA Iray rendering, including for textures and substances. It also features RTX-accelerated light- and ambient-occlusion baking, which optimizes assets in seconds.

Lavit then returned to Machinima for final layout, animation and render. The result was composited using Adobe After Effects, with an extra layer of effects and music. Ultimately, the contest-winning piece of art came from “a pretty simple workflow to keep the complexity to a minimum,” Lavit said.

“Painting the Astronaut” netted Lavit a GeForce RTX 3080 Ti-powered ASUS ProArt StudioBook 16.

To power his future creativity from anywhere, Lavit won an ASUS ProArt StudioBook 16. This NVIDIA Studio laptop packs top-of-the-line technology into a device that enables users to work on the go with world-class power from a GeForce RTX 3080 Ti Laptop GPU and beautiful 4K display.

3D Artist and Omniverse #MadeInMachinima contest winner Simon Lavit.

Lavit, born in France and now based in the U.S., sees every project as an adventure. Living in a different country from where he was born changed his vision of art, he said. Lavit regularly finds inspiration from the French graphic novel series, The Incal, which is written by Alejandro Jodorowsky and illustrated by renowned cartoonist Jean Giraud, aka Mœbius.

Made the Grade

The next generation of creative professionals is heading back to campus. Choosing the right NVIDIA Studio laptop can be tricky, but students can use this guide to find the perfect tool to power their creativity — like the Lenovo Slim 7i Pro X, an NVIDIA Studio laptop now available with a GeForce RTX 3050 Laptop GPU.

While the #MadeInMachinima contest has wrapped, creators can graduate to an NVIDIA RTX A6000 GPU in the #ExtendOmniverse contest, running through Friday, Aug. 19. Perform something akin to magic by making your own NVIDIA Omniverse Extension for a chance to win an RTX A6000 or GeForce RTX 3090 Ti GPU. Winners will be announced in September at GTC.
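
For anyone curious what an Omniverse Extension even looks like before entering, the usual starting point is a small Python class that the Omniverse Kit runtime loads and unloads. The sketch below follows that standard template; the extension behavior and window contents are placeholder examples, not a contest entry.

```python
# Minimal Omniverse Kit extension skeleton (runs inside an Omniverse app, not standalone).
# The window title and label are placeholder examples.
import omni.ext
import omni.ui as ui

class HelloExtension(omni.ext.IExt):
    def on_startup(self, ext_id):
        # Called when the extension is enabled; build a tiny UI window.
        self._window = ui.Window("Hello Omniverse", width=300, height=120)
        with self._window.frame:
            ui.Label("My first extension is running.")

    def on_shutdown(self):
        # Called when the extension is disabled; release UI resources.
        self._window.destroy()
        self._window = None
```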

Follow NVIDIA Omniverse on Instagram, Medium, Twitter and YouTube for additional resources and inspiration. Check out the Omniverse forums, and join our Discord server and Twitch channel to chat with the community.

Follow NVIDIA Studio on Instagram, Twitter and Facebook. Access tutorials on the Studio YouTube channel and get updates directly in your inbox by subscribing to the NVIDIA Studio newsletter.


At SIGGRAPH, NVIDIA CEO Jensen Huang Illuminates Three Forces Sparking Graphics Revolution

In a swift, eye-popping special address at SIGGRAPH, NVIDIA execs described the forces driving the next era in graphics, and the company’s expanding range of tools to accelerate them.

“The combination of AI and computer graphics will power the metaverse, the next evolution of the internet,” said Jensen Huang, founder and CEO of NVIDIA, kicking off the 45-minute talk.

It will be home to connected virtual worlds and digital twins, a place for real work as well as play. And, Huang said, it will be vibrant with what will become one of the most popular forms of robots: digital human avatars.

With 45 demos and slides, five NVIDIA speakers announced:

  • A new platform for creating avatars, NVIDIA Omniverse Avatar Cloud Engine (ACE).
  • Plans to build out Universal Scene Description (USD), the language of the metaverse.
  • Major extensions to NVIDIA Omniverse, the computing platform for creating virtual worlds and digital twins.
  • Tools to supercharge graphics workflows with machine learning.

“The announcements we made today further advance the metaverse, a new computing platform with new programming models, new architectures and new standards,” he said.

Metaverse applications are already here.

Huang pointed to consumers trying out virtual 3D products with augmented reality, telcos creating digital twins of their radio networks to optimize and deploy radio towers and companies creating digital twins of warehouses and factories to optimize their layout and logistics.

Enter the Avatars

The metaverse will come alive with virtual assistants, avatars we interact with as naturally as talking to another person. They’ll work in digital factories, play in online games and provide customer service for e-tailers.

“There will be billions of avatars,” said Huang, calling them “one of the most widely used kinds of robots” that will be designed, trained and operated in Omniverse.

Digital humans and avatars require natural language processing, computer vision, complex facial and body animations and more. To move and speak in realistic ways, this suite of complex technologies must be synced to the millisecond.

It’s hard work that NVIDIA aims to simplify and accelerate with Omniverse Avatar Cloud Engine. ACE is a collection of AI models and services that build on NVIDIA’s work spanning everything from conversational AI to animation tools like Audio2Face and Audio2Emotion.

MetaHuman in Unreal Engine image courtesy of Epic Games.

“With Omniverse ACE, developers can build, configure and deploy their avatar application across any engine in any public or private cloud,” said Simon Yuen, a senior director of graphics and AI at NVIDIA. “We want to democratize building interactive avatars for every platform.”

ACE will be available early next year, running on embedded systems and all major cloud services.

Yuen also demonstrated the latest version of Omniverse Audio2Face, an AI model that can create facial animation directly from voices.

“We just added more features to analyze and automatically transfer your emotions to your avatar,” he said.

Future versions of Audio2Face will create avatars from a single photo, applying textures automatically and generating animation-ready 3D meshes. They’ll sport high-fidelity simulations of muscle movements an AI can learn from watching a video — even lifelike hair that responds as expected to virtual grooming.

USD, a Foundation for the 3D Internet

Many superpowers of the metaverse will be grounded in USD, a foundation for the 3D internet.

The metaverse “needs a standard way of describing all things within 3D worlds,” said Rev Lebaredian, vice president of Omniverse and simulation technology at NVIDIA.

“We believe Universal Scene Description, invented and open sourced by Pixar, is the standard scene description for the next era of the internet,” he added, comparing USD to HTML in the 2D web.
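
To make the comparison concrete, here is a minimal scene written with Pixar’s open-source USD Python bindings (pxr); the file and prim names are arbitrary examples.

```python
# Minimal USD scene using Pixar's open-source Python bindings (PyPI package: usd-core).
# File and prim names are arbitrary examples.
from pxr import Usd, UsdGeom

stage = Usd.Stage.CreateNew("hello_world.usda")
world = UsdGeom.Xform.Define(stage, "/World")
sphere = UsdGeom.Sphere.Define(stage, "/World/Sphere")
sphere.GetRadiusAttr().Set(2.0)

stage.SetDefaultPrim(world.GetPrim())
stage.GetRootLayer().Save()
# hello_world.usda can now be opened by any USD-aware tool, Omniverse included.
```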

Lebaredian described NVIDIA’s vision for USD as a key to opening even more opportunities than those in the physical world.

“Our next milestones aim to make USD performant for real-time, large-scale virtual worlds and industrial digital twins,” he said, noting NVIDIA’s plans to help build out support in USD for international character sets, geospatial coordinates and real-time streaming of IoT data.

NVIDIA's planned investments in USD
Examples of NVIDIA’s planned investments in USD

To further accelerate USD adoption, NVIDIA will release a compatibility testing and certification suite for USD. It lets developers know their custom USD components produce an expected result.

In addition, NVIDIA announced a set of simulation-ready USD assets, designed for use in industrial digital twins and AI training workflows. They join a wealth of USD resources available online for free including USD-ready scenes, on-demand tutorials, documentation and instructor-led courses.

“We want everyone to help build and advance USD,” said Lebaredian.

Omniverse Expands Its Palette

One of the biggest announcements of the special address was a major new release of NVIDIA Omniverse, a platform that’s been downloaded nearly 200,000 times.

Huang called Omniverse “a USD platform, a toolkit for building metaverse applications, and a compute engine to run virtual worlds.”

The latest version packs several upgraded core technologies and more connections to popular tools.

The links, called Omniverse Connectors, are now in development for Unity, Blender, Autodesk Alias, Siemens JT, SimScale, the Open Geospatial Consortium and more. Connectors are now available in beta for PTC Creo, Visual Components and SideFX Houdini. These new developments join Siemens Xcelerator, now part of the Omniverse network, welcoming more industrial customers into the era of digital twins.

Like the internet itself, Omniverse is “a network of networks,” connecting users across industries and disciplines, said Steve Parker, NVIDIA’s vice president of professional graphics.

New features in NVIDIA Omniverse
Examples of new features in NVIDIA Omniverse.

Nearly a dozen leading companies will showcase Omniverse capabilities at SIGGRAPH, including hardware, software and cloud-service vendors ranging from AWS and Adobe to Dell, Epic and Microsoft. A half dozen companies will conduct NVIDIA-powered sessions on topics such as AI and virtual worlds.

Speeding Physics, Animating Animals

Parker detailed several technology upgrades in Omniverse. They span enhancements for simulating physically accurate materials with the Material Definition Language (MDL), real-time physics with PhysX and the hybrid rendering and AI system, RTX.

“These core technology pillars are powered by NVIDIA high performance computing from the edge to the cloud,” Parker said.

For example, PhysX now supports soft-body and particle-cloth simulation, bringing more physical accuracy to virtual worlds in real time. And NVIDIA is fully open sourcing MDL so it can readily support graphics API standards like OpenGL or Vulkan, making the materials standard more broadly available to developers.

Omniverse also will include neural graphics capabilities developed by NVIDIA Research that combine RTX graphics and AI. For example:

  • Animal Modelers let artists iterate on an animal’s form with point clouds, then automatically generate a 3D mesh.
  • GauGAN360, the next evolution of NVIDIA GauGAN, generates 8K, 360-degree panoramas that can easily be loaded into an Omniverse scene.
  • Instant NeRF creates 3D objects and scenes from 2D images.

An Omniverse Extension for NVIDIA Modulus, a machine learning framework, will let developers use AI to speed simulations of real-world physics up to 100,000x, so the metaverse looks and feels like the physical world.

In addition, Omniverse Machinima — subject of a lively contest at SIGGRAPH — now sports content from Post Scriptum, Beyond the Wire and Shadow Warrior 3 as well as new AI animation tools like Audio2Gesture.

A demo from Industrial Light & Magic showed another new feature. Omniverse DeepSearch uses AI to help teams intuitively search through massive databases of untagged assets, bringing up accurate results for terms even when they’re not specifically listed in metadata.

Graphics Get Smart

One of the essential pillars of the emerging metaverse is neural graphics. It’s a hybrid discipline that harnesses neural network models to accelerate and enhance computer graphics.

“Neural graphics intertwines AI and graphics, paving the way for a future graphics pipeline that is amenable to learning from data,” said Sanja Fidler, vice president of AI at NVIDIA. “Neural graphics will redefine how virtual worlds are created, simulated and experienced by users,” she added.

AI will help artists spawn the massive amount of 3D content needed to create the metaverse. For example, they can use neural graphics to capture objects and behaviors in the physical world quickly.

Fidler described NVIDIA software to do just that, Instant NeRF, a tool to create a 3D object or scene from 2D images. It’s the subject of one of NVIDIA’s two best paper awards at SIGGRAPH.

In the other best paper award, neural graphics powers a model that can predict and reduce reaction latencies in esports and AR/VR applications. The two best papers are among 16 total that NVIDIA researchers are presenting this week at SIGGRAPH.

neural graphics
Neural graphics blends AI into the graphics pipeline.

Designers and researchers can apply neural graphics and other techniques to create their own award-winning work using new software development kits NVIDIA unveiled at the event.

Fidler described one of them, Kaolin Wisp, a suite of tools to create neural fields — AI models that represent a 3D scene or object — with just a few lines of code.
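
As a rough picture of what “neural field” means — this is the underlying concept, not Kaolin Wisp’s actual API — the core object is just a small network that maps a 3D coordinate to quantities like color and density, which a renderer then queries along camera rays.

```python
# Toy neural field in PyTorch: a coordinate MLP mapping (x, y, z) -> (r, g, b, density).
# Illustrates the concept only; Kaolin Wisp adds encodings, training and rendering on top.
import torch
import torch.nn as nn

class NeuralField(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),
        )

    def forward(self, xyz):
        return self.net(xyz)

field = NeuralField()
points = torch.rand(1024, 3)   # sample positions inside the unit cube
rgba = field(points)           # per-point color + density a renderer would integrate
print(rgba.shape)              # torch.Size([1024, 4])
```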

Separately, NVIDIA announced NeuralVDB, the next evolution of the open-sourced standard OpenVDB that industries from visual effects to scientific computing use to simulate and render water, fire, smoke and clouds.

NeuralVDB uses neural models and GPU optimization to dramatically reduce memory requirements so users can interact with extremely large and complex datasets in real time and share them more efficiently.

“AI, the most powerful technology force of our time, will revolutionize every field of computer science, including computer graphics, and NVIDIA RTX is the engine of neural graphics,” Huang said.

Watch the full special address at NVIDIA’s SIGGRAPH event site. That’s where you’ll also find details of labs, presentations and the debut of a behind-the-scenes documentary on how we created our latest GTC keynote.
