Grand Entrance: Human Horizons Unveils Smart GT Built on NVIDIA DRIVE Orin

Tourer vehicles just became a little more grand.

Electric vehicle maker Human Horizons provided a detailed glimpse earlier this month of its latest production model: the GT HiPhi Z. The intelligent EV is poised to redefine the grand tourer vehicle category with innovative, software-defined capabilities that bring luxurious cruising to the next level.

The vehicle’s marquee features include an in-vehicle AI assistant and an autonomous driving system powered by NVIDIA DRIVE Orin.

The GT badge first appeared on vehicles in the mid-20th century, combining smooth performance with a roomy interior for longer joy rides. Since then, the segment has diversified, with varied takes on horsepower and body design.

The HiPhi Z further iterates on the vehicle type, emphasizing smart performance and a convenient, comfortable in-cabin experience.

Smooth Sailing

An EV designed to be driven, the GT HiPhi Z also incorporates robust advanced driver assistance features that can give humans a break on longer trips.

The HiPhi Pilot ADAS platform provides dual redundancy for computing, perception, communication, braking, steering and power supply. It uses the high-performance AI compute of NVIDIA DRIVE Orin and 34 sensors to perform assisted driving and parking, as well as smart summon.

DRIVE Orin is designed to handle the large number of applications and deep neural networks that run simultaneously for autonomous driving capabilities. It’s architected to meet systematic safety standards such as ISO 26262 ASIL-D.

With this high level of performance at its core, the HiPhi Pilot system delivers seamless automated features that remove the stress from driving.

Intelligent Interior

Staying true to its GT DNA, the HiPhi Z sports a luxurious interior that delivers effortless comfort for both the driver and passengers.

The cabin includes suede bucket seats, ambient panel lights and a 23-speaker audio system for an immersive sensory environment.

It’s also intelligent, with the HiPhi Bot AI companion that can automatically adjust aspects of the vehicle experience. The AI assistant uses a vehicle-grade, adjustable, high-speed motion robotic arm to interact with passengers. It can move back and forth in less than a second, with control accuracy of up to 0.001 millimeters, performing a variety of delicate movements seamlessly.

The GT HiPhi Z is currently on display in Shenzhen, China, and will tour nearly a dozen other cities. Human Horizons plans to release details of the full launch at the Chengdu Auto Show in August.

The post Grand Entrance: Human Horizons Unveils Smart GT Built on NVIDIA DRIVE Orin appeared first on NVIDIA Blog.


Merge Ahead: Researcher Takes Software Bridge to Quantum Computing

Kristel Michielsen was into quantum computing before quantum computing was cool.

The computational physicist simulated quantum computers as part of her Ph.D. work in the Netherlands in the early 1990s.

Today, she manages one of Europe’s largest facilities for quantum computing, the Jülich Unified Infrastructure for Quantum Computing (JUNIQ). Her mission is to help developers pioneer this new realm with tools like NVIDIA Quantum Optimized Device Architecture (QODA).


“We can’t go on with today’s classical computers alone because they consume so much energy, and they can’t solve some problems,” said Michielsen, who leads the quantum program at the Jülich Supercomputing Center near Cologne. “But paired with quantum computers that won’t consume as much energy, I believe there may be the potential to solve some of our most complex problems.”

Enter the QPU

Because quantum processors, or QPUs, harness the properties of quantum mechanics, they’re ideally suited to simulating processes at the atomic level. That could enable fundamental advances in chemistry and materials science, starting domino effects in everything from more efficient batteries to more effective drugs.

QPUs may also help with thorny optimization problems in fields like logistics. For example, airlines face daily challenges figuring out which planes to assign to which routes.
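Such plane-to-route assignment problems can be made concrete with toy numbers. The sketch below (illustrative costs, not from the article) uses a brute-force classical search, which makes the combinatorial blow-up plain: n planes mean n! candidate assignments.

```python
from itertools import permutations

# Toy cost matrix: cost[i][j] = cost of assigning plane i to route j.
cost = [[4, 2, 8],
        [3, 7, 5],
        [6, 1, 9]]

# Try every one-to-one assignment of planes to routes (n! of them).
best = min(permutations(range(3)),
           key=lambda p: sum(cost[i][p[i]] for i in range(3)))
best_cost = sum(cost[i][best[i]] for i in range(3))
print(best, best_cost)  # (0, 2, 1) with total cost 10
```

At three planes this is trivial; at hundreds of flights the search space becomes astronomically large, which is why such problems attract quantum and AI-accelerated approaches.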

In one experiment, a quantum computer recently installed at Jülich showed the most efficient way to route nearly 500 flights — demonstrating the technology’s potential.

Quantum computing also promises to take AI to the next level. In separate experiments, Jülich researchers used quantum machine learning to simulate how proteins bind to DNA strands and classify satellite images of Lyon, France.

Hybrids Take Best of Both Worlds

Several prototype quantum computers are now available, but none is powerful or dependable enough to tackle commercially relevant jobs yet. But researchers see a way forward.

“For a long time, we’ve had a vision of hybrid systems as the only way to get practical quantum computing — linked to today’s classical HPC systems, quantum computers will give us the best of both worlds,” Michielsen said.

And that’s just what Jülich and other researchers around the world are building today.

Quantum Gets 49x Boost on A100 GPUs

In addition to its current analog quantum system, Jülich plans next year to install a neutral atom quantum computer from Paris-based Pasqal. It’s also been running quantum simulations on classical systems such as its JUWELS Booster, which packs over 3,700 NVIDIA A100 Tensor Core GPUs.

“The GPU version of our universal quantum-computer simulator, called JUQCS, has given us up to 49x speedups compared to jobs running on CPU clusters — this work uses almost all the system’s GPU nodes and relies heavily on its InfiniBand network,” she said, citing a recent paper.
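JUQCS itself is a massively parallel production code, but the core idea of a universal state-vector simulator fits in a few lines. The sketch below (plain Python, illustrative only) tracks all 2^n complex amplitudes of an n-qubit register and applies gates as pairwise amplitude updates; the doubling of memory and work with every added qubit is exactly what the GPUs and the InfiniBand network help with.

```python
import math

def apply_gate(state, gate, qubit):
    """Apply a 2x2 single-qubit gate to one qubit of a 2^n-element state vector."""
    out = state[:]
    step = 1 << qubit
    for i in range(len(state)):
        if not i & step:                       # indices i and i|step differ only in `qubit`
            a, b = state[i], state[i | step]
            out[i]        = gate[0][0] * a + gate[0][1] * b
            out[i | step] = gate[1][0] * a + gate[1][1] * b
    return out

def apply_cnot(state, control, target):
    """Flip the `target` bit's amplitudes wherever the `control` bit is set."""
    out = state[:]
    for i in range(len(state)):
        if i & (1 << control) and not i & (1 << target):
            j = i | (1 << target)
            out[i], out[j] = state[j], state[i]
    return out

H = [[1 / math.sqrt(2),  1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]

state = [1 + 0j, 0j, 0j, 0j]                  # 2 qubits in |00>
state = apply_cnot(apply_gate(state, H, 0), control=0, target=1)
probs = [abs(a) ** 2 for a in state]
print(probs)  # ~[0.5, 0.0, 0.0, 0.5]: a Bell state
```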

Classical systems like the JUWELS Booster now also use NVIDIA cuQuantum, a software development kit for accelerating quantum jobs on GPUs. “For us, it’s great for cross-platform benchmarking, and for others it could be a great tool to start or optimize their quantum simulation codes,” Michielsen said of the SDK.

A100 GPUs (green) form the core of the JUWELS Booster that can simulate quantum jobs with the NVIDIA cuQuantum SDK.

Hybrid Systems, Hybrid Software

With multiple HPC and quantum systems on hand and more on the way for Jülich and other research centers, one of the challenges is tying it all together.

“The HPC community needs to look in detail at applications that span everything from climate science and medicine to chemistry and physics to see what parts of the code can run on quantum systems,” she said.

It’s a Herculean task for developers entering the quantum computing era, but help’s on the way.

NVIDIA QODA acts like a software bridge. With a function call, developers can choose to run their quantum jobs on GPUs or quantum processors.

QODA’s high-level language will support every kind of quantum computer, and its compiler will be available as open-source software. And it’s supported by quantum system and software providers including Pasqal, Xanadu, QC Ware and Zapata.
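The exact QODA API isn’t reproduced here; as a purely hypothetical Python sketch of the idea, the same quantum kernel is retargeted between a GPU simulator and real quantum hardware with a single argument. All names below are invented for illustration.

```python
# Hypothetical illustration only: these functions and names are invented
# to show the retargeting idea, not the real QODA interfaces.
def simulate_on_gpu(kernel):
    return f"simulated {kernel} on GPU"

def submit_to_qpu(kernel):
    return f"queued {kernel} on QPU"

BACKENDS = {"gpu": simulate_on_gpu, "qpu": submit_to_qpu}

def run(kernel, target="gpu"):
    """Dispatch the same kernel to whichever backend is requested."""
    return BACKENDS[target](kernel)

print(run("bell_pair"))                 # simulated bell_pair on GPU
print(run("bell_pair", target="qpu"))   # queued bell_pair on QPU
```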

Quantum Leap for HPC, AI Developers

Michielsen foresees JUNIQ providing QODA to researchers across Europe who use its quantum services.

“This helps bring quantum computing closer to the HPC and AI communities,” she said. “It will speed up how they get things done without them needing to do all the low-level programming, so it makes their life much easier.”

Michielsen expects many researchers will use QODA to try out hybrid quantum-classical computers over the coming year and beyond.

“Who knows, maybe one of our users will pioneer a new example of real-world hybrid computing,” she said.

Image at top courtesy of Forschungszentrum Jülich / Ralf-Uwe Limbach



Sequences That Stun: Visual Effects Artist Surfaced Studio Arrives ‘In the NVIDIA Studio’

Editor’s note: This post is part of our weekly In the NVIDIA Studio series, which celebrates featured artists, offers creative tips and tricks, and demonstrates how NVIDIA Studio technology accelerates creative workflows. 

Visual effects savant Surfaced Studio steps In the NVIDIA Studio this week to share his clever film sequences, Fluid Simulation and Destruction, as well as his creative workflows.

 

These sequences feature quirky visual effects that Surfaced Studio is renowned for demonstrating on his YouTube channel.

 

Surfaced Studio’s successful tutorial style, dubbed “edutainment,” features his wonderfully light-hearted personality — providing vital techniques and creative insights which lead to fun, memorable learning for his subscribers.

“I’ve come to accept my nerdiness — how my lame humor litters my tutorials — and I’ve been incredibly fortunate that I ended up finding a community of like-minded people who are happy to learn alongside my own attempts of making art,” Surfaced Studio mused.

The Liquification Situation

To create the Fluid Simulation sequence, Surfaced Studio began in Adobe After Effects by combining two video clips: one of him pretending to be hit by a fluid wave, and another of his friend Jimmy running towards him. Jimmy magically disappears because of the masked layer that Surfaced Studio applied.

Video playback within Blender.

He then rendered the clip and imported it into Blender. This served as a reference to match the 3D scene geometry with the fluid simulation.

The fluid simulation built with Mantaflow collides with Surfaced Studio.

Surfaced Studio then selected the Mantaflow fluid feature, tweaking parameters to create the fluid simulation. For a beginner’s look at fluid simulation techniques, check out his tutorial, FLUID SIMULATIONS in Blender 2.9 with Mantaflow. This feature, accelerated by his GeForce RTX 2070 Laptop GPU, bakes simulations faster than with a CPU alone.

 

To capture a collision with accurate, realistic physics, Surfaced Studio set up rigid body objects, creating the physical geometry for the fluid to collide with. The Jimmy character was marked with the Use Flow property to emit the fluid at the exact moment of the collision.

Speed vectors unlocked the motion blur effect accelerated by Surfaced Studio’s GPU.

“It’s hard not to recommend NVIDIA GPUs for anyone wanting to explore the creative space, and I’ve been using them for well over a decade now,” said Surfaced Studio. 

Surfaced Studio also enabled speed vectors to implement motion blur effects directly on the fluid simulation, adding further realism to the short.

His entire 3D creative workflow in Blender was accelerated by the RTX 2070 Laptop GPU: the fluid simulation, motion blur effects, animations and mesh generation. Blender Cycles RTX-accelerated OptiX ray tracing unlocked quick interactive modeling in the viewport and lightning-fast final renders. Surfaced Studio said his GPU saved him countless hours to reinvest in his creativity.

Take note of the multiple layers needed to bring 3D animations to life.

Surfaced Studio then reached the compositing stage in After Effects, applying the GPU-accelerated Curves effect to the water, shaping and illuminating it to his liking.

He then used the Boris FX Mocha AE plugin to rotoscope Jimmy — or create sequences by tracing over live-action footage frame by frame — to animate the character. This can be a lengthy process, but the GPU-accelerated plugin completed the task in mere moments.

Color touchups were applied with the Hue/Saturation, Brightness and Color Balance features, which are also GPU accelerated.

Finally, Surfaced Studio used the GPU-accelerated NVENC encoder to rapidly export his final video files.

For a deeper dive into Surfaced Studio’s process, watch his tutorial: Add 3D Fluid Simulations to Videos w/ Blender & After Effects.

“A lot of the third-party plugins that I use regularly, including Boris FX Mocha Pro, Continuum, Sapphire, Video Copilot Element 3D and Red Giant, all benefit heavily from GPU acceleration,” the artist said.

His GeForce RTX 2070 Laptop GPU worked overtime with this project — but the Fluid Simulation sequence only scratches the surface(d) of the artist’s skills.

Fire in the Hole!

Surfaced Studio built the short sequence Destruction following a similar creative workflow to Fluid Simulation. 3D scenes in Blender complemented video footage composited in After Effects, with realistic physics applied.

Destruction in Blender for Absolute Beginners covers the basics of how to break objects in Blender, add realistic physics to objects, calculate physics weight for fragments, and animate entire scenes.

3D Destruction Effects in Blender & After Effects offers tips and tricks for further compositing in After Effects, placing 2D stock footage in 3D elements, final color grading and camera-shaking techniques.

“Edutainment” at its finest.

These tools set the foundation for aspiring 3D artists to build their own destructive scenes — and the “edutainment” is highly recommended viewing.

Visual effects artist Surfaced Studio has worked with Intel, GIGABYTE, Boris FX, FX Home (HitFilm) and Gudsen.

Follow NVIDIA Studio on Instagram, Twitter and Facebook. Access tutorials on the Studio YouTube channel and get updates directly in your inbox by subscribing to the NVIDIA Studio newsletter.



AI on the Sky: Stunning New Images from the James Webb Space Telescope to Be Analyzed by, and Used to Train, AI

The release by U.S. President Joe Biden Monday of the first full-color image from the James Webb Space Telescope is already astounding — and delighting — humans around the globe.

“We can see possibilities nobody has ever seen before, we can go places nobody has ever gone before,” Biden said during a White House press event. “These images are going to remind the world that America can do big things.”

But humans aren’t the only audience for these images. Data from what Biden described as the “miraculous telescope” are also being soaked up by a new generation of GPU-accelerated AI created at UC Santa Cruz.

And Morpheus, as the team at UC Santa Cruz has dubbed the AI, won’t just be helping humans make sense of what we’re seeing. It will also use images from the $10 billion space telescope to better understand what it’s looking for.

The image released Monday is the deepest and sharpest infrared view of the distant universe to date. Dubbed “Webb’s First Deep Field,” the image of galaxy cluster SMACS 0723 is overflowing with detail.

Answering Questions

NASA reported that the thousands of galaxies — including the faintest objects ever observed in the infrared — have appeared in Webb’s view for the first time.

And Monday’s image represents just a tiny piece of what’s out there, with the image covering a patch of sky roughly the size of a grain of sand held at arm’s length by someone on the ground, explained NASA Administrator Bill Nelson.

The telescope’s iconic array of 18 interlocking hexagonal mirrors, which spans a total of 21 feet 4 inches, is peering farther into the universe, and deeper into the universe’s past, than any tool to date.

“When you look at something as big as this we are going to be able to answer questions that we don’t even know what the questions are yet,” Nelson said.

Strange New Worlds

U.S. President Joe Biden unveiled the first image from the $10 billion James Webb Space Telescope Monday. It shows galaxy cluster SMACS 0723 as it appeared 4.6 billion years ago. The combined mass of this galaxy cluster acts as a gravitational lens, magnifying much more distant galaxies behind it.

The telescope won’t just see back further in time than any scientific instrument — almost to the beginning of the universe — it may also help us see if planets outside our solar system are habitable, Nelson said.

Morpheus — which played a key role in helping scientists understand images taken by NASA’s Hubble Space Telescope — will help scientists ask, and answer, these questions by analyzing images of objects that are farther away, and phenomena from deeper in the universe’s past, than ever before.

“The JWST will really enable us to see the universe in a new way that we’ve never seen before,” said UC Santa Cruz Astronomy and Astrophysics Professor Brant Robertson. “So it’s really exciting.”

Working with Ryan Hausen, a Ph.D. student in UC Santa Cruz’s computer science department, Robertson helped create a deep learning framework that classifies astronomical objects, such as galaxies, from the raw data streaming out of telescopes on a pixel-by-pixel basis.

Eventually, Morpheus will use the new images to learn, too. Not only are the JWST’s optics unique, but the telescope will also be collecting light from galaxies that are farther away — and thus redder — than those visible to Hubble.

Morpheus is trained on UC Santa Cruz’s Lux supercomputer. The machine includes 28 GPU nodes with two NVIDIA V100 GPUs each.

In other words, while we’ll all be feasting our eyes on these images for years to come, scientists will be feeding data from the JWST to AI.

Tune in: NASA and its partners will release the full series of Webb’s first full-color images and data, known as spectra, Tuesday, July 12, during a live NASA TV broadcast.



Windfall: Omniverse Accelerates Turning Wind Power Into Clean Hydrogen Fuel

Engineers are using the NVIDIA Omniverse 3D simulation platform as part of a proof of concept that promises to become a model for putting green energy to work around the world.

Dubbed Gigastack, the pilot project — led by a consortium that includes Phillips 66 and Denmark-based renewable energy company Ørsted — will create low-emission fuel for the energy company’s Humber refinery in England.

Hydrogen is expected to play a critical role as the world moves to reduce its dependence on fossil fuels over the coming years. The market for hydrogen fuel is predicted to grow over 45x to $90 billion by 2030, up from $1.8 billion today.

The Gigastack project aims to showcase how green energy can be woven into complex, industrial energy infrastructure on a massive scale and accelerate net-zero emissions progress.

To make that happen, new kinds of collaboration are vital, explained Ahsan Yousufzai, global head of business development for energy at NVIDIA, during a conversation about the project in an on-demand panel discussion at NVIDIA GTC.

“To meet global sustainability targets, the entire energy ecosystem needs to work together,” Yousufzai said. “For that, technologies like AI and digital twins will play a major role.”

The system — now in the planning stages — will draw power from Ørsted’s massive Hornsea One 1,218-megawatt offshore wind farm, the largest in the world upon its completion in January last year.

Hornsea will be connected to ITM Power’s Gigastack electrolyzer facility, which will use electrolysis to turn water into clean, renewable hydrogen fuel.

That fuel, in turn, will be put to work at Phillips 66’s Humber refinery, decarbonizing one of the U.K.’s largest industrial facilities.

The project is unique because of its scale — with plans to eventually ramp up Gigastack into a massive 1-gigawatt electrolyzer system — and because of its potential to become a blueprint for deploying electrolyzer technology for wider decarbonization.
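A rough sense of that scale comes from a back-of-envelope estimate. The figure of about 50 kWh of electricity per kilogram of hydrogen below is an assumed, typical electrolyzer value, not a number from the article.

```python
power_kw = 1_000_000      # planned 1-gigawatt electrolyzer system
kwh_per_kg_h2 = 50        # assumed typical electrolyzer energy use
hours_per_day = 24

kg_per_day = power_kw * hours_per_day / kwh_per_kg_h2
print(f"~{kg_per_day / 1000:.0f} tonnes of hydrogen per day")  # ~480
```

Even this assumes full utilization; real output depends on electrolyzer efficiency and how steadily the wind farm delivers power.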

Weaving all these elements together, however, requires tight collaboration between team members from Element Energy, ITM Power, Ørsted, Phillips 66 and Worley.

Worley — one of the largest global providers of engineering and professional services to the oil, gas, mining, power and infrastructure industries — turned to Aspen Technology’s Aspen OptiPlant, sophisticated software that’s a workhorse for planning and optimizing some of the world’s most complex infrastructure.

“When you have a finite amount of money to be spent, you want to maximize the number of options on how facilities can be designed, fabricated and constructed,” explained Vishal Mehta, senior vice president at Worley.

“This is the importance of rapid optioneering, where you’re able to run AI models and engines with not only mathematics but also visual representation,” Mehta said. “People can come up with ideas and, in real time, move them around with mathematical equations changing in the background.”

Worley relied on AspenTech’s OptiPlant to develop a 3D conceptual layout of the Gigastack green hydrogen project. The industrial optimization software combines decades of process modeling expertise with cutting-edge AI and machine learning.

The next step: connecting OptiPlant’s sophisticated physics-based plant piping and layout capabilities to Omniverse to build a 3D conceptual layout of the plant, potentially allowing teams to work together on plant design in real time, with their various 3D software tools, datasets and teams linked in one environment.

“With a traditional model review, it’s one person leading the way, but here we have this opportunity for everybody to be immersed in the facility,” said Sonali Singh, vice president of product management for performance engineering at AspenTech. “They can really all collaborate by looking at their individual priorities.”

Omniverse can be the platform on which they further build their digital twin of the growing facility, enabling connection of simulation data and AIs, capturing knowledge from human and AI collaborators working on the project and bringing intelligent optimization.

To learn more, watch the on-demand GTC session and explore the Gigastack project.

Find out how Siemens Gamesa and Zenotech are accelerating offshore wind farm simulations with NVIDIA’s full-stack technologies.



No Fueling Around: Designers Collaborate in Extended Reality on Porsche Electric Race Car

A one-of-a-kind electric race car revved to life before it was manufactured — or even prototyped — thanks to GPU-powered extended reality technology.

At the Automotive Innovation Forum in May, NVIDIA worked with Autodesk VRED to showcase a photorealistic Porsche electric sports car in augmented reality, with multiple attendees collaborating in the same immersive environment.

The demo delivered a life-size digital twin of the Porsche Mission R in AR and VR, which are collectively known as extended reality, or XR. Using NVIDIA CloudXR, Varjo XR-3 headsets and Lenovo Android tablets, audiences saw the virtual Porsche with photorealistic lighting and shadows.

All images courtesy of Autodesk.

Audiences could view the virtual race car side by side with a physical car on site. With this direct comparison, they witnessed the photorealistic nature of the AR model — from the color of the metals, to the surface of the tires, to the environmental lighting.

The stunning demo, which was shown through an Autodesk VRED collaborative session, ran on NVIDIA RTX-based virtual workstations.

There were two ways to view the demo. First, NVIDIA CloudXR streamed the experience to the tablets from a virtualized NVIDIA Project Aurora server, which was powered by NVIDIA A40 GPUs on a Lenovo ThinkSystem SR670 server. Attendees could also use Varjo headsets, which were locally tethered to NVIDIA RTX A6000 GPUs running on a Lenovo ThinkStation P620 workstation.

Powerful XR Technologies Behind the Streams

Up to five users at a time entered the scene, with two users wearing headsets to see the Porsche car in mixed reality, and three users on tablets to view the car in AR. Users were represented as avatars in the session.

With NVIDIA CloudXR, the forum attendees remotely streamed the photorealistic Porsche model. Built on NVIDIA RTX technology, CloudXR extends NVIDIA RTX Virtual Workstation software, which enables users to stream fully accelerated immersive graphics from a virtualized environment.

This demo used a virtualized Lenovo ThinkSystem SR670 server to power NVIDIA’s Project Aurora — a software and hardware platform for XR streaming at the edge. Project Aurora delivers the horsepower of NVIDIA A40 GPUs, so users could experience the rich, real-time graphics of the Porsche model from a machine room over a private 5G network.

Through server-based streaming with Project Aurora, multiple users from different locations were brought together to experience the demo in a single immersive environment. With the help of U.K.-based integrator The Grid Factory, Project Aurora is now available to be deployed in any enterprise.

Learn more about advanced XR streaming with NVIDIA CloudXR.

 



Mission-Driven: Takeaways From Our Corporate Responsibility Report

NVIDIA’s latest corporate responsibility report shares our efforts in empowering employees and putting our technologies to work for the benefit of humanity.

Amid ongoing global economic concerns and pandemic challenges, this year’s report highlights our ability to attract and retain people who come here to do their life’s work while tackling some of the world’s greatest technology and societal challenges.

Taking Care of Our People 

NVIDIA earned the highest grade for workplaces, ranking No. 1 on Glassdoor’s Best Places to Work list for large U.S. companies. Some 95% of employees indicated they’d recommend NVIDIA to a friend.

We make the health of our employees and their families a top priority. Our family leave policy allows U.S. employees 12 weeks of fully paid leave to care for family members. And we’ve selected eight days each year in which we shut down all but essential operations globally, so employees can unwind without having to return to a full inbox.

We’ve recently added surrogacy benefits and fertility education resources to our award-winning list of family-forming benefits, which include adoption support and a generous parental leave program of up to 22 weeks of fully paid leave.

And we worked with our LGBTQ+ colleagues to expand gender affirmation resources and support.

Supporting Communities

Last year we established the Ignite program to prepare students from underrepresented communities for NVIDIA summer internships. Sixty-five percent of these students are returning for our internship program, and we saw a 100% increase in applications for this summer’s Ignite program.

We supported professional organizations, including Black Women in AI, Women in Data and Women-ai, to increase access to AI education and technology.

We launched NVIDIA Emerging Chapters, a new program that enables developers in emerging regions to build and scale their AI, data science and graphics expertise through technology access, educational resources and co-marketing opportunities.

We announced a three-year partnership with the Boys & Girls Clubs of Western Pennsylvania to expand access to AI and robotics to students in communities traditionally underrepresented in tech. Core to this is an open-source curriculum that will make it easy for Boys & Girls Clubs nationwide to deliver AI education to their students.

Our employees remained committed to donating resources to those in need, with nearly 40% of them participating in the NVIDIA Foundation’s Inspire 365 efforts during fiscal year 2022. That brought the unique participation rate since the initiative’s start to 68%.

Despite in-person volunteering remaining paused due to COVID, NVIDIANs still logged more than 16,500 volunteer hours through individual and virtual efforts, up more than 76% from the previous fiscal year.

NVIDIANs also joined the company in contributing more than $22 million to charitable causes in the last fiscal year. And during the Ukraine crisis, employees and NVIDIA have donated more than $4.6 million to date for humanitarian relief.

Developing Climate Solutions 

NVIDIA GPUs are enabling progress in responding to the crisis of climate change. With recent advances in AI, weather forecasting models can now run four to five orders of magnitude faster than with traditional computing methods.

We plan to build Earth-2, an AI supercomputer that will create a digital twin of the Earth, enabling scientists to do ultra-high-resolution climate modeling and put tools into the hands of cities and nations to simulate the impact of mitigation and adaptation strategies.

Digital twins are also being used to predict costly maintenance at power plants and to model new energy sources, such as fusion reactor designs.

NVIDIA scientists, along with leading institutions, are using AI to model the most efficient ways to capture greenhouse gases in the atmosphere and lock them away underground.

Startups from the NVIDIA Inception program are jumping into the climate challenge as well. In Kenya, a company is using AI to monitor the health of bee colonies. And a German startup is monitoring the ocean floor to help scientists understand how natural carbon sinks can be better utilized.

Building Energy-Efficient Technologies 

These solutions are not only bringing innovation to the climate challenge, but are built on a foundation of energy-efficient technology.

We aim to make every new generation of our GPUs faster and more energy efficient than its predecessor. As AI models and HPC applications increase exponentially in size, moving to new-generation GPUs will help our customers complete their work with lower energy consumption and get results more quickly.

NVIDIA GPUs are typically 20x more energy efficient for AI and HPC workloads than CPUs. If we switched all the CPU-only servers running AI and HPC worldwide to GPU-accelerated systems, the world could save nearly 12 trillion watt-hours of energy a year, equivalent to the electricity requirements of nearly 1.7 million U.S. homes.
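The homes equivalence can be sanity-checked with simple arithmetic. The roughly 7,000 kWh of annual electricity use per home below is the figure implied by the article’s own numbers; actual averages vary by country.

```python
savings_wh_per_year = 12e12      # ~12 trillion watt-hours saved annually
home_kwh_per_year = 7_000        # assumed per-home annual electricity use

homes = savings_wh_per_year / (home_kwh_per_year * 1_000)
print(f"~{homes / 1e6:.1f} million homes")  # ~1.7 million
```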

Leaning Into Trustworthy AI

We’re committed to the advancement of trustworthy AI, recognizing that technology can have a profound impact on people and the world. We’ve set priorities that are rooted in fostering positive change and enabling trust and transparency in AI development.

We’re developing practices and methodologies enabling construction of AI products that are trustworthy by design, including datasets, machine learning tools and processes, AI model development, and software development and testing.

Running a Mission-Driven Company

As NVIDIA CEO Jensen Huang mentions in the opening letter of our corporate responsibility report, creating a place where people can do impactful work means building a culture strong enough to be willing to take on the most pressing problems.

The impacts of accelerated computing, which we have driven over the last two decades, are already being felt in areas as wide ranging as self-driving cars, healthcare and, increasingly, in climate change. We’re proud to have built this organization with more than 20,000 of the brightest minds and look forward to what they choose to tackle next.



GFN Thursday Brings New Games to GeForce NOW for the Perfect Summer Playlist

Nothing beats the summer heat like GFN Thursday. Get ready for four new titles streaming at GeForce quality across nearly any device.

Buckle up for some great gaming, whether poolside, in the car for a long road trip, or in the air-conditioned comfort of home.

Speaking of summer, it’s also last call for this year’s Steam Summer Sale. Check out the special row in the GeForce NOW app for some great gaming deals before the sale ends today at 10 a.m. PDT.

Choose Your Adventure

With more than 1,300 games in the GeForce NOW library, there’s something for everyone. Single-player adventures? Check. Multiplayer battles? Got that, too. GFN Thursday brings more games each week, and it’s nearly impossible to play them all.

Catch up on titles you’ve been eyeing and put together a gaming playlist that fits the perfect summer mood. From blockbuster free-to-play action role-playing games like Genshin Impact and Lost Ark to story-driven sagas like Life is Strange: True Colors, high-speed action in NASCAR 21: Ignition and more, there are plenty of options to keep gamers busy.

GeForce NOW Ecosystem
There’s something for everyone on GeForce NOW.

Find your next adventure in the native GeForce NOW app or on play.geforcenow.com. Search for a game or genre using the top bar to build out the perfect gaming library. Streaming from GeForce-powered servers lets gamers keep the action going, even on Macs, mobile devices, Chromebooks and more.

Even better: RTX 3080 members can play at up to 4K resolution and 60 frames per second on PC and Mac, or take the action to the living room on the recently updated SHIELD TV. They can also take on opponents with ultra-low latency for the best gaming sessions, and RTX ON for supported titles to get the most cinematic visuals.

Press Play

Arma Reforger on GeForce NOW
Stand with the squad on the front lines in “Arma Reforger.”

Not sure where to start? Check out this week’s new additions to squad up in Arma Reforger, bring home the trophy in Matchpoint – Tennis Championships and more.

Here’s what’s coming to GeForce NOW this week:

  • Matchpoint – Tennis Championships (New release on Steam July 7)
  • Starship Troopers – Terran Command (New release on Epic Games Store July 7)
  • Sword and Fairy Inn 2 (New release on Steam, July 8)
  • Arma Reforger (Steam)

rFactor 2 was previously announced as coming to GeForce NOW. At this time, the title will not be coming to the service.

Finally, speaking of your summer playlist, we have a question that may get you a bit nostalgic. Let us know your answer on Twitter or in the comments below.

The post GFN Thursday Brings New Games to GeForce NOW for the Perfect Summer Playlist appeared first on NVIDIA Blog.

Read More

Wordle for AI: Santiago Valderrama on Getting Smarter on Machine Learning

Want to learn about AI and machine learning? There are plenty of resources out there to help — blogs, podcasts, YouTube tutorials — perhaps too many.

Machine learning engineer Santiago Valderrama has taken a far more focused approach to helping us all get smarter about the field.

He’s created a following by posing one machine learning question every day on his website bnomial.com.

Think of it as Wordle for those who want to learn more about machine learning.

As Valderrama wrote in a LinkedIn post: “I got together with a couple of friends and built bnomial, a site with a simple goal: a non-BS, simple way to learn something new as fast as possible. We published one machine learning question every day. That’s it. You load the page, answer the question and return the next day. Rinse and repeat.”

NVIDIA AI Podcast host Noah Kravitz spoke with Valderrama about bnomial, how to get smarter about machine learning, and his own journey in the field.

You Might Also Like

What Is Conversational AI? ZeroShot Bot CEO Jason Mars Explains

In addition to being an entrepreneur and CEO of several startups, including ZeroShot Bot, Jason Mars is an associate professor of computer science at the University of Michigan and the author of Breaking Bots: Inventing a New Voice in the AI Revolution. He discusses how the latest AI techniques intersect with the very ancient art of conversation.

Recommender Systems 101: NVIDIA’s Even Oldridge Breaks It Down

Even Oldridge, senior manager for the Merlin team at NVIDIA, digs into how recommender systems work — and why these systems are being harnessed by companies in industries around the globe.

NVIDIA’s Jonah Alben Talks AI

Imagine building an engine with 54 billion parts. Now imagine each piece is the size of a gnat’s eyelash. That gives you some idea of the scale Jonah Alben works at. Alben is the co-leader of GPU engineering at NVIDIA. The engines he builds are GPUs — which these days do much of the heavy lifting for the latest and greatest form of computing: AI.

Subscribe to the AI Podcast: Now Available on Amazon Music

The AI Podcast is now available through Amazon Music.

In addition, get the AI Podcast through iTunes, Google Podcasts, Google Play, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn.

Make the AI Podcast better: Have a few minutes to spare? Fill out this listener survey.

The post Wordle for AI: Santiago Valderrama on Getting Smarter on Machine Learning appeared first on NVIDIA Blog.

Read More

Computer Graphics Artist Xueguo Yang Shares Fractal Art Series This Week ‘In the NVIDIA Studio’

Editor’s note: This post is part of our weekly In the NVIDIA Studio series, which celebrates featured artists, offers creative tips and tricks, and demonstrates how NVIDIA Studio technology accelerates creative workflows. 

Putting art, mathematics and computers together in the mid-1980s created a new genre of digital media: fractal art.

In the NVIDIA Studio this week, computer graphics (CG) artist, educator and curator Xueguo Yang shares his insights behind fractal art — which uses algorithms to artistically represent calculations derived from geometric objects as digital images and animations.

The internationally renowned artist showcases his extraordinary fractal art series, Into the Void, and his process for creating it. Yang’s artistic collaborations include major publishing organizations and global entertainment companies, and his artwork has been selected for international A-class CG galleries and competition shortlists.

A Fractal Art Masterclass, Courtesy of NVIDIA Studio 

Yang started each Into the Void piece in Daz Studio or Autodesk 3ds Max, generating a very basic 3D shape and carefully extracting its dimensions. He then used one of his preferred fractal art applications: Chaotica, Mandelbulb3D or, more recently, JWildfire.

Fractal artwork includes 3D mathematical shapes that are infinitely complex.

Traditionally, these 3D-heavy apps ran exclusively on CPUs, with limited speed and excruciating slowdowns. Newer technology using NVIDIA GeForce RTX GPUs and the OpenCL programming framework dramatically accelerates the creative process, so complex fractal geometry can now be generated, previewed and modified in seconds — a boon for Yang’s efficiency.

Graphical dynamics visual effects created using Tyflow in Autodesk 3ds Max, powered by NVIDIA PhysX.

Yang then started to build mathematical formulas to create the fractal art pieces. The formulas, ever-changing samples expressed in 3D, required random trial-and-error combinations until Yang reached a satisfactory result.
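The fractal applications Yang works in implement far richer formula systems than can be shown here, but the underlying idea — iterating a simple formula and watching which points stay bounded — can be sketched with the classic Mandelbrot escape-time calculation. This is an illustrative example only, not the specific formulas Yang uses:

```python
# Minimal sketch of an escape-time fractal, the kind of iterative
# calculation that underlies much fractal imagery. A point c in the
# complex plane belongs to the Mandelbrot set if z -> z*z + c stays
# bounded under repeated iteration.

def escape_time(c: complex, max_iter: int = 100) -> int:
    """Return the iteration at which |z| exceeds 2, or max_iter if bounded."""
    z = 0j
    for i in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return i
    return max_iter

def render_ascii(width: int = 60, height: int = 24) -> str:
    """Render a coarse ASCII view of the set over a window of the plane."""
    rows = []
    for y in range(height):
        im = -1.2 + 2.4 * y / (height - 1)
        row = []
        for x in range(width):
            re = -2.0 + 2.8 * x / (width - 1)
            # '#' marks points that never escaped within max_iter steps.
            row.append("#" if escape_time(complex(re, im)) == 100 else ".")
        rows.append("".join(row))
    return "\n".join(rows)

print(render_ascii())
```

Fractal renderers vary the iterated formula, the escape condition and the coloring of escape counts; Yang’s trial-and-error search over formula combinations works at that level, just with far more elaborate formula families.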

Next, he added some stylish 2D effects before importing the raw files into NVIDIA Omniverse, a 3D design collaboration and world simulation platform.


By using Omniverse’s NVIDIA vMaterials library, which is derived from physical, real-world materials, Yang built cosmic voids with photorealistic details such as glass and metal pieces.

Yang constantly experiments with new colors and textures to further provoke thought.

Yang further refined textures with the Adobe Substance 3D Painter Connector. He applied Smart Materials — a feature that automatically adjusts the scene to show realistic surface details — tweaking the piece until the perfect combination presented itself.


The Omniverse Create app allowed Yang to adjust lighting and shadows, all in original quality, for final compositing and rendering. His GeForce RTX 3080 Ti Laptop GPU powered the built-in RTX Renderer, unlocking hardware-accelerated ray tracing for fast and interactive 3D modeling.

Yang then turned to the NVIDIA Canvas app to quickly generate a variety of sky and space backgrounds. This process took mere minutes and was far more efficient than searching for backgrounds or even creating several from scratch.

In Photoshop, Yang applied the Canvas backgrounds and adjusted colors to his liking. Final exports were rapidly generated, and the Into the Void masterpiece was complete. By entering In the NVIDIA Studio, viewers can now enter the void.

Yang noted his entire creative workflow is accelerated by GPUs, with his ASUS ProArt Studio laptop serving as a necessity rather than a luxury.

“You can’t imagine how to deal without real-time ray tracing and AI acceleration of RTX GPUs,” Yang said.

Fractal Origins

For Yang, fractal artwork manifests the purest form of his introspective views on origins. “The world was originally empty,” he said. “Everything from basic particles to real matter came from the void. No one knows when, where and how things in the known world appear.”

“Into the Void” series by Xueguo Yang.

Yang hopes to give audiences a sense of déjà vu as his art deconstructs and reconstructs places, scenes, memories or any form of beauty that can often be taken for granted.

The idea of “exploring” rather than “creating” comes from Yang’s strong interests in nature, physics, philosophy and traditional Chinese medicine.

The series is a journey through time and space, tracing an origin in the void, he said.

A humanoid figure evokes the presence of Tao.

Yang intentionally adds human consciousness into the void when creating fantasy worlds.

Yang’s journey is fueled by music, especially rock and heavy metal, which strongly influences his expression with color and texture.

“Without physical media, all creation begins in the void,” Yang noted. “All the essence is just the electrons and energy shuttling in the machine and human consciousness.” Chinese culture calls this Tao, or seeking meanings in the unknown world, which is what Yang seeks to express.

CG artist, educator and curator Xueguo Yang.

Check out more of Yang’s work.

Learn more about NVIDIA Omniverse, including tips, tricks and more on the Omniverse YouTube channel. For additional support, explore the Omniverse forums or join the Discord server to chat with the community. Check out the Omniverse Twitter, Instagram and Medium page to stay up to date.

Follow NVIDIA Studio on Instagram, Twitter and Facebook. Access tutorials on the Studio YouTube channel and get updates directly in your inbox by subscribing to the NVIDIA Studio newsletter.

The post Computer Graphics Artist Xueguo Yang Shares Fractal Art Series This Week ‘In the NVIDIA Studio’ appeared first on NVIDIA Blog.

Read More