GeForce NOW Streaming Comes to iOS Safari

GeForce NOW transforms underpowered or incompatible hardware into high-performance GeForce gaming rigs.

Now, we’re bringing the world of PC gaming to iOS devices through Safari.

GeForce NOW is streaming on iOS Safari, in beta, starting today. That means more than 5 million GeForce NOW members can now access the latest experience by launching Safari from iPhone or iPad and visiting play.geforcenow.com.

Not a member? Get started by signing up for a Founders membership that offers beautifully ray-traced graphics on supported games, extended session lengths and front-of-the-line access. There’s also a free option for those who want to test the waters.

Right now is a great time to join. Founders memberships are available for $4.99 a month, or you can lock in an even better rate with a six-month Founders membership for $24.95.

All new GeForce RTX 30 Series GPUs come bundled with a GeForce NOW Founders membership, available to existing or new members.

Once logged in, you’re only a couple clicks away from streaming a massive catalog of the latest and most played PC games. Instantly jump into your games, like Assassin’s Creed Valhalla, Destiny 2 Beyond Light, Shadow of the Tomb Raider and more. Founders members can also play titles like Watch Dogs: Legion with RTX ON and NVIDIA DLSS, even on their iPhone or iPad.

GeForce NOW on iOS Safari requires a gamepad — keyboard and mouse-only games aren’t available due to hardware limitations. For the best experience, you’ll want to use a GeForce NOW Recommended gamepad, like the Razer Kishi.

Fortnite, Coming Soon

Alongside the amazing team at Epic Games, we’re working to enable a touch-friendly version of Fortnite, which will delay availability of the game. While the GeForce NOW library is best experienced on mobile with a gamepad, touch is how over 100 million Fortnite gamers have built, battled and danced their way to Victory Royale.

We’re looking forward to delivering a cloud-streaming Fortnite mobile experience powered by GeForce NOW. Members can look for the game on iOS Safari soon.

More Games, More Game Stores

When you fire up your favorite game, you’re playing it instantly — that’s what it means to be Game Ready on GeForce NOW. The experience has been optimized for cloud gaming and includes Game Ready Driver performance improvements.

GeForce NOW platforms

NVIDIA manages the game updates and patches, letting you play the games you own at 1080p, 60FPS across nearly all of your devices. And when a game supports RTX, Founders members can play with beautifully ray-traced graphics and DLSS 2.0 for improved graphics fidelity.

PC games become playable across a wider range of hardware, removing the barriers across platforms. With the recent release of Among Us, Mac and Chromebook owners can now experience the viral sensation, just like PC gamers. Android owners can play the newest titles even when they’re away from their rig.

More than 750 PC games are now Game Ready on GeForce NOW, with weekly additions that continue to expand the library.

Coming soon, GeForce NOW will connect with GOG, giving members access to even more games in their library. The first GOG.com games that we anticipate supporting are CD PROJEKT RED’s Cyberpunk 2077 and The Witcher 3: Wild Hunt.

The GeForce-Powered Cloud

Founders members have turned RTX ON for beautifully ray-traced graphics, and can run games with even higher fidelity thanks to DLSS. That’s the power to play that only a GeForce-powered cloud gaming service can deliver.

The advantages also extend to improving quality of service. By developing both the hardware and software streaming solutions, we’re able to easily integrate new GeForce technologies that reduce latency on good networks, and improve fidelity on poor networks.

These improvements will continue over time, with some optimizations already available and being felt by GeForce NOW members today.

Chrome Wasn’t Built in a Day

The first WebRTC client running GeForce NOW was the Chromebook beta in August. In the months since, we’ve seen over 10 percent of gameplay take place in the Chrome web-based client.

GeForce NOW on Chrome

Soon we’ll bring that experience to more Chrome platforms, including Linux, PC, Mac and Android. Stay tuned for updates as we approach a full launch early next year.

Expanding to New Regions

The GeForce NOW Alliance continues to spread cloud gaming across the globe. The alliance is currently made up of LG U+ in Korea, KDDI and SoftBank in Japan, GFN.RU in Russia and Taiwan Mobile, which launched out of beta on Nov. 7.

In the weeks ahead, Zain KSA, the leading 5G telecom operator in Saudi Arabia, will launch its GeForce NOW beta for gamers, expanding cloud gaming into another new region.

More games. More platforms. Legendary GeForce performance. And now streaming on iOS Safari. That’s the power to play that only GeForce NOW can deliver.

Anatomical Adventures in VR: University’s Science Visualizations Tap NVIDIA CloudXR and 5G

5G networks are poised to transform the healthcare industry, starting with how medical students learn.

The Grid Factory, a U.K.-based provider of NVIDIA GPU-accelerated services, is partnering with telecommunications company Vodafone to showcase the potential of 5G technology with a network built at Coventry University.

Operating NVIDIA CloudXR on the private 5G network, student nurses and healthcare professionals can experience lessons and simulations in virtual reality environments.

With NVIDIA CloudXR, users don’t need to be physically tethered to a high-performance computer that drives rich, immersive environments. Instead, it runs on NVIDIA servers located in the cloud or on premises, which deliver the advanced graphics performance needed for wireless virtual, augmented or mixed reality environments — which collectively are known as XR.

Streaming high-resolution graphics over 5G promises higher-quality, mobile-immersive VR for more engaging experiences in remote learning. Using CloudXR enables lecturers to teach in VR while students can access the interactive environment through smartphones, tablets, laptops, VR headsets and AR glasses.

All images courtesy of Gamoola/The Original Content Company.

A member of the NVIDIA CloudXR early access program, The Grid Factory is helping organizations realize new opportunities to deliver high-quality graphics over 5G.

“CloudXR makes the experience so natural that lecturers can easily forget about the technology, and instead focus on learning points and content they’re showing,” said Ben Jones, CTO at The Grid Factory.

With Coventry University’s advanced VR technology, users can now take virtual tours through the human body. Medical students can enter the immersive environment and visualize detailed parts of the body, from the bones, muscles and the brain, to the heart, veins, vessels and blood cells.

Previously, lecturers would have to use pre-recorded materials, but this only allowed them to view the body in a linear, 2D format. Working with Vodafone, The Grid Factory installed NVIDIA CloudXR at the university, enabling lecturers to guide their students on interactive explorations of the human body in 3D models.

“With 5G, we can put the VR headset on and stream high-resolution images and videos remotely anywhere in the world,” said Natasha Taylor, associate professor in the School of Nursing, Midwifery and Health at Coventry University. “This experience allows us to take tours of the human body in a way we’ve never been able to before.”

The lessons have turned flat asynchronous learning into cinematic on-demand learning experiences. Students can tune in virtually to study high-resolution, 3D visualizations of the body at any time.

The immersive environments can also show detailed simulations of viral attacks, providing more engaging content that allows students to visualize and retain information faster, according to the faculty staff at Coventry.

And while the lecturers provide the virtual lessons, students can ask questions throughout the presentation.

With 5G, CloudXR can provide lower-latency immersive experiences, and VR environments become more natural for users. It has allowed lecturers to demonstrate more easily and medical students to better visualize parts of the human body.

“The lower the latency, the closer you are to real-life experience,” said Andrea Dona, head of Networks at Vodafone UK. “NVIDIA CloudXR is a really exciting new software platform that allows us to stream high-quality virtual environments directly to the headset, and is now being deployed in a 5G network for the first time commercially.”

More faculty members have expressed interest in the 5G-enabled NVIDIA CloudXR experiences, especially for engineering and automotive use cases, which involve graphics-intensive workloads.

Learn more about NVIDIA CloudXR.

Take the A100 Train: HPC Centers Worldwide Jump Aboard NVIDIA AI Supercomputing Fast Track

Supercomputing centers worldwide are onboarding the NVIDIA Ampere GPU architecture to serve the growing demands of heftier AI models for everything from drug discovery to energy research.

Joining this movement, Fujitsu has announced a new exascale system for Japan-based AI Bridging Cloud Infrastructure (ABCI), offering 600 petaflops of performance at the National Institute of Advanced Industrial Science and Technology.

The debut comes as model complexity has surged 30,000x in the past five years, with booming use of AI in research. For scientific applications, holding these hulking datasets in memory minimizes batch processing and helps achieve higher throughput.

To fuel this next research ride, NVIDIA on Monday introduced the NVIDIA A100 80GB GPU with HBM2e technology. It doubles the A100 40GB GPU’s high-bandwidth memory to 80GB and delivers over 2 terabytes per second of memory bandwidth.

New NVIDIA A100 80GB GPUs let larger models and datasets run in memory at faster memory bandwidth, enabling more compute per GPU and faster results. Keeping more of a workload resident on each GPU also reduces internode communication, which can boost AI training performance by 1.4x with half the GPUs.
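As a rough back-of-the-envelope illustration of why per-GPU memory matters here, the sketch below works through the unit arithmetic for a hypothetical 20-billion-parameter model; the parameter count and bytes-per-parameter figures are illustrative assumptions, not measurements of any specific workload.

```python
# Back-of-the-envelope sketch: why 80 GB per GPU matters for large models.
# The 20B parameter count and bytes-per-parameter figures are illustrative
# assumptions, not specs of any particular model.
GB = 1024**3

params = 20e9                          # hypothetical 20-billion-parameter model
weights_gb = params * 2 / GB           # FP16 weights, 2 bytes each            -> ~37 GB
train_state_gb = params * 16 / GB      # rough mixed-precision training state  -> ~298 GB

sweep_seconds = 80 / 2000              # streaming a full 80 GB at ~2 TB/s     -> ~0.04 s

print(f"FP16 weights:         {weights_gb:6.1f} GB")
print(f"Rough training state: {train_state_gb:6.1f} GB (spread across several 80 GB GPUs)")
print(f"Full-memory sweep:    {sweep_seconds * 1000:6.1f} ms")
```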

NVIDIA also introduced new NVIDIA Mellanox 400G InfiniBand architecture, doubling data throughput and offering new in-network computing engines for added acceleration.

Europe Takes Supercomputing Ride

Europe is leaping in. Italian inter-university consortium CINECA announced the Leonardo system, expected to be the world’s fastest AI supercomputer. It will tap 14,000 NVIDIA Ampere architecture GPUs and NVIDIA Mellanox InfiniBand networking for 10 exaflops of AI. France’s Atos is set to build it.

Leonardo joins a growing pack of European systems on NVIDIA AI platforms supported by the EuroHPC initiative. Its German neighbor, the Jülich Supercomputing Center, recently launched the first NVIDIA GPU-powered AI exascale system to come online in Europe, delivering the region’s most powerful AI platform. The new Atos-designed Jülich system, dubbed JUWELS, is a 2.5 exaflops AI supercomputer that captured No. 7 on the latest TOP500 list.

Those also getting on board include Luxembourg’s MeluXina supercomputer; IT4Innovations National Supercomputing Center, the most powerful supercomputer in the Czech Republic; and the Vega supercomputer at the Institute of Information Science in Maribor, Slovenia.

Linköping University is planning to build Sweden’s fastest AI supercomputer, dubbed BerzeLiUs, based on the NVIDIA DGX SuperPOD infrastructure. It’s expected to provide 300 petaflops of AI performance for cutting-edge research.

NVIDIA is building Cambridge-1, an 80-node DGX SuperPOD with 400 petaflops of AI performance. It will be the fastest AI supercomputer in the U.K. It’s planned to be used in collaborative research within the country’s AI and healthcare community across academia, industry and startups.

Full Steam Ahead in North America

North America is taking the exascale AI supercomputing ride. NERSC (the U.S. National Energy Research Scientific Computing Center) is adopting NVIDIA AI for projects on Perlmutter, its system packing 6,200 A100 GPUs. NERSC now lays claim to 3.9 exaflops of AI performance.

NVIDIA Selene, a cluster based on the DGX SuperPOD, provides a public reference architecture for large-scale GPU clusters that can be deployed in weeks. The NVIDIA DGX SuperPOD system landed the top spot on the Green500 list of most efficient supercomputers, achieving a new world record in power efficiency of 26.2 gigaflops per watt, and it has set eight new performance milestones for MLPerf inference.
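For a sense of scale, here’s a quick unit-conversion sketch of what that Green500 efficiency figure implies; the 1-megawatt power budget below is an illustrative assumption, not a measured system.

```python
# Unit-conversion sketch for the Green500 figure above. The 1 MW budget is
# an illustrative assumption, not a measurement of any real machine.
gflops_per_watt = 26.2
power_watts = 1_000_000                       # hypothetical 1 MW power budget

total_gflops = gflops_per_watt * power_watts  # 26.2e6 gigaflops
petaflops = total_gflops / 1e6                # 1 petaflops = 1e6 gigaflops
print(f"{petaflops:.1f} petaflops per megawatt at Green500-record efficiency")
```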

The University of Florida and NVIDIA are building the world’s fastest AI supercomputer in academia, aiming to deliver 700 petaflops of AI performance. The partnership puts UF among leading U.S. AI universities, advances academic research and helps address some of Florida’s most complex challenges.

At Argonne National Laboratory, researchers will use a cluster of 24 NVIDIA DGX A100 systems to scan billions of drugs in the search for treatments for COVID-19.

Los Alamos National Laboratory, Hewlett Packard Enterprise and NVIDIA are teaming up to deliver next-generation technologies to accelerate scientific computing.

All Aboard in APAC

Supercomputers in APAC will also be fueled by NVIDIA Ampere architecture. Korean search engine NAVER and Japanese messaging service LINE are using a DGX SuperPOD built with 140 DGX A100 systems with 700 petaflops of peak AI performance to scale out research and development of natural language processing models and conversational AI services.

The Japan Agency for Marine-Earth Science and Technology, or JAMSTEC, is upgrading its Earth Simulator with NVIDIA A100 GPUs and NVIDIA InfiniBand. The supercomputer is expected to have 624 petaflops of peak AI performance with a maximum theoretical performance of 19.5 petaflops of HPC performance, which today would rank high among the TOP500 supercomputers.

India’s Centre for Development of Advanced Computing, or C-DAC, is commissioning the country’s fastest and largest AI supercomputer, called PARAM Siddhi – AI. Built with 42 DGX A100 systems, it delivers 210 petaflops of AI performance and will address challenges in healthcare, education, energy, cybersecurity, space, automotive and agriculture.

Buckle up. Scientific research worldwide has never enjoyed such a ride.

NVIDIA, Ampere Computing Raise Arm 26x in Supercomputing

In the past 18 months, researchers have witnessed a whopping 25.5x performance boost for Arm-based platforms in high performance computing, thanks to the combined efforts of the Arm and NVIDIA ecosystems.

Many engineers deserve a round of applause for the gains.

  • The Arm Neoverse N1 core gave systems-on-a-chip like Ampere Computing’s Altra an estimated 2.3x improvement over last year’s designs.
  • NVIDIA’s A100 Tensor Core GPUs delivered the company’s largest-ever gains in a single generation.
  • The latest platforms upshifted to more and faster cores, input/output lanes and memory.
  • And application developers tuned their software with many new optimizations.

As a result, NVIDIA’s Arm-based reference design for HPC, with two Ampere Altra SoCs and two A100 GPUs, just delivered 25.5x the muscle of the dual-SoC servers researchers were using in June 2019. Our GPU-accelerated, Arm-based reference platform alone saw a 2.5x performance gain in 12 months.

The results span applications — including GROMACS, LAMMPS, MILC, NAMD and Quantum Espresso — that are key to work like drug discovery, a top priority during the pandemic. These and many other applications ready to run on Arm-based systems are available in containers on NGC, our hub for GPU-accelerated software.

Companies and researchers pushing the limits in areas such as molecular dynamics and quantum chemistry can harness these apps to drive advances not only in basic science but in fields such as healthcare.

Under the Hood with Arm and HPC

The latest reference architecture marries the energy-efficient throughput of Ampere Computing’s Mt. Jade, a 2U-sized server platform, with NVIDIA’s HGX A100 that’s already accelerating several supercomputers around the world. It’s the successor to a design that debuted last year based on the Marvell ThunderX2 and NVIDIA V100 GPUs.

Mt. Jade consists of two Ampere Altra SoCs packing 80 cores each based on the Arm Neoverse N1 core, all running at up to 3 GHz. They provide a whopping 192 PCI Express Gen4 lanes and up to 8TB of memory to feed two A100 GPUs.

Ampere Computing Mt. Jade reference design
The Mt. Jade server platform supports 192 PCIe Gen4 lanes.

The combination creates a compelling node for next-generation supercomputers. Ampere Computing has already attracted support from nine original equipment and design manufacturers and systems integrators, including Gigabyte, Lenovo and Wiwynn.

A Rising Arm HPC Ecosystem

In another sign of an expanding ecosystem, the Arm HPC User Group hosted a virtual event ahead of SC20 with more than three dozen talks from organizations including AWS, Hewlett Packard Enterprise, the Juelich Supercomputing Center, RIKEN in Japan, and Oak Ridge and Sandia National Labs in the U.S. Most of the talks are available on its YouTube channel.

In June, Arm made its biggest splash in supercomputing to date. That’s when the Fugaku system in Japan debuted at No. 1 on the TOP500 list of the world’s fastest supercomputers with a stunning 415.5 petaflops using the Arm-based A64FX CPU from Fujitsu.

At the time it was one of four Arm-powered supercomputers on the list, and the first using Arm’s Scalable Vector Extensions, technology embedded in Arm’s next-generation Neoverse designs that NVIDIA will support in its software.

Meanwhile, AWS is already running HPC jobs in the cloud, like genomics, financial risk modeling and computational fluid dynamics, on its Arm-based Graviton2 processors.

NVIDIA Accelerates Arm in HPC

Arm’s growing HPC presence is part of a broad ecosystem of 13 million developers in areas that span smartphones to supercomputers. It’s a community NVIDIA aims to expand with our deal to acquire Arm to create the world’s premier company for the age of AI.

We’re extending the ecosystem with Arm support built into our NVIDIA AI, HPC, networking and graphics software. At last year’s supercomputing event, NVIDIA CEO Jensen Huang announced our work accelerating Arm in HPC in addition to our ongoing support for IBM POWER and x86 architectures.

NVIDIA support for the Arm ecosystem
NVIDIA has expanded its support for the Arm ecosystem.

Since then, we’ve announced our BlueField-2 DPUs that use Arm IP to accelerate and secure networking and storage jobs for cloud, embedded and enterprise applications. And for more than a decade, we’ve been an avid user of Arm designs inside products such as our Jetson Nano modules for robotics and other embedded systems.

We’re excited to be part of dramatic performance gains for Arm in HPC. It’s the latest page in the story of an open, thriving Arm ecosystem that keeps getting better.

Learn more in the NVIDIA SC20 Special Address.

Changing Times: How the World’s TOP500 Supercomputers Don’t Just Have to Be Fast, But Smart

The world’s fastest supercomputers aren’t just faster than ever. They’re smarter and support a greater variety of workloads, too.

Nearly 70 percent of the machines on the latest TOP500 list of the world’s fastest supercomputers, released today at SC20, are powered by NVIDIA technology, including eight of the top 10.

In addition, four of the nominations for the Gordon Bell Prize, supercomputing’s most prestigious award — to be named this week at SC20 — use AI to drive their discoveries.

The common thread: our end-to-end HGX AI supercomputing platform, which accelerates scientific computing, data analytics and AI workloads. It’s a story that begins with a great chip and extremely fast, smart networking, but ultimately is all about NVIDIA’s globally adopted data-center-scale platform for doing great science.

The shift to incorporating AI into HPC, and a platform that extends beyond traditional supercomputing centers, represents a significant change in a field that, since Seymour Cray’s CDC 6600 was launched in 1964, has focused on harnessing ever larger, more powerful machines for compute-intensive simulation and modeling.

The latest TOP500 list is about more than high-performance Linpack results:

  • Speed records: Measured by the traditional benchmark of supercomputing performance — the speed at which a system performs operations in the double-precision floating-point format called FP64 — NVIDIA technologies accelerate the world’s fastest clusters, powering eight of the top 10 machines. This includes the No. 5 system — NVIDIA’s own Selene supercomputer, the world’s most powerful commercial system — as well as new additions like JUWELS (Forschungszentrum Jülich) at No. 7 and Dammam-7 (Saudi Aramco) at No. 10.
  • “Smarts” records: When measured by HPL-AI, the mixed-precision standard that’s the benchmark for AI performance, NVIDIA-powered machines captured top spots on the list with Oak Ridge National Lab’s Summit supercomputer at 0.55 exaflops and NVIDIA Selene at 0.25 exaflops.
  • Green records: The NVIDIA DGX SuperPOD system captured the top spot on the Green500 list of most efficient supercomputers, achieving a new world record in power efficiency of 26.2 gigaflops per watt. Overall, NVIDIA-powered machines captured 25 of the top 30 spots on the list.

The Era of AI Supercomputing Is in High Gear

Maybe the most impressive achievement: We’ve surpassed exascale computing well ahead of schedule.

In October, Italy’s CINECA supercomputing center unveiled plans to build Leonardo, the world’s most powerful AI supercomputer with an expected 10 exaflops of AI performance. It’s joined by a wave of new EuroHPC AI systems in the Czech Republic, Luxembourg and Slovenia. More are coming not only in Europe but also across Asia and North America.

That’s because modern AI harnesses the incredible parallel processing power of NVIDIA GPUs, NVIDIA CUDA-X libraries and NVIDIA Mellanox InfiniBand — the world’s only smart, fully accelerated in-network computing platform — to pour vast quantities of data into advanced neural networks, creating sophisticated models of the world around us. This lets scientists tackle much more ambitious projects than would otherwise be possible.

Take the example of the team from Lawrence Berkeley National Laboratory’s Computational Research Division, one of this year’s Gordon Bell Prize nominees. Thanks to AI, the team was able to increase the scale of their molecular dynamics simulation by at least 100x compared to the largest system simulated by previous nominees.

It’s About Advancing Science

Of course, it’s not just how fast your system is, but what you do with it in the real world that counts.

That’s why you’ll find the new breed of AI-powered supercomputers being thrown into the front line of the fight against COVID-19.

Three of the four nominees for a special Gordon Bell award focused on tackling the COVID-19 pandemic rely on NVIDIA AI.

On the Lawrence Livermore National Laboratory’s Sierra supercomputer — No. 3 on the TOP500 list — a team trained an AI able to identify new drug candidates on 1.6 billion compounds in just 23 minutes.

On Oak Ridge’s Summit supercomputer — No. 2 on the TOP500 list — another team harnessed 27,612 NVIDIA GPUs to test 19,028 potential drug compounds on two key SARS-CoV-2 protein structures every second.

Another team used Summit to create an AI-driven workflow to model how the SARS-CoV-2 spike protein, the main viral infection machinery, attacks the human ACE2 receptor.

Thanks to the growing ubiquity of the scalable NVIDIA HGX AI supercomputing platform — which includes everything from processors to networking and software — scientists can run their workloads in the hyperscale data centers of cloud computing companies, as well as in supercomputers.

It’s a unified platform, enabling the fusion of high-performance computing, data analytics and AI workloads. With 2.3 million developers, support for over 1,800 accelerated apps, all AI frameworks and popular data analytics frameworks including Dask and Spark, the platform enables scientists and researchers to be instantly productive on GPU-powered x86, Arm and Power systems.
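To give a flavor of what GPU-accelerated data analytics looks like in practice, here is a minimal Dask sketch. It assumes a machine with the RAPIDS libraries (dask_cuda, dask_cudf) installed and at least one NVIDIA GPU visible; the data path and column names are hypothetical stand-ins.

```python
# Minimal sketch of GPU-accelerated data analytics with Dask + RAPIDS.
# Assumes dask_cuda and dask_cudf are installed and a GPU is available;
# the Parquet path and column names are hypothetical.
from dask_cuda import LocalCUDACluster
from dask.distributed import Client
import dask_cudf

cluster = LocalCUDACluster()     # one Dask worker per visible GPU
client = Client(cluster)

df = dask_cudf.read_parquet("s3://example-bucket/telemetry/*.parquet")  # hypothetical path
summary = df.groupby("device_id").agg({"latency_ms": "mean"}).compute() # runs on the GPU(s)
print(summary.head())
```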

In addition, the NVIDIA NGC catalog offers performance-optimized containers for the latest versions of HPC and AI applications. So scientists and researchers can deploy quickly and stay focused on advancing their science.

Learn more in the live NVIDIA SC20 Special Address at 3 p.m. PT today.

Fast and Furio: Artist Accelerates Rendering on the Go with NVIDIA RTX Laptop

No traveling for work? No problem — at least not for character concept artist Furio Tedeschi.

Tedeschi has worked on films like Transformers and Bumblebee: The Movie, as well as games like Mass Effect: Andromeda from BioWare. Currently, he’s working on the upcoming Cyberpunk 2077 video game.

No stranger to working while traveling, Tedeschi often goes from one place to another for his projects. So he’s always been familiar with using a workstation that can keep up with his mobility.

But when the coronavirus pandemic hit, the travel stopped. Tedeschi had a workstation setup at home, so he adjusted quickly to remote work. However, he didn’t want to be chained to his desk, and he wanted a mobile device that could handle the complexity of his designs.

With the Lenovo ThinkPad P53 accelerated by the NVIDIA RTX 5000 GPU, Tedeschi gets the speed and performance he needs to handle massive scenes and render graphics in real time, no matter where he works from.

All images courtesy of Furio Tedeschi.

RTX-tra Boost for Remote Rendering Work

One of the biggest challenges Tedeschi faced was migrating projects, which often contained heavy scenes, from his desktop workstation to a mobile setup.

He used to reduce file sizes and renders because his previous laptop couldn’t run complex graphics workflows. But with the RTX laptop, Tedeschi has enough power to quickly migrate his scenes without slowing anything down. He no longer has to reduce scene sizes because the RTX 5000 can easily handle them at full quality across the creative applications he uses to work on scenes.

“The RTX laptop has made working on the move very comfortable for me, as it packs enough power to handle even some of the heavier scenes I use for concepts,” said Tedeschi. “Now I feel 100 percent comfortable knowing I can work on renders when I go back to traveling.”

With the RTX-powered Lenovo ThinkPad P53, he gets speed and performance that’s similar to his workstation at home. The laptop allows Tedeschi to see the models and scenes from every single view, all in real time.

When it comes to his artistic style, Tedeschi likes to focus on form and shapes, and he uses applications like ZBrush and KeyShot to create his designs. After using the RTX-powered mobile workstation, Tedeschi experienced massive rendering improvements with both applications.

With KeyShot specifically, the GPU rendering is “leaps faster and gives instant feedback through the render viewpoint,” according to Tedeschi.

The faster workflows, combined with the ability to see his concepts running in real time, allow Tedeschi to choose better angles and lighting and get closer to bringing his ideas to life in a final image. This results in reduced prep time, faster productions and even less stress.

With KeyShot, ZBrush and Photoshop all running at the same time, the laptop’s battery lasts up to two hours without being plugged in, so Tedeschi can easily get in the creative flow and work on designs from anywhere in his house without being distracted.

Learn more about Furio Tedeschi and RTX-powered mobile workstations.

Hyundai Motor Group to Integrate Software-Defined AI Infotainment Powered by NVIDIA DRIVE Across Entire Fleet

From its entry-level vehicles to premium ones, Hyundai Motor Group will deliver the latest in AI-powered convenience and safety to every new customer.

The leading global auto group, which produces more than 7 million vehicles a year, announced today that every Hyundai, Kia and Genesis model will include infotainment systems powered by NVIDIA DRIVE, with production starting in 2022. By making high-performance, energy-efficient compute a standard feature, every vehicle will include a rich, software-defined AI user experience that’s always at the cutting edge.

Hyundai Motor Group has been working with NVIDIA since 2015, developing a state-of-the-art in-vehicle infotainment system on NVIDIA DRIVE that shipped in the Genesis GV80 and G80 models last year. The companies have also been collaborating on an advanced digital cockpit for release in late 2021.

The Genesis G80

Now, the automaker is standardizing AI for all its vehicles by extending NVIDIA DRIVE throughout its entire fleet — marking its commitment to developing software-defined and constantly updateable vehicles for more intelligent transportation.

A Smarter Co-Pilot

AI and accelerated computing have opened the door for a vast array of new functionalities in next-generation vehicles.

Specifically, these software-defined AI cockpit features can be realized with a centralized, high-performance computing architecture. Traditionally, vehicle infotainment requires a collection of electronic control units and switches to perform basic functions, such as changing the radio station or adjusting temperature.

Consolidating these components with the NVIDIA DRIVE software-defined AI platform simplifies the architecture while creating more compute headroom to add new features. With NVIDIA DRIVE at the core, automakers such as Hyundai can orchestrate crucial safety and convenience features, building vehicles that become smarter over time.

The NVIDIA DRIVE platform

These capabilities include driver or occupant monitoring to ensure eyes stay on the road or exiting passengers avoid oncoming traffic. They can elevate convenience in the car by clearly providing information on the vehicle’s surroundings or recommending faster routes and nearby restaurants.

Delivering the Future to Every Fleet

Hyundai is making this new era of in-vehicle AI a reality for all of its customers.

The automaker will leverage the high-performance compute of NVIDIA DRIVE to roll out its new connected car operating system to every new Hyundai, Kia and Genesis vehicle. The software platform consolidates the massive amounts of data generated by the car to deliver personalized convenience and safety features for the vehicle’s occupants.

By running on NVIDIA DRIVE, the in-vehicle infotainment system can process the myriad vehicle data in parallel to deliver features instantaneously. It can provide these services regardless of whether the vehicle is connected to the internet, customizing to each user safely and securely for the ultimate level of convenience.

With this new centralized cockpit architecture, Hyundai Motor Group will bring AI to every new customer, offering software upgradeable applications for the entire life of its upcoming fleet.

Accelerating Research: Texas A&M Launching Grace Supercomputer for up to 20x Boost

Texas A&M University is turbocharging the research of its scientists and engineers with a new supercomputer powered by NVIDIA A100 Tensor Core GPUs.

The Grace supercomputer — named to honor programming pioneer Grace Hopper — handles almost 20 times the processing of its predecessor, Ada.

Texas A&M’s Grace supercomputing cluster comes as user demand at its High Performance Research Computing unit has doubled since 2016. It now has more than 2,600 researchers seeking to run workloads.

The Grace system promises to enhance A&M’s research capabilities and competitiveness. It will allow A&M researchers to keep pace with current trends across multiple fields enabled by advances in high performance computing.

Researchers at Texas A&M University will have access to the new system in December. Dell Technologies is the primary vendor for the Grace system.

Boosting Research

The new Grace architecture will enable researchers to make leaps with HPC in AI and data science. It also provides a foundation for a workforce in exascale computing, which processes a billion billion calculations per second.

The Grace system is set to support the university’s researchers in drug design, materials science, geosciences, fluid dynamics, biomedical applications, biophysics, genetics, quantum computing, population informatics and autonomous vehicles.

“The High Performance Research Computing lab has a mission to infuse computational and data analysis technologies into the research and creative activities of every academic discipline at Texas A&M,” said Honggao Liu, executive director of the facility.

Research at Texas A&M University generated $952 million in revenue in 2019 for a university known for its support of scholarship and scientific discovery.

Petaflops Performance

Like its namesake Grace Hopper — whose work in the 1950s led to the COBOL programming language — the new Grace supercomputing cluster will be focused on fueling innovation and making groundbreaking discoveries.

The system boosts processing up to 6.2 petaflops. A one petaflops computer can handle one quadrillion floating point operations per second (flops).
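As a quick sanity check on those figures, the unit arithmetic is straightforward. In the sketch below, the implied number for Ada is a derived estimate from the "almost 20 times" claim above, not a published specification.

```python
# Unit arithmetic for the petaflops figures above. The Ada estimate is
# derived from the "almost 20x" claim, not a published spec.
PFLOPS = 1e15                                  # 1 petaflops = 10^15 flops per second

grace = 6.2 * PFLOPS                           # Grace's peak processing, from above
implied_ada = grace / 20                       # what "almost 20x" implies for Ada
print(f"Grace:        {grace:.1e} flop/s")
print(f"Implied Ada: ~{implied_ada / PFLOPS:.2f} petaflops")   # ~0.31
```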

In addition to the A100 GPUs, the Grace cluster is powered by single-precision NVIDIA T4 Tensor Core GPUs and NVIDIA RTX 6000 GPUs in combination with more than 900 Dell EMC PowerEdge servers.

The system is interconnected with NVIDIA Mellanox high-speed, low-latency HDR InfiniBand fabric, enabling smart in-network computing engines for accelerated computing. It also includes 5.12PB of usable high-performance DDN storage running the Lustre parallel file system.

NVIDIA A100 Launches on AWS, Marking Dawn of Next Decade in Accelerated Cloud Computing

Amazon Web Services’ first GPU instance debuted 10 years ago, with the NVIDIA M2050. At that time, CUDA-based applications were focused primarily on accelerating scientific simulations, with the rise of AI and deep learning still a ways off.

Since then, AWS has added to its stable of cloud GPU instances, which has included the K80 (p2), K520 (g2), M60 (g3), V100 (p3/p3dn) and T4 (g4).

With its new P4d instance generally available today, AWS is paving the way for another bold decade of accelerated computing powered with the latest NVIDIA A100 Tensor Core GPU.

The P4d instance delivers AWS’s highest performance, most cost-effective GPU-based platform for machine learning training and high performance computing applications. The instances reduce the time to train machine learning models by up to 3x with FP16 and up to 6x with TF32 compared to the default FP32 precision.
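Those precision modes map directly onto framework settings. Below is a minimal sketch, assuming PyTorch 1.7 or newer on an A100-backed instance, of how TF32 and FP16 training are typically enabled; the model, data and hyperparameters are placeholders rather than a tuned training recipe.

```python
# Minimal sketch of the precision modes referenced above, assuming
# PyTorch >= 1.7 on an A100 GPU. Model, data and hyperparameters are
# hypothetical placeholders.
import torch

torch.backends.cuda.matmul.allow_tf32 = True   # TF32 for matmuls on Ampere
torch.backends.cudnn.allow_tf32 = True         # TF32 inside cuDNN convolutions

model = torch.nn.Linear(1024, 1024).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
scaler = torch.cuda.amp.GradScaler()           # loss scaling for FP16 training

x = torch.randn(64, 1024, device="cuda")
target = torch.randn(64, 1024, device="cuda")

for _ in range(10):
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():            # run the forward pass in reduced precision
        loss = torch.nn.functional.mse_loss(model(x), target)
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
```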

They also provide exceptional inference performance. NVIDIA A100 GPUs just last month swept the MLPerf Inference benchmarks — providing up to 237x faster performance than CPUs.

Each P4d instance features eight NVIDIA A100 GPUs and, with AWS UltraClusters, customers can get on-demand and scalable access to over 4,000 GPUs at a time using AWS’s Elastic Fabric Adapter (EFA) and scalable, high-performance storage with Amazon FSx. P4d offers 400Gbps networking and uses NVIDIA technologies such as NVLink, NVSwitch, NCCL and GPUDirect RDMA to further accelerate deep learning training workloads. NVIDIA GPUDirect RDMA on EFA ensures low-latency networking by passing data from GPU to GPU between servers without having to pass through the CPU and system memory.
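To make the NCCL piece concrete, here is a minimal multi-GPU communication sketch, assuming PyTorch with the NCCL backend and a torchrun launch (one process per GPU). Routing NCCL traffic over EFA with GPUDirect RDMA relies on AWS’s separately installed OFI NCCL plugin; that setup is assumed, not shown.

```python
# Minimal sketch of multi-GPU communication over NCCL, launched with
# torchrun (one process per GPU). On P4d, NCCL can use EFA via AWS's
# OFI NCCL plugin, which is assumed to be installed separately.
import os
import torch
import torch.distributed as dist

dist.init_process_group(backend="nccl")        # NCCL handles GPU-to-GPU transport
local_rank = int(os.environ["LOCAL_RANK"])     # set by torchrun
torch.cuda.set_device(local_rank)

tensor = torch.ones(1024, device="cuda") * dist.get_rank()
dist.all_reduce(tensor, op=dist.ReduceOp.SUM)  # gradient-style all-reduce across all GPUs
print(f"rank {dist.get_rank()}: sum = {tensor[0].item()}")

dist.destroy_process_group()
```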

In addition, the P4d instance is supported in many AWS services, including Amazon Elastic Container Service, Amazon Elastic Kubernetes Service, AWS ParallelCluster and Amazon SageMaker. P4d can also leverage all the optimized, containerized software available from NGC, including HPC applications, AI frameworks, pre-trained models, Helm charts and inference software like TensorRT and Triton Inference Server.

P4d instances are now available in US East and West, and coming to additional regions soon. The instances can be purchased as On-Demand, with Savings Plans, with Reserved Instances, or as Spot Instances.

The first decade of GPU cloud computing has brought over 100 exaflops of AI compute to the market. With the arrival of the Amazon EC2 P4d instance powered by NVIDIA A100 GPUs, the next decade of GPU cloud computing is off to a great start.

NVIDIA and AWS are making it possible to continue pushing the boundaries of AI across a wide array of applications. We can’t wait to see what customers will do with it.

Visit AWS and get started with P4d instances today.

‘Marbles at Night’ Illuminates Future of Graphics in NVIDIA Omniverse

Reflections have never looked so good.

Artists are using NVIDIA RTX GPUs to take real-time graphics to the next level, creating visuals with rendered surfaces and light reflections to produce incredible photorealistic details.

The Marbles RTX technology demo, first previewed at GTC in March, ran on a single NVIDIA RTX 8000 GPU. It showcased how complex physics can be simulated in a real-time, ray-traced world.

During the GeForce RTX 30 Series launch event in September, NVIDIA CEO Jensen Huang unveiled a more challenging take on the NVIDIA Marbles RTX project: staging the scene to take place at night and illustrate the effect of hundreds of dynamic, animated lights.

Marbles at Night is a physics-based demo created with dynamic, ray-traced lights and over 100 million polygons. Built in NVIDIA Omniverse and running on a single GeForce RTX 3090 GPU, the final result showed hundreds of different light sources at night, with each marble reflecting lights differently and all happening in real time.

Beyond demonstrating the latest technologies for content creation, Marbles at Night showed how creative professionals can now seamlessly collaborate and design simulations with incredible lighting, accurate reflections and real-time ray tracing with path tracing.

Pushing the Limits of Creativity

A team of artists from NVIDIA collaborated and built the project in NVIDIA Omniverse, the real-time graphics and simulation platform based on NVIDIA RTX GPUs and Pixar’s Universal Scene Description.

Working in Omniverse, the artists were able to upload, store and access all the assets in the cloud, allowing them to easily share files across teams. They could send a link, open the file and work on the assets at the same time.

Every single asset in Marbles at Night was hand-made, modeled and textured from scratch. Marbles RTX Creative Director Gavriil Klimov bought over 200 art supplies and took reference photos of each to capture realistic details, from paint splatter to wear and tear. Texturing — a process that allows artists to transfer details from one model to another — was done entirely in Substance Painter, with multiple variations for each asset.

In Omniverse, the artists manually crafted everything in the Marbles project using RTX Renderer and a variety of creative applications like 3ds Max, Maya, Cinema 4D, ZBrush and Blender. The simulation platform enabled the creative team to view all content at the highest possible quality in real time, resulting in shorter cycles and more iterations.

Nearly a dozen people were working on the project remotely from locations as far afield as California, New York, Australia and Russia. Although the team members were located around the world, Omniverse allowed them to work on scenes simultaneously thanks to Omniverse Nucleus. Running on premises or in the cloud, the module enabled the teams to collaborate in real time across vast distances.

The collaboration-based workflow, combined with the fact the project’s assets were stored in the cloud, made it easier for everyone to access the files and edit in real time.

The final technology demo completed in Omniverse resulted in over 500GB worth of texture data, over 100 unique objects, more than 5,000 meshes and about 100 million polygons.

The Research Behind the Project

NVIDIA Research recently released a paper on the reservoir-based spatiotemporal importance resampling (ReSTIR) technique, which details how to render dynamic direct lighting and shadows from millions of area lights in real time. Inspired by this technique, the NVIDIA rendering team, led by distinguished engineer Ignacio Llamas, implemented an algorithm that allowed Klimov and team to place as many lights as they wanted for the Marbles demo, without being constrained by lighting limits.

“Before, we were limited to using less than 10 lights. But today with Omniverse capabilities using RTX, we were able to place as many lights as we wanted,” said Klimov. “That’s the beauty of it — you can creatively decide what the limit is that works for you.”

Traditionally, artists and developers achieved complex lighting using baked solutions. NVIDIA Research, in collaboration with the Visual Computing Lab at Dartmouth College, produced the research paper that dives into how artists can enable direct lighting from millions of moving lights.

The approach requires no complex light structure, no baking and no global scene parameterization. All the lights can cast shadows, everything can move arbitrarily and new emitters can be added dynamically. This technique is implemented using DirectX Ray Tracing accelerated by NVIDIA RTX and NVIDIA RT Cores.
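For the curious, the core of the technique is a weighted reservoir update that picks one light from many candidates. The sketch below is a simplified Python rendition for readability (production code lives in DXR shaders); the candidate generation, source pdf and target function p_hat are hypothetical stand-ins.

```python
# Simplified sketch of the weighted reservoir update at the heart of ReSTIR,
# in Python for readability (real implementations run in DXR shaders).
# Candidates, source_pdf and p_hat are hypothetical stand-ins.
import random

class Reservoir:
    def __init__(self):
        self.y = None        # the light sample currently held
        self.w_sum = 0.0     # running sum of resampling weights
        self.M = 0           # number of candidates seen

    def update(self, sample, weight):
        self.w_sum += weight
        self.M += 1
        if self.w_sum > 0 and random.random() < weight / self.w_sum:
            self.y = sample

def resample_lights(candidates, source_pdf, p_hat):
    """Pick one light from many candidates via resampled importance sampling."""
    r = Reservoir()
    for light in candidates:
        w = p_hat(light) / source_pdf(light)   # target density over source density
        r.update(light, w)
    # Unbiased contribution weight for the chosen sample
    if r.y is not None and p_hat(r.y) > 0:
        W = r.w_sum / (r.M * p_hat(r.y))
    else:
        W = 0.0
    return r.y, W
```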

Get more insights into the NVIDIA Research that’s helping professionals simplify complex design workflows, and learn about the latest announcement of Omniverse, now in open beta.
