NVIDIA Studio Laptops Offer Students AI, Creative Capabilities That Are Best in… Class

Selecting the right laptop is a lot like trying to pick the right major. Both can be challenging tasks where choosing wrongly costs countless hours. But pick the right one, and graduation is just around the corner.

The tips below can help the next generation of artists select the ideal NVIDIA Studio laptop to maximize performance for the critical workload demands of their unique creative fields — all within budget.

Students face a wide range of creative fields — including 3D animation, video, graphic design and photography — that require differing levels of performance and raise a gauntlet of questions: What laptop specs are needed for specific workflows? How will this laptop perform for the most challenging projects? How can one future-proof their digital canvas?

Studio laptops are the perfect back-to-school tool, featuring NVIDIA RTX GPUs dedicated to accelerating 3D, video and AI features — especially 3D rendering and video editing. The most popular creative apps have been optimized to run on RTX GPUs for faster performance. These purpose-built laptops feature vivid, color-accurate displays and blazing-fast memory and storage to boost creative workflows.

An AI for Animation

3D students create new worlds from scratch — shaping 3D objects and scenes with modeling tools, bringing characters to life with animation features, and creating realistic volumetric effects such as fire and smoke.

Animators routinely work on computationally intensive projects, often across multiple creative apps at once. The right GPU is critical for 3D workflows. Select an underpowered system, and artists will spend more time waiting on renders than creating.

MSI Creator z16 NVIDIA Studio laptop.

Laptops equipped with a GeForce RTX 3070 Ti Laptop GPU or higher — including the MSI Creator z16, available at Best Buy — tackle demanding 3D projects in real time. These high-performance graphics cards prevent slowdowns, even while working on large projects.

Razer Blade 15 NVIDIA Studio laptop.

Students looking for an added boost can jump to a system with a GeForce RTX 3080 or 3080 Ti GPU and up to 16 gigabytes of graphics memory for extra-demanding, extra-large models. Laptops like the Razer Blade 15 with an RTX 3080 Ti Laptop GPU, available on Razer.com, provide incredible performance that takes 3D from concept to completion faster.

The benefits of a Studio laptop can be felt in the combination of hardware and software. All essential 3D apps — Adobe Substance 3D, Autodesk Maya, Chaos V-Ray, Unreal Engine and more — feature RTX-accelerated benefits. Real-time ray tracing, AI denoising in the viewport and countless other features empower interactive 3D modeling and animations without the painstaking wait for renders.

Performance is measured using various systems and will vary by model. Blender 3.1 Open Data benchmark, rendered with the OptiX and Metal render engines, respectively.

RTX GPUs exclusively power NVIDIA Omniverse, a ground-breaking platform that interconnects existing 3D workflows. By replacing linear pipelines with live-sync creation, it lets artists create like never before, at unprecedented speeds. Best of all, it’s a free download, available on all Studio laptops.

Reel-ey Fast Film Projects

Gone are the endless days spent in dingy university editing bays. Future filmmakers can harness, in portable form, the same NVIDIA GPU power used by every visual effects Oscar nominee of the last 14 years.

Tasked with color grading, editing 8K RAW footage, adding visual and motion effects, and ordering and fine-tuning content, film students often need to experiment with styles and techniques.

Studio laptops use NVIDIA technologies — like the NVIDIA Encoder (NVENC) for faster encoding and Tensor Cores for time-saving AI features — to rapidly accelerate video work, giving students the necessary time to hone their craft.

These laptops also feature best-in-class displays: 100% sRGB coverage matches what most viewers will see, while 95% DCI-P3 reaches the wider range of colors needed for advanced editing on UHD and HDR monitors. Most models offer factory calibration and HDR options as well.

Dell XPS 15 NVIDIA Studio laptop.

Students editing up to 4K footage can pick up the Dell XPS 15 with a GeForce RTX 3050 Ti Laptop GPU from Dell.com. Or, they can make the leap to the Dell XPS 17, with a GeForce RTX 3060 Laptop GPU and a larger screen.

ASUS ZenBook Pro Duo NVIDIA Studio laptop.

The proper GPU depends on a project’s needs. Film students will want the power of either a GeForce RTX 3060 or 3070 Laptop GPU to ensure they can comfortably work with up to 6K footage. The ASUS ZenBook Pro Duo 15 OLED UX582 is available from Amazon, configured with an RTX 3060, RTX 3070 or RTX 3080 Laptop GPU. The handy ScreenPad Plus 14-inch 4K matte touch screen, the Duo’s secondary display, is perfect for displaying video timelines.

For video work in 8K RAW, or heavy on visual effects, the GeForce RTX 3080 Ti Laptop GPU is highly recommended. Like the systems recommended for 3D animation, it features 16 gigabytes of dedicated VRAM, ensuring smooth production.

Performance is measured using various systems and will vary by model. Adobe Premiere Pro export test measured through Adobe Media Encoder 14.3.2 using various 4K sequences with a combination of typical effects.

RTX GPUs feature a hardware-based encoder and decoder: NVENC and NVDEC. Offloading these compute-intensive tasks from the CPU enables industry-leading render speeds, plus smooth playback and scrubbing of high-res video. Color-correction tools in Blackmagic DaVinci Resolve are GPU accelerated, as well as 30+ visual and motion features in Adobe Premiere Pro and After Effects.
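
For a sense of how this offload is driven in practice, here’s a minimal, illustrative sketch that hands decoding and encoding to the GPU through FFmpeg, one common path to NVDEC and NVENC. It assumes an FFmpeg build with NVENC support; the file names are placeholders.

    import subprocess

    # Illustrative only: decode on the GPU (NVDEC) and re-encode with the
    # GPU's hardware H.264 encoder (NVENC). File names are hypothetical.
    subprocess.run([
        "ffmpeg",
        "-hwaccel", "cuda",       # hardware-accelerated decode
        "-i", "input.mp4",
        "-c:v", "h264_nvenc",     # hardware-accelerated encode
        "-preset", "p5",          # NVENC quality/speed preset
        "output.mp4",
    ], check=True)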

Students also have access to time-saving AI features like DaVinci Resolve’s Face Recognition for automatically tagging clips and Speedwarp to produce stunning slow motion. AI also improves production quality, like with Topaz Labs Video Enhance AI, which increases video resolution while retaining high-fidelity details with a single click.

Studio laptops with GeForce RTX 30 Series GPUs speed up every phase of the video production process.

AI-Accelerated Photography and Graphic Design

Capturing the perfect shot is only the start these days. Photography majors also have to touch up their photos — adjusting lighting and shadows, applying filters and layers, as well as tweaking other fine details. Graphic design students will perform similar tasks, with a greater emphasis on web design, graphics and more.

While many modern computers are capable of accomplishing these tasks, all of the popular applications have AI-accelerated features that can dramatically improve efficiency when paired with GeForce RTX GPUs found in NVIDIA Studio laptops.

The Microsoft Surface Laptop Studio takes on many forms.

The nicely equipped Microsoft Surface Laptop Studio offers an optional GeForce RTX 3050 Ti Laptop GPU, giving it ample power to tackle an ever-growing list of RTX-accelerated apps and AI features. Available at Microsoft.com, the boundary-pushing design lets students flex their creative muscles on the sleek 14.4-inch PixelSense Flow touch screen. Its versatile design ensures photographers can touch up their photos, their way.

Lenovo Yoga Slim 7i Pro X Studio laptop

Lenovo’s Yoga Slim 7i Pro X with a GeForce RTX 3050 Laptop GPU is where powerful performance meets real portability. Great for on-the-go workflows, the laptop handles all photography and graphic design tasks with ease. Its display covers 100% of the sRGB color space at 100% color volume, and is calibrated for true-to-life Delta E < 1 accuracy. The stylish, ultra-slim laptop weighs about three pounds and has impressive battery life for those long classroom lectures.

AI features reduce repetitious tasks for students on the go. Adobe Photoshop’s RTX-accelerated Super Resolution uses AI to upscale images with higher quality than standard methods. Photoshop Lightroom’s Enhance Details feature refines fine color details of high-resolution RAW images.

Features like Select Subject, which isolates people, and Select Sky, which captures skylines, are massive time-savers. Consider Topaz Labs’ collection of AI apps that denoise, increase resolution and sharpen images with a click.

Systems for STEM

The science, technology, engineering and mathematics fields are growing in just about every way imaginable. More jobs, applications, majors and students are centering on STEM.

Studio laptops are equipped with NVIDIA RTX GPUs that provide acceleration for the top engineering, computer science, data science and economics applications. With real-time rendering for complex designs and simulations, faster image and signal processing, and the ability to develop larger, more accurate AI and data science models, students can spend more time learning and less time waiting.

HP Envy 16 NVIDIA Studio laptop.

Ansys Discovery’s Live Simulation mode runs only on NVIDIA GPUs, found in Studio laptops like the HP Envy 16, available at HP.com, and the Acer ConceptD 5 Pro, from B&H. Both come equipped with a GeForce RTX 3060 Laptop GPU.

Acer ConceptD 5 Pro NVIDIA Studio laptop.

Engineering students looking for additional compute power can upgrade to a Studio laptop with an RTX 3080 Ti Laptop GPU to run SOLIDWORKS up to 8x faster than with the average GPU. The same system can run RAPIDS — GPU-accelerated data analytics and machine learning — up to 15x faster than an average laptop, while ResNet-50 training in TensorFlow clocks in at a whopping 50x faster.

Get Your Game On

NVIDIA Studio laptops accelerate coursework, giving students loads of free time back — time that could be spent gaming. And Studio laptops come equipped with all the GeForce RTX benefits for gaming, delivering the most realistic and immersive graphics, increased frame rates with DLSS, the lowest system latency for competitive gaming, and stutter-free, top-quality live streaming.

Live streaming has become all the rage, and having a system that can keep up is essential. Fortunately, GeForce RTX Laptop GPUs scale to deliver dream streams.

The GIGABYTE AERO 15 NVIDIA Studio laptop.

Studio laptops with a GeForce RTX 3070 or 3070 Ti Laptop GPU are the sweet spot for live streaming, offering 1080p streams while gaming in 1440p with real-time ray tracing and DLSS. The GIGABYTE AERO 15 is available in a variety of configurations from Amazon, giving creative gamers a range of options.

The NVIDIA Broadcast app, free to download for RTX GPU owners, has several powerful AI features. Audio effects such as noise and echo removal, paired with visual effects like virtual background and auto frame, deliver professional-grade visuals with a single click.

For a break from class, Studio laptops deliver phenomenal PC gaming experiences.

The NVIDIA Studio Advantage 

Studio laptops come with a serious advantage for content creators and students — NVIDIA Studio technology to speed up content creation, including AI tools for editing and exclusive apps that elevate content.

Studio laptops come equipped with the most advanced hardware for content creation, powered by RTX GPUs with dedicated hardware for 3D workflows, video and AI.

The Studio suite of apps, exclusively available and free to RTX GPU owners, includes NVIDIA Omniverse for collaborative 3D editing, Broadcast for live streaming AI tools and Canvas for painting beautiful landscapes with AI.

Studio laptops and popular creative apps are supported by NVIDIA Studio Drivers — which come preinstalled to optimize creative apps and are extensively tested to deliver maximum reliability.

Artists looking to sharpen their skills can also access the NVIDIA Studio YouTube channel, an ever-growing collection of step-by-step tutorials from renowned artists, inspiring community showcases and more, assisting in content-creation education.

And, for a limited time, creators can purchase a Studio laptop and get Adobe Creative Cloud free for three months — a $238 value. This offer is valid for new and existing customers.

Learn how Studio systems take content creation to the next level. Check out the GPU comparison page for a deeper dive, including options for professionals.

Check out the weekly In the NVIDIA Studio series, which celebrates featured artists, offers creative tips and tricks, and demonstrates how NVIDIA Studio technology accelerates creative workflows.

Follow NVIDIA Studio on Instagram, Twitter and Facebook. Get updates directly by signing up for the NVIDIA Studio newsletter.


How’s That? Startup Ups Game for Cricket, Football and More With Vision AI

Sports produce a slew of data. In a game of cricket, for example, each play generates millions of video-frame data points for a sports analyst to scrutinize, according to Masoumeh Izadi, managing director of deep-tech startup TVConal.

The Singapore-based company uses NVIDIA AI and computer vision to power its sports video analytics platform, which enables users — including sports teams, leagues and TV broadcasters — to gain performance insights from these massive amounts of data in real time.

Short for Television Content Analytics, TVConal provides video analytics for a variety of sports, with a focus on cricket, tennis, badminton and football.

Its platform — powered by the NVIDIA Metropolis application framework for vision AI — can detect important in-game events, model athlete behavior, make movement predictions and more. It all helps dissect the minute details in sports, enabling teams to make smarter decisions on the field.

TVConal is a member of NVIDIA Inception, a free program that supports startups revolutionizing industries with cutting-edge technology.

Automated Match Tagging

Match tagging — creating a timeline of significant in-game events — is crucial to sports video analytics. Tags are used to generate detailed reports that provide performance statistics and visual feedback for referees, coaches, athletes and fans.

Since plays and other in-game events occur in mere instants, up to 20 loggers work together to accomplish live tagging for some sports matches, according to Izadi. This can be time consuming and labor intensive.

With TVConal’s platform, sports analysts can extract insights from video frames with just a few clicks — as AI helps to automatically and accurately tag matches in real time. This gives analysts the time to dig deeper into the data and provide more detailed feedback for teams.

The platform can also catch critical moments or foul plays that the naked eye might miss.

“If a player does an illegal action that’s beyond a human’s ability to process in a few milliseconds, the platform can detect that and inform the umpires to take an action just in time,” Izadi said.

TVConal’s platform is built using NVIDIA Metropolis, which simplifies the development, deployment and scale of AI-enabled video analytics applications from edge to cloud. Metropolis includes pretrained models, training and optimization tools, software development kits, CUDA-X libraries and more — all optimized to run on NVIDIA-Certified Systems based on the NVIDIA EGX enterprise platform for accelerated computing.

“NVIDIA’s software tools, frameworks and hardware allow us to iterate faster and bring ideas to market with shortened life cycles and reduced costs,” Izadi said.

NVIDIA GPU-accelerated compute resources used in TVConal’s platform include the NVIDIA Jetson platform for AI at the edge, on-premises RTX 3090 workstations, and Tesla V100 and A100 GPUs in the cloud.

TVConal uses the NVIDIA DeepStream SDK to simplify video processing pipelines; NVIDIA pretrained models and the TAO toolkit to accelerate AI training; and the NVIDIA TensorRT SDK to optimize inference.

DeepStream enabled the TVConal team to process live video and audio streams in real time — the necessary speed to match video frame rates. In addition, the TensorRT library helped TVConal convert its machine learning models to more quickly process data, while maintaining accuracy.
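
As a rough illustration of what a DeepStream pipeline looks like, the sketch below assembles a single-stream detection pipeline from standard DeepStream GStreamer elements. It’s a generic example, not TVConal’s actual pipeline; the video path and detector config file are placeholders.

    import gi
    gi.require_version("Gst", "1.0")
    from gi.repository import Gst

    Gst.init(None)
    # Standard DeepStream elements: nvstreammux batches frames, nvinfer runs
    # TensorRT inference, nvdsosd draws detections. Paths are hypothetical.
    pipeline = Gst.parse_launch(
        "uridecodebin uri=file:///data/match.mp4 ! m.sink_0 "
        "nvstreammux name=m batch-size=1 width=1920 height=1080 ! "
        "nvinfer config-file-path=detector_config.txt ! "
        "nvvideoconvert ! nvdsosd ! fakesink"
    )
    pipeline.set_state(Gst.State.PLAYING)
    # A real application would now run a GLib main loop to process the stream.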

And as a member of NVIDIA Inception, TVConal has access to technical resources, industry experts and go-to-market support.

The company’s clients include international production company NEP Group, the Pakistan Cricket Board and others.

“There is an increasing volume of sports content to extract value from,” said Izadi, highlighting that the global sports analytics market size is expected to grow over 20% by 2028. “Automated video processing is revolutionary in sports, and we are excited to build more advanced models and pipelines to keep the revolution going.”

More innovative players worldwide are using NVIDIA Metropolis for sports analytics, including startups Pixellot, Track160 and Veo.

Watch an on-demand NVIDIA GTC session about how AI is revolutionizing the sports industry — better predicting competition outcomes, improving performance and increasing viewers’ quality expectations.

Learn more about NVIDIA Metropolis and apply to join NVIDIA Inception.


What Is an Exaflop?

Computers are crunching more numbers than ever to crack the most complex problems of our time — how to cure diseases like COVID and cancer, mitigate climate change and more.

These and other grand challenges ushered computing into today’s exascale era when top performance is often measured in exaflops.

So, What’s an Exaflop?

An exaflop is a measure of performance for a supercomputer that can calculate at least 10^18, or one quintillion, floating point operations per second.

In exaflop, the exa- prefix means a quintillion: a billion billion, or a one followed by 18 zeros. Similarly, an exabyte is a memory subsystem packing a quintillion bytes of data.

The “flop” in exaflop is an abbreviation for floating point operations. The rate at which a system executes these operations each second is measured in flop/s; at this scale, exaflop/s.

Floating point refers to calculations made where all the numbers are expressed with decimal points.

1,000 Petaflops = an Exaflop

The prefix peta- means 10^15, or a one with 15 zeros behind it. So, an exaflop is a thousand petaflops.

The Exaflop in Historical Context

To get a sense of what a heady calculation an exaflop is, imagine a billion people, each holding a billion calculators. (Clearly, they’ve got big hands!)

If they all hit the equal sign at the same time, they’d execute one exaflop.

Indiana University, home to the Big Red 200 and several other supercomputers, puts it this way: To match what an exaflop computer can do in just one second, you’d have to perform one calculation every second for 31,688,765,000 years.
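
Those scales are easy to verify with a few lines of arithmetic, as a quick sanity check:

    # Sanity-checking the scale of an exaflop.
    exaflop = 10**18
    petaflop = 10**15
    print(exaflop // petaflop)             # 1000: an exaflop is 1,000 petaflops

    seconds_per_year = 365.25 * 24 * 3600  # about 31,557,600 seconds
    print(f"{exaflop / seconds_per_year:,.0f}")  # ~31,688,764,615 years at 1 op/sec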

A Brief History of the Exaflop

For most of supercomputing’s history, a flop was a flop, a reality that’s morphing as workloads embrace AI.

People used numbers expressed in the highest of several precision formats, called double precision, as defined by the IEEE Standard for Floating Point Arithmetic. It’s dubbed double precision, or FP64, because each number in a calculation requires 64 bits, data nuggets expressed as a zero or one. By contrast, single precision uses 32 bits.

Double precision uses those 64 bits to ensure each number is accurate to a tiny fraction. It’s like saying 1.0001 + 1.0001 = 2.0002, instead of 1 + 1 = 2.
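
That difference is easy to reproduce. The snippet below reruns the 1.0001 example at two precisions, with numpy’s float16 standing in for a low-precision format:

    import numpy as np

    x64 = np.float64(1.0001)
    x16 = np.float16(1.0001)   # rounds to 1.0 (too few bits to hold 1.0001)
    print(x64 + x64)           # 2.0002
    print(x16 + x16)           # 2.0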

The format is a great fit for what made up the bulk of the workloads at the time — simulations of everything, from atoms to airplanes, that need to ensure their results come close to what they represent in the real world.

So, it was natural that the LINPACK benchmark, aka HPL, which measures performance on FP64 math, became the default measurement in 1993, when the TOP500 list of the world’s most powerful supercomputers debuted.

The Big Bang of AI

A decade ago, the computing industry heard what NVIDIA CEO Jensen Huang describes as the big bang of AI.

This powerful new form of computing started showing significant results on scientific and business applications. And it takes advantage of some very different mathematical methods.

Deep learning is not about simulating real-world objects; it’s about sifting through mountains of data to find patterns that enable fresh insights.

Its math demands high throughput, so doing many, many calculations with simplified numbers (like 1.01 instead of 1.0001) is much better than doing fewer calculations with more complex ones.

That’s why AI uses lower precision formats like FP32, FP16 and FP8. Their 32-, 16- and 8-bit numbers let users do more calculations faster.

Mixed Precision Evolves

For AI, using 64-bit numbers would be like taking your whole closet when going away for the weekend.

Finding the ideal lower-precision technique for AI is an active area of research.

For example, the first NVIDIA Tensor Core GPU, Volta, used mixed precision. It executed matrix multiplication in FP16, then accumulated the results in FP32 for higher accuracy.
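
In software terms, that strategy looks roughly like the sketch below: an illustrative numpy imitation of FP16 multiply with FP32 accumulate, not the Tensor Core hardware path itself.

    import numpy as np

    a = np.random.rand(64, 64).astype(np.float16)
    b = np.random.rand(64, 64).astype(np.float16)

    acc = np.zeros((64, 64), dtype=np.float32)
    for k in range(64):
        prod = a[:, k:k+1] * b[k:k+1, :]  # FP16 multiply (rank-1 update)
        acc += prod.astype(np.float32)    # accumulate in FP32 for accuracy

    ref = a.astype(np.float32) @ b.astype(np.float32)
    print(abs(acc - ref).max())           # accumulation error stays small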

Hopper Accelerates With FP8

More recently, the NVIDIA Hopper architecture debuted with a lower-precision method for training AI that’s even faster. The Hopper Transformer Engine automatically analyzes a workload, adopts FP8 whenever possible and accumulates results in FP32.

When it comes to the less compute-intensive job of inference — running AI models in production — major frameworks such as TensorFlow and PyTorch support 8-bit integer numbers for fast performance. That’s because they don’t need decimal points to do their work.

The good news is NVIDIA GPUs support all of these precision formats, so users can accelerate every workload optimally.

Last year, the IEEE P3109 committee started work on an industry standard for precision formats used in machine learning. This work could take another year or two.

Some Sims Shine at Lower Precision

While FP64 remains popular for simulations, many use lower-precision math when it delivers useful results faster.

HPC apps vary in the factors that impact their performance.

For example, researchers run LS-Dyna, a popular car-crash simulator from Ansys, in FP32. Genomics is another field that tends to prefer lower-precision math.

In addition, many traditional simulations are starting to adopt AI for at least part of their workflows. As workloads shift toward AI, supercomputers need to support lower precision to run these emerging applications well.

Benchmarks Evolve With Workloads

Recognizing these changes, researchers including Jack Dongarra — the 2021 Turing Award winner and a contributor to HPL — debuted HPL-AI in 2019, a new benchmark better suited to measuring these emerging workloads.

“Mixed-precision techniques have become increasingly important to improve the computing efficiency of supercomputers, both for traditional simulations with iterative refinement techniques as well as for AI applications,” Dongarra said in a 2019 blog. “Just as HPL allows benchmarking of double-precision capabilities, this new approach based on HPL allows benchmarking of mixed-precision capabilities of supercomputers at scale.”
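
The iterative refinement Dongarra mentions can be sketched in a few lines: solve cheaply in low precision, then polish the answer with residuals computed in double precision. A toy numpy version, assuming a well-conditioned matrix:

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.random((500, 500)) + 500 * np.eye(500)  # well conditioned by construction
    b = rng.random(500)

    A32 = A.astype(np.float32)
    x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)  # cheap solve

    for _ in range(3):
        r = b - A @ x  # residual computed in full FP64
        x += np.linalg.solve(A32, r.astype(np.float32)).astype(np.float64)

    print(np.linalg.norm(b - A @ x))  # shrinks toward FP64-level accuracy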

Thomas Lippert, director of the Jülich Supercomputing Center, agreed.

“We’re using the HPL-AI benchmark because it’s a good measure of the mixed-precision work in a growing number of our AI and scientific workloads — and it reflects accurate 64-bit floating point results, too,” he said in a blog posted last year.

Today’s Exaflop Systems

In a June report, 20 supercomputer centers around the world reported their HPL-AI results, three of them delivering more than an exaflop.

One of those systems, a supercomputer at Oak Ridge National Laboratory, also exceeded an exaflop in FP64 performance on HPL.

A sampler of the June 2022 HPL-AI results.

Two years ago, a very unconventional system was the first to hit an exaflop. The crowd-sourced supercomputer assembled by the Folding@home consortium passed the milestone after it put out a call for help fighting the COVID-19 pandemic and was deluged with donated time on more than a million computers.

Exaflop in Theory and Practice

Since then, many organizations have installed supercomputers that deliver more than an exaflop in theoretical peak performance. It’s worth noting that the TOP500 list reports both Rmax (actual) and Rpeak (theoretical) scores.

Rmax is simply the best performance a computer actually demonstrated.

Rpeak is a system’s top theoretical performance if everything could run at its highest possible level, something that almost never really happens. It’s typically calculated by multiplying the number of processors in a system by their clock speed, then multiplying the result by the number of floating point operations the processors can perform in one second.
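
A back-of-envelope version of that calculation, with every number below an assumption chosen purely for illustration, not a real system:

    num_nodes = 9_000
    gpus_per_node = 8
    peak_flops_per_gpu = 15e12            # 15 teraflops per GPU (assumed)

    rpeak = num_nodes * gpus_per_node * peak_flops_per_gpu
    print(f"Rpeak = {rpeak:.2e} FLOP/s")  # about 1.08e+18, just over an exaflop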

So, if someone says their system can do an exaflop, consider asking if that’s using Rmax (actual) or Rpeak (theoretical).

Many Metrics in the Exaflop Age

It’s another one of the many nuances in this new exascale era.

And it’s worth noting that HPL and HPL-AI are synthetic benchmarks, meaning they measure performance on math routines, not real-world applications. Other benchmarks, like MLPerf, are based on real-world workloads.

In the end, the best measure of a system’s performance, of course, is how well it runs a user’s applications. That’s a measure not based on exaflops, but on ROI.


July NVIDIA Studio Driver Improves Performance for Chaos V-Ray 6 for 3ds Max

Editor’s note: This post is part of our weekly In the NVIDIA Studio series, which celebrates featured artists, offers creative tips and tricks, and demonstrates how NVIDIA Studio technology accelerates creative workflows. 

Creativity heats up In the NVIDIA Studio as the July NVIDIA Studio Driver, available now, accelerates the recent Chaos V-Ray 6 for 3ds Max release.

Plus, this week’s In the NVIDIA Studio 3D artist, Brian Lai, showcases his development process for Afternoon Coffee and Waffle, a piece that went from concept to completion faster with NVIDIA RTX acceleration in Chaos V-Ray rendering software.

Chaos V-Ray Powered by NVIDIA Studio Offers a July to Remember

Visualization and computer graphics software company Chaos this month released an update to the all-in-one photorealistic rendering software, V-Ray 6 for 3ds Max. The July Studio Driver gives creators on NVIDIA GPUs the best experience.

Add procedural clouds to create beautiful custom skies with the latest V-Ray 6 release.

This release is a major upgrade that gives artists powerful new world-building and workflow tools to quickly distribute 3D objects, generate detailed 3D surfaces and add procedural clouds to create beautiful custom skies.

V-Ray GPU software — which already benefits from high-performance final-frame rendering with RTX-accelerated ray tracing and accelerated interactive rendering with AI-powered denoising — gets a performance bump, in addition to the new features.

Catch some V-Rays with an average of 2x faster GPU Light Cache calculations than V-Ray 5.

3D artists benefit from speedups in multiple ways with V-Ray 6. GPU improvements include support for nearly all new V-Ray 6 features, faster Light Cache calculations and a new Device Selector for assigning rendering devices to tasks. Letting the GPU handle the AI denoiser nearly doubles rendering performance.

Adaptive dome-light rendering clocks in at up to 3x faster than V-Ray 5.

Additional key new features include:

  • Chaos Scatter — easily populate scenes with millions of 3D objects to produce natural-looking landscapes and environments without adjusting objects by hand.
  • Procedural Clouds — simulate a variety of cloud types and weather conditions, from partly cloudy to overcast.
  • Improved trace-depth workflow — simplifies setting trace-depth overrides for reflections and refractions.
  • Shading improvements — include a new energy-preserving GGX shader, a thin-film rollout in the V-Ray material for bubbles and fabrics, and improved V-Ray dirt functionality.

And there’s much more to explore. Creators can join a free V-Ray 6 for 3ds Max webinar on Thursday, July 28, to see how the new features are already helping 3D artists create fantastic visuals.

Sweet, Rendery Waffles

This week In the NVIDIA Studio, Brian Lai, a computer graphics craftsman, details the sweet, buttery-smooth creative process for Afternoon Coffee and Waffle.

Lai enjoyed the sweet satisfaction of creating with a GeForce RTX 3090 GPU and NVIDIA Studio benefits.

Lai’s journey into 3D art started at an early age, inspired by his photographer father — for whom Lai worked until being accepted into Malaysia’s top art college, The One Academy. He finds inspiration in real-world environments, observing what’s around him.

“I was always obsessed with optics and materials, so I just look at things around me, and then I want to replicate them into 3D form,” Lai said. “It’s satisfying to recreate a thing or environment to confuse my audience about whether it’s a real or 3D rendered image.”

It’s no wonder that Afternoon Coffee and Waffle takes on a picturesque quality, as typically exhibited by a social media image showing off a food adventure.

Before Lai turned on RTX.

After finding a reference shot online, Lai started his creative process with basic 3D model blocking, which helped him set a direction for the final image, as well as find a good focal length and position for the camera. Then, he finalized each 3D model, turning them into high-resolution models and texturing each separately. This allowed him to focus on the props’ details.

Lai called his experience creating with a GeForce RTX 3090 “butter smooth throughout the process.”

RTX-accelerated ray tracing happens lightning quick in V-Ray GPU with AI-powered denoising. Lai prefers V-Ray “mainly because it utilizes the power from RT and Tensor Cores on the graphics card,” he said. “I don’t really feel limited by the software.”

The anticipation of completion is so syrupy thick, one can almost taste it.

The artist also used his GPU for simulation in Autodesk Maya to get the best-looking fabrics possible, noting that “texturing in 4K preview is so satisfying.” He then finalized the models with normal map folds in Adobe Substance Painter.

Once all the models were ready, Lai gathered everything into one scene. In his tutorial video, he shows how each shader was constructed in Autodesk Maya’s Hypershade window.

The very last — and creatively infinite — step was look development. Here, Lai had to rein in his tinkering, as he “can do endless improvement in this stage,” he said. Ultimately, he reached the level of realism he was aiming for and called the piece complete.

3D artist and CG craftsman Brian Lai.

Find more work from Lai, who first made his name discovering invert art, or negative drawing, on his Instagram.

Join the #ExtendtheOmniverse contest, running through Friday, Aug. 19. Perform something akin to magic by making your own NVIDIA Omniverse Extension for a chance to win an NVIDIA RTX GPU. Winners will be announced in September at GTC.

Follow NVIDIA Studio on Instagram, Twitter and Facebook. Access tutorials on the Studio YouTube channel and get updates directly in your inbox by subscribing to the NVIDIA Studio newsletter.


Digital Sculptor Does Heavy Lifting With Lightweight Mobile Workstation

As a professional digital sculptor, Marlon Nuñez is on a mission to make learning 3D art skills easier, smoother and more fun for all. And with the help of an NVIDIA RTX-powered Lenovo mobile workstation, he takes his 3D projects to the next level, wherever he goes.

Nuñez is the art director and co-founder of Art Heroes, a 3D art academy. Based in Spain, Nuñez specializes in creating digital humans and stylized 3D characters, a complex feat. But he tackles his demanding creative workflows from anywhere, thanks to his Lenovo ThinkPad P1 powered by the NVIDIA RTX A5000 Laptop GPU.

The speed and performance of RTX-powered technology enable Nuñez to create stunning characters in real time.

“As an artist, render times and look development are where you spend most of your time,” said Nuñez. “NVIDIA RTX allows you to work with ray tracing on, providing artists with the option to make these creative processes faster and easier.”

Powerful Performance on the Go

Nuñez says there are three main benefits he has experienced with his RTX-powered mobile workstation. First, it’s light — the Lenovo ThinkPad P1 packs the power of the ultra-high-end NVIDIA RTX A5000 laptop GPU into a thin chassis that only weighs around four pounds. Nuñez said he can easily travel with his portable workstation — he doesn’t even need a bag to carry it.

Second, the NVIDIA RTX GPU supports intense real-time ray tracing, which allows Nuñez to make photorealistic graphics and hyper-accurate designs. It also helps him tackle challenging tasks and maintain multiple workflows. From multitasking with several apps to rendering on the fly, RTX technology helps Nuñez easily keep up with creative tasks.

And lastly, the Lenovo ThinkPad P1 has a color-calibrated screen. Nuñez finds this preset feature particularly helpful, as it lets him see vibrant colors in his designs without having to worry about screen reflections.

All of these benefits make the ThinkPad P1 the ideal workstation for working under any scenario, the artist said. With the accelerated workflows it enables, as well as the ability to see his designs running in real time, Nuñez can finalize his 3D character designs faster than ever.

Image courtesy of Marlon Nuñez.

NVIDIA RTX Accelerates Creative Development

Nuñez creates extremely detailed 3D creations and characters, which means rendering and look development take a significant amount of time. RTX graphics cards enable unique ray-tracing capabilities that allow Nuñez to easily speed up his overall development process.

“I decided to test the NVIDIA RTX on my new Lenovo ThinkPad P1, and I was pretty shocked at how well it performed inside Unreal Engine 5 with ray tracing enabled,” said Nuñez. “I created the layout, played with the alembic hair and shaders, and used the sequencer on the scene — it was very responsive all the time.”

Ray tracing also enables artists to access extreme precision when it comes to life-like lighting. Because ray-tracing technology automatically renders light behavior in a physically accurate way, Nuñez doesn’t have to manually adjust render settings or complex setups.

Nuñez believes real-time ray tracing is already making a big difference across industries, especially for virtual productions and game development. With the help of an NVIDIA RTX GPU on a mobile workstation, creators can perform complex tasks in less time, from any location.

Learn more about NVIDIA RTX Laptop GPUs and watch Marlon Nuñez talk about his workflow.


Shifting Into High Gear: Lunit, Maker of FDA-Cleared AI for Cancer Analysis, Goes Public in Seoul

South Korean startup Lunit, developer of two FDA-cleared AI models for healthcare, went public this week on the country’s Kosdaq stock market.

The move marks the maturity of the Seoul-based company — which was founded in 2013 and has for years been part of the NVIDIA Inception program that nurtures cutting-edge startups.

Lunit’s AI software for chest X-rays and mammograms is used in 600 healthcare sites across 40 countries. In its home market alone, around 4 million chest X-rays a year are analyzed by Lunit AI models.

Lunit has partnered with GE Healthcare, Fujifilm, Philips and Guardant Health to deploy its AI products. Last year, it achieved FDA clearance for two AI tools: one that analyzes mammograms for signs of breast cancer, and another that triages critical findings in chest X-rays. It’s also received the CE mark in Europe for these, as well as a third model that analyzes tumors in cancer tissue samples.

“By going public, which is just one step in our long journey, I strongly believe that we will succeed and accomplish our mission to conquer cancer through AI,” said Brandon Suh, CEO of Lunit.

Lunit raised $60 million in venture capital funding late last year, and its current market cap is some $320 million, based on its latest closing price. Following its recent regulatory approvals, the startup is expanding its presence in the U.S. and the European Union. It’s also developing additional AI models for 3D mammography.

Forging Partnerships to Deploy AI for Radiology, Oncology

Lunit has four AI products to help radiologists and pathologists detect cancer and deliver care:

  • INSIGHT CXR: Trained on a dataset of 3.5 million cases, this tool detects 10 of the most common findings in chest X-rays with 97-99% accuracy.
  • INSIGHT MMG: This product reduces the chance that physicians overlook breast cancer in screening mammography by 50%.
  • SCOPE IO: Demonstrating 94% accuracy, this AI helps identify 50% more patients eligible for immunotherapy by analyzing tissue slide images of more than 15 types of cancer, including lung, breast and colorectal cancer.
  • SCOPE PD-L1: Trained on more than 1 million annotated cell images, the tool helps accurately quantify expression levels of PD-L1, a protein that influences immune response.

GE Healthcare made eight AI algorithms from INSIGHT CXR available through its Thoracic Care Suite to flag abnormalities in lung X-rays, including pneumonia, tuberculosis and lung nodules.

Fujifilm incorporated INSIGHT CXR into its AI-powered product to analyze chest X-rays. Lunit AI connects to Fujifilm’s X-ray devices and PACS imaging system, and is already used in more than 130 sites across Japan to detect chest nodules, collapsed lungs, and fluid or other foreign substances in the lungs.

Philips, too, is adopting INSIGHT CXR, making the software accessible to users of its diagnostic X-ray solutions. And Guardant Health, a liquid biopsy company, made a $26 million strategic investment in Lunit to support the company’s innovation in precision oncology through the Lunit SCOPE tissue analysis products.

Accelerating Insights With NVIDIA AI

Lunit develops its AI models using various NVIDIA Tensor Core GPUs, including NVIDIA A100 GPUs, in the cloud. Its customers can deploy Lunit’s AI with an NVIDIA GPU-powered server on premises or in the cloud — or within a medical imaging device using the NVIDIA Jetson edge AI platform.

The company also uses NVIDIA TensorRT software to optimize its trained AI models for real-world deployment.
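
The general shape of that workflow (parse a trained model, pick a precision, build a deployable engine) is sketched below. It’s a minimal, generic TensorRT example, not Lunit’s actual pipeline; the file names are placeholders.

    import tensorrt as trt

    logger = trt.Logger(trt.Logger.WARNING)
    builder = trt.Builder(logger)
    network = builder.create_network(
        1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, logger)

    with open("model.onnx", "rb") as f:    # hypothetical trained model
        if not parser.parse(f.read()):
            raise RuntimeError("failed to parse ONNX model")

    config = builder.create_builder_config()
    config.set_flag(trt.BuilderFlag.FP16)  # use Tensor Cores where possible

    engine = builder.build_serialized_network(network, config)
    with open("model.plan", "wb") as f:    # deploy this plan on the target GPU
        f.write(engine)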

“The goal here is to optimize our AI in actual user settings — for the specific NVIDIA GPUs that operate the AI,” said Donggeun Yoo, chief of research at Lunit.

Over the years, Lunit has presented its work at NVIDIA GTC and as an NVIDIA Inception member at the prestigious RSNA conference for radiology.

“It was very helpful for us to build credibility as a startup,” said Yoo. “I believe joining Inception helped trigger the bigger acknowledgements that followed from the healthcare industry.”

Join the NVIDIA Inception community of over 10,000 technology startups, and register for NVIDIA GTC, running online Sept. 19-22, to hear more from leaders in healthcare AI.

Subscribe to NVIDIA healthcare news.


Get Battle Ready With New GeForce NOW Fortnite Reward

<Incoming Transmission> Epic Games is bringing a new Fortnite reward to GeForce NOW, available to all members. Drop from the Battle Bus in Fortnite on GeForce NOW between today and Thursday, Aug. 4, to earn “The Dish-stroyer Pickaxe” in game for free.

<Transmission continues> Members can earn this item by streaming Fortnite on GeForce NOW on their PCs, Macs, Chromebooks, SHIELD TVs and with an optimized touch experience on iOS Safari and Android mobile devices. Thanks to the power of Epic and GeForce servers, all GeForce NOW members can take the action wherever they go.

Plus, nine new titles arrive on GeForce NOW this week, joining Fortnite and 1,300+ other games streaming at GeForce quality.

<bZZZt bZZZt> Whoops, sorry there, almost lost the signal. It’s coming through loud and clear now for a jam-packed GFN Thursday.

Dish It Out With This In-Game Reward

Bring home the big wins with the free “Dish-stroyer Pickaxe,” a reward available to GeForce NOW members who stream Fortnite any time between today at noon Eastern and Thursday, Aug. 4, at 11:59 p.m. Eastern. Rewards will appear in accounts starting Thursday, Aug. 11. Check out this FAQ from Epic for more details on how to link your Epic account to GFN.

The look on people’s faces when they saw this in-game item was priceless. One could say the reception was incredible…

Fortnite fans can try out GeForce NOW for free to obtain this reward, and play Fortnite across all compatible GeForce NOW devices, including on mobile with intuitive touch controls, Windows PC, macOS, iOS Safari, Android phones and tablets, Android TV, SHIELD TV, 2022 Samsung TVs and select LG TV models.

All members are eligible for this in-game reward, regardless of membership tier. For RTX 3080 members, taking out opponents with “The Dish-stroyer” will feel even more victorious — with ultra-low latency, eight-hour gaming sessions and streaming at 4K resolution and 60 frames per second, or 1440p at 120 FPS on the PC and Mac apps.

With 120 FPS streaming now broadly available on 120Hz Android devices, RTX 3080 members can stream Fortnite at higher frame rates to more phones and tablets for an even more responsive experience.

Keep the Victories Rolling

Who let the fox out? Play as a mother fox and defend her three cubs in the eco-conscious adventure Endling — Extinction is Forever.

This week also adds nine new games, including 3D platformer Hell Pie. GFN members can now see “Nate the demon” and “Nugget the angel” in all their fearsome glory, powered by ray-tracing technology for more vibrant gameplay. Check out the other titles now available to stream:

Before we go, we’ve got one last message to transmit that comes with a challenge. Let us know your response on Twitter or in the comments.


Researchers Use GPUs to Give Earbud Users a ‘Mute Button’ for Background Noise

Thanks to earbuds, you can take calls anywhere, while doing anything. The problem: those on the other end of the call hear it all, too, from your roommate’s vacuum cleaner to background conversations at the cafe you’re working from.

Now, work by a trio of graduate students at the University of Washington, who spent the pandemic cooped up together in a noisy apartment, lets those on the other end of the call hear just you — rather than all the stuff going on around you.

Users found that the system, dubbed “ClearBuds” — presented last month at the ACM International Conference on Mobile Systems, Applications, and Services — suppressed background noise much better than a commercially available alternative.

“You’re removing your audio background the same way you can remove your visual background on a video call,” explained Vivek Jayaram, a doctoral student in the Paul G. Allen School of Computer Science & Engineering.

Outlined in a paper co-authored by the three roommates, all computer science and engineering graduate students at the University of Washington — Maruchi Kim, Ishan Chatterjee, and Jayaram — ClearBuds are different from other wireless earbuds in two big ways.

The ClearBuds hardware (round disk) in front of the 3D printed earbud enclosures. Credit: Raymond Smith, University of Washington

First, ClearBuds use a microphone on each earbud.

While most earbud systems stream audio from just one earbud, ClearBuds stream from both, creating two audio streams.

This creates higher spatial resolution for the system to better separate sounds coming from different directions, Kim explained. In other words, it makes it easier for the system to pick out the earbud wearer’s voice.

Second, the team created a neural network algorithm that can run on a mobile phone to process the audio streams to identify which sounds should be enhanced and which should be suppressed.

The researchers relied on two separate neural networks to do this.

The first neural network suppresses everything that isn’t a human voice.

The second enhances the speaker’s voice. The wearer’s voice can be identified because it arrives at the microphones in both earbuds at the same time.

Together, they effectively mask background noise and ensure the earbud wearer is heard loud and clear.
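
To make the two-stage idea concrete, here’s a heavily simplified sketch using stand-in convolutional networks on the stacked left/right streams; it is not the researchers’ actual architecture.

    import torch
    import torch.nn as nn

    class Suppressor(nn.Module):     # stage 1: keep only human voice
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv1d(2, 16, 9, padding=4), nn.ReLU(),
                nn.Conv1d(16, 2, 9, padding=4), nn.Sigmoid())
        def forward(self, x):        # x: (batch, 2 earbud channels, samples)
            return x * self.net(x)   # learned soft mask over the waveform

    class Enhancer(nn.Module):       # stage 2: boost the wearer's voice
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv1d(2, 16, 9, padding=4), nn.ReLU(),
                nn.Conv1d(16, 1, 9, padding=4))
        def forward(self, x):
            return self.net(x)       # collapse to one clean mono stream

    left = torch.randn(1, 1, 16000)  # one second at 16 kHz per earbud (dummy)
    right = torch.randn(1, 1, 16000)
    clean = Enhancer()(Suppressor()(torch.cat([left, right], dim=1)))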

ClearBuds isolate a user’s voice from background noise by performing voice separation using a pair of wireless, synchronized earbuds. Source: Maruchi Kim, University of Washington

While the software the researchers created was lightweight enough to run on a mobile device, they relied on an NVIDIA TITAN desktop GPU to train the neural networks. They used both synthetic audio samples and real audio. Training took less than a day.

And the results, users reported, were dramatically better than commercially available earbuds, results that are winning recognition industrywide.

The team took second place for best paper at last month’s ACM MobiSys 2022 conference. In addition to Kim, Chatterjee and Jayaram, the paper’s co-authors included Ira Kemelmacher-Shlizerman, an associate professor at the Allen School; Shwetak Patel, a professor in both the Allen School and the electrical and computer engineering department; and Shyam Gollakota and Steven Seitz, both professors in the Allen School.

Read the full paper here: https://dl.acm.org/doi/10.1145/3498361.3538933

To be sure, the system outlined in the paper can’t be adopted instantly. While many earbuds have two microphones per earbud, they only stream audio from one earbud. Industry standards are just catching up to the idea of processing multiple audio streams from earbuds.

Nevertheless, the researchers are hopeful their work, which is open source, will inspire others to couple neural networks and microphones to provide better quality audio calls.

The ideas could also be useful for isolating and enhancing conversations over smart speakers, by harnessing them as ad hoc microphone arrays, and even for tracking robot locations or aiding search-and-rescue missions, Kim said.

Sounds good to us.

Featured image credit: Raymond Smith, University of Washington


Lucid Motors’ Mike Bell on Software-Defined Innovation for the Luxury EV Brand

AI and electric vehicle technology breakthroughs are transforming the automotive industry. These developments pave the way for new innovators, attracting technical prowess and design philosophies from Silicon Valley.

Mike Bell, senior vice president of digital at Lucid Motors, sees continuous innovation coupled with over-the-air updates as key to designing sustainable, award-winning intelligent vehicles that provide seamless automated driving experiences.

NVIDIA’s Katie Burke Washabaugh spoke with Bell on the latest AI Podcast episode, covering what it takes to stay ahead in the software-defined vehicle space.

Bell touched on future technology and its implications for the mass adoption of sustainable, AI-powered EVs — as well as what Lucid’s Silicon Valley roots bring to the intersection of innovation and transportation.



You Might Also Like

Driver’s Ed: How Waabi Uses AI, Simulation to Teach Autonomous Vehicles to Drive
Teaching the AI brains of autonomous vehicles to understand the world as humans do requires billions of miles of driving experience. The road to achieving this astronomical level of driving leads to the virtual world. Learn how Waabi uses powerful high-fidelity simulations to train and develop production-level autonomous vehicles.

Polestar’s Dennis Nobelius on the Sustainable Performance Brand’s Plans
Driving enjoyment and autonomous driving capabilities can complement one another in intelligent, sustainable vehicles. Learn about the automaker’s plans to unveil its third vehicle, the Polestar 3, the tech inside it, and what the company’s racing heritage brings to the intersection of smarts and sustainability.

GANTheftAuto: Harrison Kinsley on AI-Generated Gaming Environments
Humans playing games against machines is nothing new, but now computers can develop their own games for people to play. Programming enthusiast and social media influencer Harrison Kinsley created GANTheftAuto, an AI-based neural network that generates a playable chunk of the classic video game Grand Theft Auto V.

Subscribe to the AI Podcast: Now Available on Amazon Music

The AI Podcast is now available through Amazon Music.

In addition, get the AI Podcast through iTunes, Google Podcasts, Google Play, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn.

Make the AI Podcast better: Have a few minutes to spare? Fill out this listener survey.


Living on the Edge: New Features for NVIDIA Fleet Command Deliver All-in-One Edge AI Management, Maintenance for Enterprises

NVIDIA Fleet Command — a cloud service for deploying, managing and scaling AI applications at the edge — today introduced new features that enhance the seamless management of edge AI deployments around the world.

With the scale of edge AI deployments, organizations can have up to thousands of independent edge locations that must be managed by IT teams — sometimes in far-flung locations like oil rigs, weather gauges, distributed retail stores or industrial facilities.

NVIDIA Fleet Command offers a simple, managed platform for container orchestration that makes it easy to provision and deploy AI applications and systems at thousands of distributed environments, all from a single cloud-based console.

But deployment is just the first step in managing AI applications at the edge. Optimizing these applications is a continuous process that involves applying patches, deploying new applications and rebooting edge systems.

To make these workflows seamless in a managed environment, Fleet Command now offers advanced remote management, multi-instance GPU provisioning and additional integrations with tools from industry collaborators.

Advanced Remote Management 

IT administrators now can access systems and applications with sophisticated security features. Remote management on Fleet Command offers access controls and timed sessions, eliminating vulnerabilities that come with traditional VPN connections. Administrators can securely monitor activity and troubleshoot issues at remote edge locations from the comfort of their offices.

Edge environments are extremely dynamic — which means administrators responsible for edge AI deployments need to be highly nimble to keep up with rapid changes and ensure little deployment downtime. This makes remote management a critical feature for every edge AI deployment.

Check out a complete walkthrough of the new remote management features and how they can be used to help administrators maintain and optimize even the largest edge deployments.

Multi-Instance GPU Provisioning 

Multi-Instance GPU, or MIG, partitions an NVIDIA GPU into several independent instances. MIG is now available on Fleet Command, letting administrators easily assign applications to each instance from the Fleet Command user interface. By allowing organizations to run multiple AI applications on the same GPU, MIG lets organizations right-size their deployments and get the most out of their edge infrastructure.
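
Under the hood, MIG partitioning looks like the sketch below, shown with the standard nvidia-smi CLI; Fleet Command surfaces the same capability through its console. The GPU index and the 1g.5gb profile are assumptions for illustration.

    import subprocess

    # Enable MIG mode on GPU 0 (requires admin rights and a GPU reset).
    subprocess.run(["nvidia-smi", "-i", "0", "-mig", "1"], check=True)

    # Carve out two 1g.5gb GPU instances and their compute instances (-C).
    subprocess.run(["nvidia-smi", "mig", "-cgi", "1g.5gb,1g.5gb", "-C"],
                   check=True)

    # List the resulting MIG devices that applications can be assigned to.
    subprocess.run(["nvidia-smi", "-L"], check=True)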

Learn more about how administrators can use MIG in Fleet Command to better optimize edge resources to scale new workloads with ease.

Working Together to Expand AI

New Fleet Command collaborations are also helping enterprises create a seamless workflow, from development to deployment at the edge.

Domino Data Lab provides an enterprise MLOps platform that allows data scientists to collaboratively develop, deploy and monitor AI models at scale using their preferred tools, languages and infrastructure. The Domino platform’s integration with Fleet Command gives data science and IT teams a single system of record and consistent workflow with which to manage models deployed to edge locations.

Milestone Systems, a leading provider of video management systems and NVIDIA Metropolis elite partner, created AI Bridge, an application programming interface gateway that makes it easy to give AI applications access to consolidated video feeds from dozens of camera streams. Now integrated with Fleet Command, Milestone AI Bridge can be easily deployed to any edge location.

IronYun, an NVIDIA Metropolis elite partner and top-tier member of the NVIDIA Partner Network, applies advanced AI, evolved over multiple generations of its Vaidio AI platform, to security, safety and operational applications worldwide. Vaidio is an open platform that works with any IP camera and integrates out of the box with dozens of market-leading video management systems. It can be deployed on premises, in the cloud, at the edge or in hybrid environments, and scales from one camera to thousands. Fleet Command makes it easier to deploy Vaidio AI at the edge and simplifies management at scale.

With these new features and expanded collaborations, Fleet Command ensures that the day-to-day process of maintaining, monitoring and optimizing edge deployments is straightforward and painless.

Test Drive Fleet Command

To try these features on Fleet Command, check out NVIDIA LaunchPad for free.

LaunchPad provides immediate, short-term access to a Fleet Command instance to easily deploy and monitor real applications on real servers using hands-on labs that walk users through the entire process — from infrastructure provisioning and optimization to application deployment for use cases like deploying vision AI at the edge of a network.
