DENZA Unwraps Smart Driving Options for N7 Model Lineup, Powered by NVIDIA DRIVE Orin

DENZA, the luxury electric-vehicle brand and joint venture between BYD and Mercedes-Benz, is debuting new intelligent driving features for its entire N7 model lineup, powered by the NVIDIA DRIVE Orin system-on-a-chip (SoC).

The N7 series was introduced earlier this year as a family of spacious five-seater SUVs for commuters looking to sport a deluxe EV with advanced driving functionality.

All N7 models can be equipped with the NVIDIA DRIVE Orin SoC for high-performance compute to simultaneously run in-vehicle applications and deep neural networks for automated driving.

NVIDIA DRIVE Orin serves as the brain behind DENZA’s proprietary Commuter Smart Driving system, which offers an array of smart features, including:

  • Navigate on autopilot for high-speed, all-scenario assisted driving.
  • Intelligent speed-limit control and emergency lane-keeping aid, for safer commutes on urban roads and highways.
  • Enhanced automatic emergency braking and front cross-traffic alert for increased safety at intersections and on narrow streets.
  • Automated parking assist, which scouts for parking spots, identifying horizontal, vertical and diagonal spaces to ease the challenge of parking in crowded areas.

Next-Gen Car Configuration

In addition to adopting accelerated computing in the car, DENZA is one of the flagship automotive trailblazers using the NVIDIA Omniverse Cloud platform to build and deploy next-generation car configurators, delivering greater personalization options in the vehicle-purchasing experience.

Learn more about the DENZA N7 3D configurator.

The Fastest Path: Healthcare Startup Uses AI to Analyze Cancer Cells in the Operating Room

Medical-device company Invenio Imaging is developing technology that enables surgeons to evaluate tissue biopsies in the operating room, immediately after samples are collected — providing in just three minutes AI-accelerated insights that would otherwise take weeks to obtain from a pathology lab.

In a surgical biopsy, a medical professional removes samples of cells or tissue that pathologists analyze for diseases such as cancer. By delivering these capabilities through a compact, AI-powered imaging system within the treatment room, Invenio aims to support rapid clinical decision-making.

“This technology will help surgeons make intraoperative decisions when performing a biopsy or surgery,” said Chris Freudiger, chief technology officer of Silicon Valley-based Invenio. “They’ll be able to rapidly evaluate whether the tissue sample contains cancerous cells, decide whether they need to take another tissue sample and, with the AI models Invenio is developing, potentially make a molecular diagnosis for personalized medical treatment within minutes.”

Quicker diagnosis enables quicker treatment. It’s especially critical for aggressive types of cancer that could grow or spread significantly in the weeks it takes for biopsy results to return from a dedicated pathology lab.

Invenio is a member of NVIDIA Inception, a program that provides cutting-edge startups with technological support and AI platform guidance. The company accelerates AI training and inference using NVIDIA GPUs and software libraries.

Laser Focus on Cancer Care

The NIO Laser Imaging System accelerates the imaging of fresh tissue biopsies.

Invenio’s NIO Laser Imaging System is a digital pathology tool that accelerates the imaging of fresh tissue biopsies. It’s been used in thousands of procedures in the U.S. and Europe. In 2021, it received the CE Mark of regulatory approval in Europe.

The company plans to adopt the NVIDIA Jetson Orin series of edge AI modules for its next-generation imaging system, which will feature near real-time AI inference accelerated by the NVIDIA TensorRT SDK.

“We’re building a layer of AI models on top of our imaging capabilities to provide physicians with not just the diagnostic image but also an analysis of what they’re seeing,” Freudiger said. “With the AI performance provided by NVIDIA Jetson at the edge, they’ll be able to quickly determine what kinds of cancer cells are present in a biopsy image.”

Invenio uses a cluster of NVIDIA RTX A6000 GPUs to train neural networks with tens of millions of parameters on pathologist-annotated images. The models were developed using the TensorFlow deep learning framework and trained on images acquired with NIO imaging systems.

“The most powerful capability for us is the expanded VRAM on the RTX A6000 GPUs, which allows us to load large batches of images and capture the variability of features,” Freudiger said. “It makes a big difference for AI training.”
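The arithmetic behind that point is simple: a batch of image tensors grows linearly with batch size and resolution, so more VRAM directly permits larger batches. The sketch below illustrates the back-of-the-envelope estimate; the image dimensions and batch size are hypothetical, as Invenio hasn't published its actual training configuration.

```python
# Rough VRAM estimate for a training batch of image tensors, illustrating
# why larger GPU memory permits larger batches. Numbers are illustrative,
# not Invenio's actual configuration.

def batch_memory_gb(batch_size, height, width, channels, bytes_per_value=4):
    """Approximate memory for one batch of float32 image tensors, in GB."""
    return batch_size * height * width * channels * bytes_per_value / 1e9

# A hypothetical batch of 64 pathology tiles at 1024x1024x3, float32:
gb = batch_memory_gb(64, 1024, 1024, 3)
print(f"Input tensors alone: {gb:.2f} GB")  # → Input tensors alone: 0.81 GB
```

And this counts only the input tensors; activations, gradients and optimizer state multiply the footprint further, which is why 48 GB cards like the RTX A6000 help.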

On the Path to Clinical Deployment

One of Invenio’s AI products, NIO Glioma Reveal, is approved for clinical use in Europe and available for research use in the U.S. to help identify areas of cancerous cells in brain tissue.

A team of Invenio’s collaborators from the University of Michigan, New York University, University of California San Francisco, the Medical University of Vienna and University Hospital of Cologne recently developed a deep learning model that can find biomarkers of cancerous tumors with 93% accuracy in 90 seconds.

With this ability to analyze different molecular subtypes of cancer within a tissue sample, doctors can predict how well a patient will respond to chemotherapy — or determine whether a tumor has been successfully removed during surgery.

Beyond its work on brain tissue analysis, Invenio this year announced a clinical research collaboration with Johnson & Johnson’s Lung Cancer Initiative to develop and validate an AI solution that can help evaluate lung biopsies. The AI model will help doctors rapidly determine whether collected tissue samples contain cancer.

Lung cancer is the world’s deadliest form of cancer, and in the U.S. alone, lung nodules are found in over 1.5 million patients each year. Once approved for clinical use, Invenio’s NIO Lung Cancer Reveal tool aims to shorten the time needed to analyze tissue biopsies for these patients.

As part of this initiative, Invenio will run a clinical study before submitting the NVIDIA Jetson-powered AI solution for FDA approval.

Subscribe to NVIDIA healthcare news.

NVIDIA Works With NTT DOCOMO to Launch World’s First GPU-Accelerated 5G Network

As generative AI sweeps across corporate boardrooms around the world, global telecommunications companies are exploring how to cost-effectively deliver many of these new AI applications to the edge over 5G and upcoming 6G networks.

Telcos plan to deploy over 17 million 5G microcells and towers worldwide by 2025. Building, managing and optimizing this new infrastructure while maintaining quality-of-service delivery and maximizing the customer experience is the industry’s next big challenge.

Today, NTT DOCOMO announced it is deploying a GPU-accelerated wireless solution in its network in Japan. This makes it the first-ever telco in the world to deploy a GPU-accelerated commercial 5G network.

DOCOMO’s move aims to address the multibillion-dollar challenge of improving performance, total cost of ownership and energy efficiency while unlocking the flexibility, scalability and supply-chain diversity promised by Open RAN.

The 5G Open RAN solution uses a high-performance 5G virtual radio access network (vRAN) from Fujitsu built on the NVIDIA Aerial vRAN stack and NVIDIA Converged Accelerators. This combination enables telcos to create a fully software- and cloud-defined network that can dynamically allocate resources using industry-standard equipment.

“Open RAN offers the opportunity to build next-generation 5G networks with unprecedented flexibility and scalability thanks to multivendor connections,” said Sadayuki Abeta, global head of Open RAN solutions at NTT DOCOMO. “We look forward to continuing working with NVIDIA on infrastructure solutions that meet those needs.”

The 5G Open RAN solution is the first 5G vRAN for telco commercial deployment using the NVIDIA Aerial platform, with a hardened, carrier-grade vRAN stack. The platform brings together the NVIDIA Aerial vRAN stack for 5G, AI frameworks, accelerated compute infrastructure and long-term software support and maintenance.

Cost and Energy-Efficiency Benefits of GPU Acceleration

Working with offerings from Fujitsu and Wind River, the new 5G solution uses the NVIDIA Aerial platform to lower costs and reduce power consumption. Compared to its existing 5G network deployments, DOCOMO says the solution reduces total costs by up to 30%, improves network design utilization by up to 50% and cuts power consumption at base stations by up to 50%.

“Delivering a 5G Open RAN network that meets stringent performance requirements of operators is a significant accomplishment,” said Masaki Taniguchi, senior vice president and head of the Mobile System Business Unit at Fujitsu Limited. “Using our Fujitsu vCU/vDU, in combination with NVIDIA Aerial platform, will help network operators to efficiently build high-performance 5G networks for consumers and businesses alike.”

NVIDIA’s contribution to the DOCOMO rollout is part of a growing portfolio of 5G solutions driving transformation in the telecommunications industry. Anchored by the NVIDIA Aerial vRAN stack and NVIDIA Converged Accelerators — combined with NVIDIA BlueField data processing units (DPUs) and a suite of AI frameworks — NVIDIA provides a high-performance, software-defined, cloud-native, AI-enabled 5G solution for on-premises deployments and telco operators’ RANs.

Fujitsu, NVIDIA and Wind River have been working under the OREX (5G Open RAN service brand), which was launched by DOCOMO in February 2021, to develop the Open RAN 5G vRAN. OREX has been deployed in Japan based on Fujitsu’s virtualized DU (vDU) and virtualized CU (vCU) and leverages commercial off-the-shelf servers, the Wind River cloud platform, Fujitsu’s 5G vRAN software and the NVIDIA Aerial vRAN stack and NVIDIA Converged Accelerators.

“Wind River is delighted to work with NTT DOCOMO, Fujitsu and NVIDIA towards a vision of improved efficiency of RAN deployments and operations,” said Paul Miller, chief technology officer at Wind River. “Wind River Studio provides a cloud-native, distributed cloud, automation, and analytics solution based on open source, so that operators can deploy and manage their 5G edge networks globally at high scale with faster innovation. This solution is proven in highly scaled production deployments today.”

OREX: Building Out From Japan and Beyond

DOCOMO and its partners in OREX are promoting a multivendor, Open RAN-compliant 5G vRAN to the global operator community. The commercial deployment in Japan is a first step in the vision of OREX, where members can commercially validate their solutions and then promote them to other operators globally.

NVIDIA is working with DOCOMO and other partners to support operators around the world to deploy high-performance, energy-efficient, software-defined, commercial 5G vRAN.

NVIDIA CEO and Founder Jensen Huang Returns to Denny’s Where NVIDIA Launched a Trillion-Dollar Vision

Talk about a Grand Slam.

Denny’s CEO Kelli Valade was joined Tuesday by NVIDIA CEO Jensen Huang to unveil a plaque at the Silicon Valley Denny’s where NVIDIA’s founders hatched their idea for a chip that would enable realistic 3D graphics on personal computers.

“This is a place where we fuel ideas. Your story is so inspiring it will continue to inspire people at Denny’s,” Valade said, as she presented the plaque to Huang.

“Denny’s has taught me so many lessons,” Huang said.

Both CEOs got their start working in diners. Valade got her first job as a waitress at a diner when she was 16. Huang got his first job at Denny’s in Portland when he was 15.

“I was a dishwasher, I was a busboy, I waited tables,” Huang said. “No one can carry more coffee cups than I can.”

To fuel even more great ideas, Valade announced the Denny’s Trillion-Dollar Incubator Contest — offering $25,000 in seed money for the next $1 trillion idea.

The contest is open to anyone with a creative and innovative idea that could impact the world. The only catch? The idea must originate — like NVIDIA — in a Denny’s booth.

NVIDIA, the leading accelerated computing and AI company, got its start at the 24-hour diner chain known for favorites such as its signature Grand Slam combo.

In 1993, three friends — Huang, Chris Malachowsky and Curtis Priem — met at Denny’s to discuss creating a chip that would enable realistic 3D graphics on personal computers.

The Denny’s just off a busy thoroughfare in the heart of Silicon Valley was the perfect place to start a business, said Huang, who lived nearby at the time with his wife and kids.

“It had all the coffee you could drink and no one could chase you out,” Huang told Valade.

Tuesday’s event took place in a corner of the bustling restaurant — one of the most popular Denny’s locations in Northern California — as families, retirees, and workers coming off the night shift piled in for plates piled high with eggs and pancakes, sausage and bacon.

Huang was among them, starting the day with a meeting where he and his table polished off a Lumberjack Slam, Moons Over My Hammy, and a Super Bird sandwich — washed down with plenty of coffee. 

Huang, whose family immigrated to the United States from Taiwan when he was a child, told Valade he had his first hamburger and his first milkshake at Denny’s.

“We make the best pancakes here,” Huang said.

“I love how you still say ‘we,’” Valade said.

“Made fresh every day,” Huang added with a grin.

Valade and Huang said Denny’s can be a great launching pad, not just for great ideas, but for great careers.

“Start your first job in the restaurant business,” Huang said. “It teaches you humility, it teaches you hard work, it teaches you hospitality.”

Valade agreed wholeheartedly. After talking shop with Huang, she checked in with diners like Alfred, who was tucking into a stack of pancakes amid the morning rush.

For people across Silicon Valley, it’s the place to be. “I come here every day,” the retired roofer said.

Full contest details for the Denny’s Trillion-Dollar Incubator Contest can be found at www.dennys.com/trilliondollarincubator. Contestants can submit their ideas online or at a Denny’s restaurant by Nov. 21 at 8:59 a.m. The winner will be announced early next year.

 

IMAGE CREDIT: Don Feria/AP Images for Denny’s

 

AI Power Players: GeForce and NVIDIA RTX GPUs Supercharge Creativity, Gaming, Development, Productivity and More

Editor’s note: This post is part of our weekly In the NVIDIA Studio series, which celebrates featured artists, offers creative tips and tricks and demonstrates how NVIDIA Studio technology improves creative workflows. We’re also deep-diving on new GeForce RTX 40 Series GPU features, technologies and resources and how they dramatically accelerate content creation.

Improving gaming, creating and everyday productivity, NVIDIA RTX graphics cards feature specialized Tensor Cores that deliver cutting-edge performance and transformative capabilities for AI.

An arsenal of 100 million AI-ready, RTX-powered Windows 11 PCs and workstations is primed to excel in AI workflows and help launch a new wave of training models and apps.

This week In the NVIDIA Studio, learn more about these AI power-players and read about Victor de Martrin, a self-taught 3D artist who shares the creative process behind his viral video Ascension, which features an assist from AI and uses a new visual effect process involving point clouds.

AI on the Prize

AI can elevate both work and play.

Content creators with NVIDIA Studio hardware can access over 100 AI-enabled and RTX-accelerated creative apps, including the Adobe Creative Cloud suite, Blender, Blackmagic Design’s DaVinci Resolve, OBS Studio and more.

This includes the NVIDIA Studio suite of AI tools and exclusive next-generation technology such as NVIDIA DLSS 3.5, featuring Ray Reconstruction and OptiX. AI image generation on a GeForce RTX 4090-powered PC can be completed at lightning speed — the same task would take 8x longer with an Apple M2 Ultra.

3D modelers can benefit immensely from AI, with dramatic improvements in real-time viewport rendering and upscaling with DLSS in NVIDIA Omniverse, Adobe Substance 3D Painter, Blender, D5 Render, Unreal Engine and more.

D5 Render full ray-tracing preview with Ray Reconstruction.

Gamers are primed for AI boosts with DLSS in over 300 games, including Alan Wake 2, coming soon, and Cyberpunk 2077: Phantom Liberty, available for download today.

Modders can breathe new life into classic games with NVIDIA RTX Remix’s revolutionary AI upscaling and texture enhancements. Game studios deploying the NVIDIA Avatar Cloud Engine (ACE) can build intelligent game characters, interactive avatars and digital humans in apps at scale — saving time and money.

Everyday tasks also get a boost. AI can draft emails, summarize content and upscale video to stunning 4K on YouTube and Netflix with RTX Video Super Resolution. Frequent video chat users can harness the power of NVIDIA Broadcast for AI-enhanced voice and video features. And livestreamers can take advantage of AI-powered features like virtual backgrounds, noise removal and eye contact, which moves the eyes of the speaker to simulate eye contact with the camera.

NVIDIA hardware powers AI natively and in the cloud — including for the world’s leading cloud AIs like ChatGPT, Midjourney, Microsoft 365 Copilot, Adobe Firefly and more.

Get further with AI — faster on RTX — and learn more to get started.

AI on Windows

NVIDIA AI, the world’s leading AI development platform, is supported by 100 million AI-ready RTX-powered Windows 11 PCs and workstations.

State-of-the-art technologies behind the Windows platform and NVIDIA’s AI hardware and software enable GPU-accelerated workflows for training and deploying AI models, exclusive tools, containers, software development kits and new open-source models optimized for RTX.

100 million RTX-powered Windows 11 PCs and workstations are already AI-ready.

With these integrations, Windows and NVIDIA are making it easier for developers to create the next generation of AI-powered Windows apps.

AI on Viral Content

3D content creator Martrin saw content creation as a means to gain financial, creative and geographical freedom — as well as a sense of gratification that he wasn’t getting in his day job in advertising.

“To gain recognition as an artist nowadays, unless you have connections, you’ve got to grind it out on social media,” admitted Martrin. As he began to showcase artwork to wider audiences, Martrin unexpectedly enjoyed the process of creating art aimed at garnering social media attention.

Best known for his viral video series Satisfying Marble Music — in which he replicates melodies by simulating a marble bouncing on xylophone notes — Martrin also takes a keen interest in the latest trends and advancements in the world of 3D to consistently meet his clients’ evolving demands.

In his recent project Ascension, he experimented with a point-cloud modeling technique for the first time, aiming to create a transition between reality and 3D. 

Martrin used several 3D apps to refine video footage and animate himself in 3D. The initial stages involved modeling and animating a portrait of himself, a task he accomplished in Daz Studio and Blender. His PC, equipped with two NVIDIA GeForce RTX 4090 GPUs, easily handled the work.

Animations come to life in Blender.

He then used the GPU-accelerated Marvelous Designer cloth simulation engine to craft and animate his clothing.

Realistic clothes generated in Marvelous Designer.

Martrin stressed the importance of GPU acceleration in his creative workflow. “When did I use it? The countless times I checked my live viewer and during the creation of every render,” said Martrin.

From there, Martrin tested point-cloud modeling techniques. He first 3D-scanned his room, then, instead of using the scan as a conventional 3D mesh, deployed it as a set of data points in space to create a unique visual style.

Background elements added in Cinema 4D.

He then used Cinema 4D and OctaneRender with RTX-accelerated ray tracing to render the animation.

Martrin completed the project by applying a GPU-accelerated composition and special effects in Adobe After Effects, achieving faster rendering with NVIDIA CUDA technology.

“No matter how tough it is at the start, keep grinding and challenging — odds are, success will eventually follow,” said Martrin.

3D content creator Victor de Martrin.

Check out Martrin’s viral content on TikTok.

Follow NVIDIA Studio on Instagram, Twitter and Facebook. Access tutorials on the Studio YouTube channel and get updates directly in your inbox by subscribing to the Studio newsletter. 

Six Steps Toward AI Security

In the wake of ChatGPT, every company is trying to figure out its AI strategy, work that quickly raises the question: What about security?

Some may feel overwhelmed at the prospect of securing new technology. The good news is policies and practices in place today provide excellent starting points.

Indeed, the way forward lies in extending the existing foundations of enterprise and cloud security. It’s a journey that can be summarized in six steps:

  • Expand analysis of the threats
  • Broaden response mechanisms
  • Secure the data supply chain
  • Use AI to scale efforts
  • Be transparent
  • Create continuous improvements
AI security builds on protections enterprises already rely on.

Take in the Expanded Horizon

The first step is to get familiar with the new landscape.

Security now needs to cover the AI development lifecycle. This includes new attack surfaces like training data, models and the people and processes using them.

Extrapolate from the known types of threats to identify and anticipate emerging ones. For instance, an attacker might try to alter the behavior of an AI model by accessing data while it’s training the model on a cloud service.

The security researchers and red teams who probed for vulnerabilities in the past will be great resources again. They’ll need access to AI systems and data to identify and act on new threats as well as help build solid working relationships with data science staff.

Broaden Defenses

Once a picture of the threats is clear, define ways to defend against them.

Monitor AI model performance closely. Assume it will drift, opening new attack surfaces, just as it can be assumed that traditional security defenses will be breached.
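One concrete way to monitor for drift is to compare the model's recent prediction distribution against a baseline captured at deployment. The sketch below uses the population stability index (PSI); the bucket values and the 0.2 alert threshold are common conventions chosen for illustration, not prescribed by any NVIDIA tooling.

```python
# Minimal drift-monitoring sketch: compare a model's recent prediction
# distribution to a deployment-time baseline via the population stability
# index (PSI). Data and threshold are illustrative.
import math

def psi(baseline, recent):
    """PSI between two probability distributions over the same buckets."""
    eps = 1e-6  # avoid log(0) for empty buckets
    return sum(
        (r - b) * math.log((r + eps) / (b + eps))
        for b, r in zip(baseline, recent)
    )

baseline = [0.70, 0.20, 0.10]   # class frequencies at deployment time
recent   = [0.45, 0.30, 0.25]   # class frequencies observed this week

score = psi(baseline, recent)
if score > 0.2:                 # a common rule of thumb: >0.2 suggests drift
    print(f"PSI={score:.3f}: distribution has drifted, review the model")
```

A check like this can run on every batch of predictions, turning "assume the model will drift" into an alert that fires when it does.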

Also build on the PSIRT (product security incident response team) practices that should already be in place.

For example, NVIDIA released product security policies that encompass its AI portfolio. Several organizations — including the Open Worldwide Application Security Project — have released AI-tailored implementations of key security elements such as the common vulnerability enumeration method used to identify traditional IT threats.

Adapt and apply to AI models and workflows traditional defenses like:

  • Keeping network control and data planes separate
  • Removing any unsafe or personal identifying data
  • Using zero-trust security and authentication
  • Defining appropriate event logs, alerts and tests
  • Setting flow controls where appropriate

Extend Existing Safeguards

Protect the datasets used to train AI models. They’re valuable and vulnerable.

Once again, enterprises can leverage existing practices. Create secure data supply chains, similar to those created to secure channels for software. It’s important to establish access control for training data, just like other internal data is secured.

Some gaps may need to be filled. Today, security specialists know how to use hash files of applications to ensure no one has altered their code. That process may be challenging to scale for petabyte-sized datasets used for AI training.

The good news is researchers see the need, and they’re working on tools to address it.
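In the meantime, the per-application hashing practice extends naturally to datasets: hashing each file separately into a manifest, rather than the whole dataset at once, keeps verification streamable and incremental. The sketch below is one minimal way to do that; the layout and function names are assumptions, not a description of any particular tool.

```python
# Sketch of a dataset integrity manifest, extending application hashing to
# large training sets. Hashing per file (not the whole dataset at once)
# lets integrity checks be parallelized and re-verified incrementally.
import hashlib
from pathlib import Path

def file_sha256(path, chunk_size=1 << 20):
    """Stream a file through SHA-256 in 1 MB chunks (no full load into RAM)."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def build_manifest(dataset_dir):
    """Map each file's relative path to its digest; store alongside the data."""
    root = Path(dataset_dir)
    return {
        str(p.relative_to(root)): file_sha256(p)
        for p in sorted(root.rglob("*")) if p.is_file()
    }
```

Signing the manifest itself then gives a tamper-evident record of the training set without rehashing petabytes on every check.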

Scale Security With AI

AI is not only a new attack area to defend, it’s also a new and powerful security tool.

Machine learning models can detect subtle changes no human can see in mountains of network traffic. That makes AI an ideal technology to prevent many of the most widely used attacks, like identity theft, phishing, malware and ransomware.
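The core idea is easy to see even in a toy form: a statistical model flags deviations from learned baseline behavior that scroll past a human unnoticed. The sketch below is a generic z-score detector with invented traffic numbers; it is not NVIDIA Morpheus, just an illustration of the principle.

```python
# Toy illustration: statistical baselining flags subtle shifts in traffic.
# Generic z-score detector with invented data, not NVIDIA Morpheus.
import statistics

def flag_anomalies(baseline, observed, z_threshold=3.0):
    """Return indices of observed values more than z_threshold sigma from baseline."""
    mean = statistics.fmean(baseline)
    stdev = statistics.stdev(baseline)
    return [
        i for i, x in enumerate(observed)
        if abs(x - mean) > z_threshold * stdev
    ]

# Bytes-per-minute for one host: steady baseline, then a sudden burst.
baseline = [980, 1010, 995, 1005, 990, 1002, 998, 1015]
observed = [1001, 997, 9800, 1003]
print(flag_anomalies(baseline, observed))  # → [2], the 9800-byte burst
```

Production systems replace the z-score with learned models and per-entity fingerprints, but the detect-by-deviation pattern is the same.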

NVIDIA Morpheus, a cybersecurity framework, can build AI applications that create, read and update digital fingerprints that scan for many kinds of threats. In addition, generative AI and Morpheus can enable new ways to detect spear phishing attempts.

Machine learning is a powerful tool that spans many use cases in security.

Security Loves Clarity

Transparency is a key component of any security strategy. Let customers know about any new AI security policies and practices that have been put in place.

For example, NVIDIA publishes details about the AI models in NGC, its hub for accelerated software. Called model cards, they act like truth-in-lending statements, describing AIs, the data they were trained on and any constraints for their use.

NVIDIA uses an expanded set of fields in its model cards, so users are clear about the history and limits of a neural network before putting it into production. That helps advance security, establish trust and ensure models are robust.
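To make the idea concrete, a model card is essentially structured metadata that travels with the model. The sketch below shows one minimal shape such a record might take; the field names and example values are illustrative, not NVIDIA's actual NGC schema.

```python
# Hedged sketch of what a model card might capture, in the spirit of the
# NGC model cards described above. Fields and values are illustrative.
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    name: str
    intended_use: str
    training_data: str            # provenance of the training set
    limitations: list = field(default_factory=list)

card = ModelCard(
    name="traffic-anomaly-classifier",        # hypothetical model
    intended_use="Flag anomalous flows for analyst review, not auto-block.",
    training_data="Internal NetFlow logs, Jan-Jun 2023, PII removed.",
    limitations=["Not validated on IPv6 traffic", "Retrain after ~90 days"],
)
```

Reviewing a record like this before deployment is what turns "trust the model" into a checkable statement about its history and limits.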

Define Journeys, Not Destinations

These six steps are just the start of a journey. Processes and policies like these need to evolve.

The emerging practice of confidential computing, for instance, is extending security across cloud services where AI models are often trained and run in production.

The industry is already beginning to see basic versions of code scanners for AI models. They’re a sign of what’s to come. Teams need to keep an eye on the horizon for best practices and tools as they arrive.

Along the way, the community needs to share what it learns. An excellent example of that occurred at the recent Generative Red Team Challenge.

In the end, it’s about creating a collective defense. We’re all making this journey to AI security together, one step at a time.

NVIDIA Studio Lineup Adds RTX-Powered Microsoft Surface Laptop Studio 2

Editor’s note: This post is part of our weekly In the NVIDIA Studio series, which celebrates featured artists, offers creative tips and tricks and demonstrates how NVIDIA Studio technology improves creative workflows.

The NVIDIA Studio laptop lineup is expanding with the new Microsoft Surface Laptop Studio 2, powered by GeForce RTX 4060, GeForce RTX 4050 or NVIDIA RTX 2000 Ada Generation Laptop GPUs, providing powerful performance and versatility for creators.

The Microsoft Surface Laptop Studio 2.

Backed by the NVIDIA Studio platform, the Surface Laptop Studio 2, announced today, offers maximum stability with preinstalled Studio Drivers, plus exclusive tools to accelerate professional and creative workflows.

NVIDIA today also launched DLSS 3.5, adding Ray Reconstruction to the suite of AI-powered DLSS technologies. The latest feature puts a powerful new AI neural network at the fingertips of creators on RTX PCs, producing higher-quality, lifelike ray-traced images in real time before generating a full render.

Chaos Vantage is the first creative application to integrate Ray Reconstruction. For gamers, Ray Reconstruction is now available in Cyberpunk 2077 and is slated for the Phantom Liberty expansion on Sept. 26.

Blackmagic Design has adopted NVIDIA TensorRT acceleration in update 18.6 for its popular DaVinci Resolve software for video editing, color correction, visual effects, motion graphics and audio post-production. By integrating TensorRT, the software now runs AI tools like Magic Mask, Speed Warp and Super Scale over 50% faster than before. With this acceleration, AI runs up to 2.3x faster on GeForce RTX and NVIDIA RTX GPUs compared to Macs.

The September Studio Driver is now available for download — providing support for the Surface Laptop Studio 2, new app releases and more.

NVIDIA RTX and GeForce RTX GPUs get NVIDIA Studio Drivers free of charge.

This week’s In the NVIDIA Studio installment features Studio Spotlight artist Gavin O’Donnell, who created a Wild West-inspired piece using a workflow streamlined with AI and RTX acceleration in Blender and Unreal Engine running on a GeForce RTX 3090 GPU.

Create Without Compromise

The Surface Laptop Studio 2, when configured with NVIDIA laptop GPUs, delivers up to 2x the graphics performance compared to the previous generation, and is NVIDIA Studio validated.

The versatile Microsoft Surface Laptop Studio 2.

In addition to NVIDIA graphics, the Surface Laptop Studio 2 comes with 13th Gen Intel Core processors, up to 64GB of RAM and a 2TB SSD. It features a bright, vibrant 14.4-inch PixelSense Flow touchscreen, with true-to-life color and up to 120Hz refresh rate, and now comes with Dolby Vision IQ and HDR to deliver sharper colors.

The system’s unique design adapts to fit any workflow. It instantly transitions from a pro-grade laptop to a perfectly angled display for entertainment to a portable creative canvas for drawing and sketching with the Surface Slim Pen 2.

NVIDIA Studio systems deliver upgraded performance thanks to dedicated ray tracing, AI and video encoding hardware. They also provide AI app acceleration, advanced rendering and Ray Reconstruction with NVIDIA DLSS, plus exclusive software like NVIDIA Omniverse, NVIDIA Broadcast, NVIDIA Canvas and RTX Video Super Resolution — helping creators go from concept to completion faster.

The Surface Laptop Studio 2 will be available beginning October 3.

An AI for RTX

NVIDIA GeForce and RTX GPUs feature powerful local accelerators that are critical for AI performance, supercharging creativity by unlocking AI capabilities on Windows 11 PCs — including the Surface Laptop Studio 2.​

AI-powered Ray Reconstruction.

With the launch of DLSS 3.5 with Ray Reconstruction, an NVIDIA supercomputer-trained AI network replaces hand-tuned denoisers to generate higher-quality pixels between sampled rays.

Ray Reconstruction improves the real-time editing experience by sharpening images and reducing noise — even while panning around a scene. The benefits of Ray Reconstruction shine in the viewport, as rendering during camera movement is notoriously difficult.

Chaos Vantage DLSS 3.5 with Ray Reconstruction technology.

By adding DLSS 3.5 support, Chaos Vantage will enable creators to explore large scenes in a fully ray-traced environment at high frame rates with improved image quality. Ray Reconstruction joins other AI-accelerated features in Vantage, including the NVIDIA OptiX denoiser and Super Resolution.

DLSS 3.5 and AI-powered Ray Reconstruction today join performance-multiplying AI Frame Generation in Cyberpunk 2077 with the new 2.0 update. On Sept. 26, Cyberpunk 2077: Phantom Liberty will launch with full ray tracing and DLSS 3.5.

NVIDIA TensorRT — the high-performance inference optimizer that delivers low latency and high throughput for deep learning inference applications and features — has been added to DaVinci Resolve in version 18.6.

Performance testing conducted by NVIDIA in August 2023. NVIDIA Driver 536.75. Windows 11. Measures time to apply various AI effects in DaVinci Resolve 18.6: Magic Mask, Speed Warp, Super Res, Depth Map.

The added acceleration dramatically increases performance. AI effects now run up to 2.3x faster on a GeForce RTX 4090 GPU than on an Apple M2 Ultra, and 5.4x faster than on an AMD Radeon RX 7900 XTX.

Stay tuned for updates in the weeks ahead for more AI on RTX, including DLSS 3.5 support coming to Omniverse this October.

Welcome to the Wild, Wild West

Gavin O’Donnell — an Ireland-based senior environment concept artist at Disruptive Games — is no stranger to interactive entertainment.

He also does freelance work on the side including promo art, environment design and matte painting for an impressive client list, including Disney, Giant Animation, ImagineFX, Netflix and more.

O’Donnell’s series of Western-themed artwork — the Wild West Project — was directly inspired by the critically acclaimed open-world adventure game Red Dead Redemption 2.

Prospector, lawman or vigilante?

“I really enjoyed how immersive the storyline and the world in general was, so I wanted to create a scene that might exist in that fictional world,” said O’Donnell. It also presented him with an opportunity to practice new workflows in the 3D apps Blender and Unreal Engine.

‘Wild West Project’ brings the American Frontier to life.

Workflows combining NVIDIA RTX technologies with AI were of particular interest — all accelerated by his GeForce RTX 3090 laptop GPU.

In Blender, O’Donnell used Cycles’ RTX-accelerated OptiX ray tracing in the viewport for interactive, photoreal rendering while modeling and animating.

On a beautiful journey.

Meanwhile, in Unreal Engine, O’Donnell used NVIDIA DLSS to increase viewport interactivity, with AI upscaling frames rendered at lower resolution while retaining high-fidelity detail. Combined with RTX-accelerated rendering for high-fidelity visualization of 3D designs, virtual production and game development, this let the artist create better, more detailed artwork, faster and more easily.
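The frame-rate gain comes from rendering fewer pixels and reconstructing the rest. The minimal sketch below uses naive nearest-neighbor replication to show the idea — DLSS instead uses a trained network that also draws on motion vectors and prior frames:

```python
def upscale_nearest(img, factor=2):
    """Naive spatial upscale: replicate each low-res pixel into a
    factor x factor block. A toy illustration only — DLSS replaces
    this with an AI network for far higher image quality."""
    return [[img[y // factor][x // factor]
             for x in range(len(img[0]) * factor)]
            for y in range(len(img) * factor)]

low_res = [[1, 2],
           [3, 4]]  # a frame "rendered" at quarter resolution
high_res = upscale_nearest(low_res)
for row in high_res:
    print(row)
# The renderer shades 4 pixels but displays 16 — the source of the speedup.
```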

O’Donnell credits his success to constant creative evaluation — regularly reassessing his content creation techniques, sources of inspiration and technological knowledge — enabling the highest-quality artwork possible while maintaining efficiency.

As such, O’Donnell recently upgraded to an NVIDIA Studio laptop equipped with a GeForce RTX 4090 GPU, with spectacular results. His already fast Blender rendering speeds improved by 73% — a massive time savings for the artist.

Senior environment concept artist Gavin O’Donnell.

Check out O’Donnell’s portfolio on ArtStation.

And finally, don’t forget to enter the #StartToFinish community challenge! Show us a photo or video of how one of your art projects started — and then one of the final result — using the hashtag #StartToFinish and tagging @NVIDIAStudio for a chance to be featured! Submissions considered through Sept. 30.

Follow NVIDIA Studio on Instagram, Twitter and Facebook. Access tutorials on the Studio YouTube channel and get updates directly in your inbox by subscribing to the Studio newsletter.

Read More

Run AI on Your PC? GeForce Users Are Ahead of the Curve

Run AI on Your PC? GeForce Users Are Ahead of the Curve

Gone are the days when AI was the domain of sprawling data centers or elite researchers.

For GeForce RTX users, AI is now running on your PC. It’s personal, enhancing every keystroke, every frame and every moment.

Gamers are already enjoying the benefits of AI in over 300 RTX games. Meanwhile, content creators have access to over 100 RTX creative and design apps, with AI enhancing everything from video and photo editing to asset generation.

And for GeForce enthusiasts, it’s just the beginning. RTX is the platform for today and the accelerator that will power the AI of tomorrow.

How Did AI and Gaming Converge?

NVIDIA pioneered the integration of AI and gaming with DLSS, a technique that uses AI to automatically generate pixels in video games, increasing frame rates by up to 4x.

And with the recent introduction of DLSS 3.5, NVIDIA has enhanced the visual quality in some of the world’s top titles, setting a new standard for visually richer and more immersive gameplay.

But NVIDIA’s AI integration doesn’t stop there. Tools like RTX Remix empower game modders to remaster classic content using high-quality textures and materials generated by AI.

With NVIDIA ACE for Games, AI-powered avatars come to life on the PC, marking a new era of immersive gaming.

How Are RTX and AI Powering Creators?

Creators use AI to imagine new concepts, automate tedious tasks and create stunning works of art. They rely on RTX because it accelerates top creator applications, including the world’s most popular photo editing, video editing, broadcast and 3D apps.

With over 100 RTX apps now AI-enabled, creators can get more done and deliver incredible results.

The performance metrics are staggering.

RTX GPUs boost AI image generation speeds in tools like Stable Diffusion by 4.5x compared to competing processors. Meanwhile, in 3D rendering, Blender experiences a speed increase of 5.4x.

AI-powered video editing in DaVinci Resolve runs twice as fast, and photo editing tasks in Adobe Photoshop run 3x as fast.

In certain workflows, NVIDIA RTX AI technology delivers up to 10x faster speeds than competing processors.

NVIDIA provides various AI tools, apps and software development kits designed specifically for creators. This includes exclusive offerings like NVIDIA Omniverse, OptiX Denoiser, NVIDIA Canvas, NVIDIA Broadcast and NVIDIA DLSS.

How Is AI Changing Our Digital Experience Beyond Chatbots?

Beyond gaming and content creation, RTX GPUs bring AI to all types of users.

Add Microsoft to the equation and 100 million RTX-powered Windows 11 PCs and workstations are already AI-ready.

The complementary technologies behind the Windows platform and NVIDIA’s dynamic AI hardware and software stack are the driving forces that power hundreds of Windows apps and games.

  • Gamers: RTX-accelerated AI has been adopted in more than 300 games, increasing frame rates and enhancing visual fidelity.
  • Creators: More than 100 AI-enabled creative applications benefit from RTX acceleration — including the top apps for image generation, video editing, photo editing and 3D. AI helps artists work faster, automate tedious tasks and expand the boundaries of creative expression.
  • Video Streamers: RTX Video Super Resolution uses AI to increase the resolution and improve the quality of streamed video, elevating the home video experience.
  • Office Workers and Students: Teleconferencing and remote learning get an RTX boost with NVIDIA Broadcast. AI improves video and audio quality and adds unique effects to make virtual interactions smoother and collaboration more efficient.
  • Developers: Thanks to NVIDIA’s world-leading AI development platform and technology developed by Microsoft and NVIDIA called CUDA on Windows Subsystem for Linux, developers can now do early AI development and training from the comfort of Windows, and easily migrate to servers for large training runs.

What Are the Emerging AI Applications for RTX PCs?

Generative AI enables users to quickly generate new content based on a variety of inputs — text, images, sounds, animation, 3D models or other types of data — bringing easy-to-use AI to more PCs.

Large language models (LLMs) are at the heart of many of these use cases.

Perhaps the best known is ChatGPT, a cloud-based chatbot and one of the fastest-growing applications in history.

Many of these LLMs now run directly on PC, enabling new end-user applications like automatically drafting documents and emails, summarizing web content, extracting insights from spreadsheet data, planning travel, and powering general-purpose AI assistants.

LLMs are some of the most demanding PC workloads, requiring a powerful AI accelerator — like an RTX GPU.
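A back-of-envelope calculation shows why: just holding a model's weights in memory scales with parameter count and numeric precision. The sketch below estimates the footprint of a hypothetical 7-billion-parameter model (weights only — activations and the KV cache add more on top):

```python
def model_memory_gb(params_billion, bits_per_weight):
    """Approximate memory (GiB) needed just to hold a model's weights.
    Excludes activations, KV cache and runtime overhead."""
    bytes_total = params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 2**30

# Hypothetical 7B-parameter model at common precisions:
for bits, label in [(16, "FP16"), (8, "INT8"), (4, "INT4")]:
    print(f"{label}: ~{model_memory_gb(7, bits):.1f} GB")
```

Quantizing to 4-bit weights is what lets models of this size fit into the memory of a consumer GPU, which is why local LLM inference pairs quantization with a powerful accelerator.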

What Powers the AI Revolution on Our Desktops (and Beyond)?

What’s fueling the PC AI revolution?

Three pillars: lightning-fast graphics processing from GPUs, AI capabilities integral to GeForce and the omnipresent cloud.

Gamers already know all about the parallel processing power of GPUs. But what role did the GPU play in enabling AI in the cloud?

NVIDIA GPUs have transformed cloud services. These advanced systems power everything from voice recognition to autonomous factory operations.

In 2016, NVIDIA hand-delivered to OpenAI the first NVIDIA DGX AI supercomputer — the engine behind the LLM breakthrough powering ChatGPT.

NVIDIA DGX supercomputers, packed with GPUs and used initially as an AI research instrument, are now running 24/7 at businesses worldwide to refine data and process AI. Half of all Fortune 100 companies have installed DGX AI supercomputers.

The cloud, in turn, provides more than just vast quantities of training data for advanced AI models running on these machines.

Why Choose Desktop AI?

But why run AI on your desktop when the cloud seems limitless?

GPU-equipped desktops — where the AI revolution began — are still where the action is.

  • Availability: Whether a gamer or a researcher, everyone needs tools — from games to sophisticated AI models used by wildlife researchers in the field — that can function even when offline.
  • Speed: Some applications need instantaneous results. Cloud latency doesn’t always cut it.
  • Data size: Uploading and downloading large datasets from the cloud can be inefficient and cumbersome.
  • Privacy: Whether you’re a Fortune 500 company or just editing family photos and videos, we all have data we want to keep close to home.

RTX GPUs are based on the same architecture that fuels NVIDIA’s cloud performance. They blend the benefits of running AI locally with access to tools and the performance only NVIDIA can deliver.

NPUs, often called inference accelerators, are now finding their way into modern CPUs, highlighting the growing understanding of AI’s critical role in every application.

While NPUs are designed to offload light AI tasks, NVIDIA’s GPUs stand unparalleled for demanding AI models, delivering raw performance gains of 20x to 100x.

What’s Next for AI in Our Everyday Lives?

AI isn’t just a trend — it will impact many aspects of our daily lives.

AI functionality will expand as research advances and user expectations evolve. Keeping up will require GPUs — and a rich software stack built on top of them — that are up to the challenge.

NVIDIA is at the forefront of this transformative era, offering end-to-end optimized development solutions.

NVIDIA provides developers with tools to add more AI features to PCs, enhancing value for users, all powered by RTX.

From gaming innovations with RTX Remix to the NVIDIA NeMo LLM language model for assisting coders, the AI landscape on the PC is rich and expanding.

Whether it’s stunning new gaming content, AI avatars, incredible tools for creators or the next generation of digital assistants, the promise of AI-powered experiences will continuously redefine the standard of personal computing.

Learn more about GeForce’s AI capabilities.

Read More

Into the Omniverse: Blender 4.0 Alpha Release Sets Stage for New Era of OpenUSD Artistry

Into the Omniverse: Blender 4.0 Alpha Release Sets Stage for New Era of OpenUSD Artistry

Editor’s note: This post is part of Into the Omniverse, a series focused on how artists, developers and enterprises can transform their workflows using the latest advances in OpenUSD and NVIDIA Omniverse.

For seasoned 3D artists and budding digital creation enthusiasts alike, an alpha version of the popular 3D software Blender is elevating creative journeys.

With the update’s features for intricate shader network creation and enhanced asset-export capabilities, the development community using Blender and the Universal Scene Description framework, aka OpenUSD, is helping to evolve the 3D landscape.

NVIDIA engineers play a key role in enhancing Blender’s OpenUSD capabilities, which also improves its interoperability with NVIDIA Omniverse, a development platform for connecting and building OpenUSD-based tools and applications.

A Universal Upgrade for Blender Workflows

With Blender 4.0 Alpha, 3D creators across industries and enterprises can access optimized OpenUSD workflows for various use cases.

For example, Emily Boehmer, a design intern at BMW Group’s Technology Office in Munich, is using the combined power of Omniverse, Blender and Adobe Substance 3D Painter to create realistic, OpenUSD-based assets to train computer vision AI models.

Boehmer worked with her team to create assets for use with SORDI.ai, an AI dataset published by BMW Group that contains over 800,000 photorealistic images.

A clip of an industrial crate virtually “aging.”

USD helped optimize Boehmer’s workflow. “It’s great to see USD support for both Blender and Substance 3D Painter,” she said. “When I create 3D assets using USD, I can be confident that they’ll look and behave as I expect them to in the scenes that they’ll be placed in because I can add physical properties to them.”

Australian animator Marko Matosevic is also harnessing the combined power of Blender, Omniverse and USD in his 3D workflows.

Matosevic began creating tutorials for his YouTube channel, Markom3D, to help artists of all levels. He now shares his vast 3D knowledge with over 77,000 subscribers.

Most recently, Matosevic created a 3D spaceship in Blender that he later enhanced in Omniverse through virtual reality.

Individual creators aren’t the only ones seeing success with Blender and USD. Multimedia entertainment studio Moment Factory creates OpenUSD-based digital twins to simulate its immersive events — including live performances, multimedia shows and interactive installations — in Omniverse before deploying them in the real world.

Moment Factory’s interactive installation at InfoComm 2023.

Team members can work in the digital twin at the same time, including designers using Blender to create and render eye-catching beauty shots to share their creative vision with customers.

See how Moment Factory uses Omniverse, Blender and USD to bring their immersive events to life in their recent livestream.

These 3D workflow enhancements are available to all. Blender users and USD creators, including Boehmer, showcased their unique 3D pipelines on this recent Omniverse community livestream:

New Features Deliver Elevated 3D Experience

The latest USD improvements in Blender are the result of collaboration among many contributors, including AMD, Apple, Unity and NVIDIA, enabled by the Blender Foundation.

For example, hair object support — which improves USD import and export capabilities for digital hair — was added by a Unity software engineer. And a new Python IO callback system — which lets technical artists use Python to access USD application programming interfaces — was developed by a software engineer at NVIDIA, with support from others at Apple and AMD.

NVIDIA engineers are continuing to work on other USD contributions to include in future Blender updates.

Coming soon, the Blender 4.0 Alpha 201.0 Omniverse Connector will offer new features for USD and Omniverse users, including:

  • Universal Material Mapper 2 add-on: This allows for more complex shader networks, or the blending of multiple textures and materials, to be round-tripped between Omniverse apps and Blender through USD.
  • Improved UsdPreviewSurface support and USDZ import/export capabilities: This enables creators to export 3D assets for viewing in AR and VR applications.
  • Generic attribute support: This allows geometry artists to generate vertex colors — red, green or blue values — or other per-vertex (3D point) values and import/export them between Blender and other 3D applications.

Learn more about the Blender updates by watching this tutorial:

Get Plugged Into the Omniverse 

Learn from industry experts how OpenUSD is enabling custom 3D pipelines, easing 3D tool development and delivering interoperability between 3D applications in sessions from SIGGRAPH 2023, now available on demand.

Anyone can build their own Omniverse extension or Connector to enhance their 3D workflows and tools. Explore the Omniverse ecosystem’s growing catalog of connections, extensions, foundation applications and third-party tools.

Share your Blender and Omniverse work as part of the latest community challenge, #StartToFinish. Use the hashtag to submit a screenshot of a project featuring both its beginning and ending stages for a chance to be featured on the @NVIDIAStudio and @NVIDIAOmniverse social channels.

To learn more about how OpenUSD can improve 3D workflows, check out a new video series about the framework. For more resources on OpenUSD, explore the Alliance for OpenUSD forum or visit the AOUSD website.

Get started with NVIDIA Omniverse by downloading the standard license for free, or learn how Omniverse Enterprise can connect your team.

Developers can check out these Omniverse resources to begin building on the platform. 

Stay up to date on the platform by subscribing to the newsletter and following NVIDIA Omniverse on Instagram, LinkedIn, Medium, Threads and Twitter.

For more, check out our forums, Discord server, Twitch and YouTube channels.

Featured image courtesy of Alex Trevino.

Read More

NVIDIA CEO Jensen Huang to Headline AI Summit in Tel Aviv

NVIDIA CEO Jensen Huang to Headline AI Summit in Tel Aviv

NVIDIA founder and CEO Jensen Huang will highlight the newest in generative AI and cloud computing at the NVIDIA AI Summit in Tel Aviv from Oct. 15-16.

The two-day summit is set to attract more than 2,500 developers, researchers and decision-makers from across one of the world’s most vibrant technology hubs.

With over 6,000 startups, Israel consistently ranks among the world’s top countries for VC investments per capita. The 2023 Global Startup Ecosystem report places Tel Aviv among the top 5 cities globally for startups.

The summit features more than 60 live sessions led by experts from NVIDIA and the region’s tech leaders, who will dive deep into topics like accelerated computing, robotics, cybersecurity and climate science.

Attendees will be able to network and gain insights from some of NVIDIA’s foremost experts, including Kimberly Powell, vice president and general manager of healthcare; Deepu Talla, vice president and general manager of embedded and edge computing; Gilad Shainer, senior vice president of networking and HPC; and Gal Chechik, senior director and head of the Israel AI Research Center.

Key events and features of the summit include:

  • Livestream: The keynote by Huang will take place Monday, Oct. 16, at 10 a.m. Israel time (11 p.m. Pacific) and will be available for livestreaming, with on-demand access to follow.
  • Ecosystem exhibition: An exhibition space at the summit will showcase NVIDIA’s tech demos, paired with contributions from partners and emerging startups from the NVIDIA Inception program.
  • Deep dive into AI: The first day is dedicated to intensive learning sessions hosted by the NVIDIA Deep Learning Institute. Workshops encompass topics like “Fundamentals of Deep Learning” and “Building AI-Based Cybersecurity Pipelines,” among a range of other topics. Edge AI & Robotics Developer Day activities will explore innovations in AI and the NVIDIA Jetson Orin platform.
  • Multitrack sessions: The second day will include multiple tracks, covering areas such as generative AI and LLMs, AI in healthcare, networking and developer tools and NVIDIA Omniverse.

Learn more at https://www.nvidia.com/en-il/ai-summit-israel/.

Featured image credit: Gady Munz via the PikiWiki – Israel free image collection project

Read More