NVIDIA Expands Isaac Software Access and Jetson Platform Availability, Accelerating Robotics From Cloud to Edge

NVIDIA announced today at GTC that Omniverse Cloud will be hosted on Microsoft Azure, increasing access to Isaac Sim, the company’s platform for developing and managing AI-based robots.

The company also said that a full lineup of Jetson Orin modules is now available, offering a performance leap for edge AI and robotics applications.

“The world’s largest industries make physical things, but they want to build them digitally,” said NVIDIA founder and CEO Jensen Huang during the GTC keynote. “Omniverse is a platform for industrial digitalization that bridges digital and physical.”

Isaac Sim on Omniverse Enterprise for Virtual Simulations

Building robots in the real world requires creating datasets from scratch, a process that is time-consuming and expensive and slows deployments.

That’s why developers are turning to synthetic data generation (SDG), pretrained AI models, transfer learning and robotics simulation to drive down costs and accelerate deployment timelines.

The Omniverse Cloud platform-as-a-service, which runs on NVIDIA OVX servers, puts advanced capabilities into the hands of Azure developers everywhere. It enables enterprises to scale robotics simulation workloads such as SDG, and provides continuous integration and continuous delivery so DevOps teams can collaborate on code changes in a shared repository while working with Isaac Sim.

Isaac Sim is a robotics simulation application and SDG tool that delivers photorealistic, physically accurate virtual environments. Isaac Sim, powered by the NVIDIA Omniverse platform, enables global teams to remotely collaborate to build, train, simulate, validate and deploy robots.

Making Isaac Sim accessible in the cloud allows teams to work together more effectively with access to the latest robotics tools and software development kits. With Azure, Omniverse Cloud gives enterprises another option alongside the existing cloud-based ways to use Isaac Sim: self-managed containers, virtual workstations and fully managed services such as AWS RoboMaker.

And with access to Omniverse Replicator, an SDG engine in Isaac Sim, engineers can build production-quality synthetic datasets to train robust deep learning perception models.
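The Replicator workflow itself runs inside Omniverse, but the core SDG idea — randomize scene parameters under known ground truth and emit paired samples and labels — can be sketched in plain Python. All names below are illustrative stand-ins, not the Replicator API:

```python
import random

def generate_sample(rng):
    """Randomize scene parameters and return (params, label).

    In a real SDG pipeline, a renderer produces the image and the
    simulator emits pixel-perfect annotations (boxes, masks, depth);
    here the parameter dict stands in for the rendered frame.
    """
    params = {
        "light_intensity": rng.uniform(500, 5000),   # domain randomization
        "camera_height_m": rng.uniform(1.0, 3.0),
        "object_class": rng.choice(["pallet", "box", "cart"]),
    }
    label = {"class": params["object_class"]}        # known ground truth
    return params, label

def generate_dataset(n, seed=0):
    """Produce n randomized, automatically labeled training samples."""
    rng = random.Random(seed)
    return [generate_sample(rng) for _ in range(n)]

dataset = generate_dataset(1000)
print(len(dataset))  # 1000 labeled samples, no manual annotation
```

Because the labels come from the simulator's own scene description, they are free and exact — the property that makes SDG cheaper than hand-annotating real-world data.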

Amazon uses Omniverse to automate, optimize and plan its autonomous warehouses with digital twin simulations before deployment into the real world. With Isaac Sim, Amazon Robotics is also improving the capabilities of Proteus, its latest autonomous mobile robot (AMR). This helps the online retail giant fulfill thousands of orders in a cost- and time-efficient manner.

Working with automation company idealworks, BMW Group uses Isaac Sim in Omniverse to generate synthetic data and run scenarios for testing and training AMRs and factory robots.

NVIDIA is developing across the AI tools spectrum — from cloud-based simulation with Isaac Sim to edge deployment with the Jetson platform — accelerating robotics adoption across industries.

Jetson Orin for Efficient, High-Performance Edge AI and Robotics 

NVIDIA Jetson Orin-based modules are now available in production to support a complete range of edge AI and robotics applications. This includes the Jetson Orin Nano — which provides up to 40 trillion operations per second (TOPS) of AI performance in the smallest Jetson module — up to the Jetson AGX Orin, delivering 275 TOPS for advanced autonomous machines.

The new Jetson Orin Nano Developer Kit delivers 80x the performance when compared with the previous-generation Jetson Nano, enabling developers to run advanced transformer and robotics models. And with 50x the performance per watt, developers getting started with the Jetson Orin Nano modules can build and deploy power-efficient, entry-level AI-powered robots, smart drones, intelligent vision systems and more.
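Taking the two figures above at face value, simple arithmetic shows what they jointly imply about power draw (a back-of-the-envelope check, not an official spec):

```python
# 80x the performance at 50x the performance per watt implies the new
# module draws 80/50 = 1.6x the power of the original Jetson Nano.
perf_gain = 80.0          # Orin Nano vs. prior-generation Jetson Nano
perf_per_watt_gain = 50.0

power_ratio = perf_gain / perf_per_watt_gain
print(power_ratio)  # 1.6
```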

Application-specific frameworks like NVIDIA Isaac ROS and DeepStream, which run on the Jetson platform, are closely integrated with cloud-based frameworks like Isaac Sim on Omniverse and NVIDIA Metropolis. And using the latest NVIDIA TAO Toolkit for fine-tuning pretrained AI models from the NVIDIA NGC catalog reduces time to deployment for developers.

More than 1 million developers and over 6,000 customers have chosen the NVIDIA Jetson platform, including Amazon Web Services, Canon, Cisco, Hyundai Robotics, JD.com, John Deere, Komatsu, Medtronic, Meituan, Microsoft Azure, Teradyne and TK Elevator.

Companies adopting the new Orin-based modules include Hyundai Doosan Infracore, Robotis, Seyeon Tech, Skydio, Trimble, Verdant and Zipline.

More than 70 Jetson ecosystem partners are offering Orin-based solutions, with a wide range of support from hardware, AI software and application design services to sensors, connectivity and developer tools.

The full lineup of Jetson Orin-based production modules is now available. The Jetson Orin Nano Developer Kit will start shipping in April.

CTA: Learn more about NVIDIA Isaac Sim, Jetson Orin, Omniverse Enterprise and Metropolis.

AI Speeds Insurance Claims Estimates for Better Policyholder Experiences

CCC Intelligent Solutions (CCC) has become the first company in the auto insurance industry to deliver an AI-powered repair estimating solution, called CCC Estimate – STP, short for straight-through processing.

The Chicago-based auto-claims technology powerhouse uses AI, insurer-driven rules and CCC’s vast ecosystem to deliver repair estimates in seconds, instead of days. It’s a technological feat considering there are thousands of vehicle makes and models on the road, and countless repair permutations.

The company’s commitment to AI spans many years, with its first AI solutions hitting the market more than five years ago. Today, it’s working to bring AI and intelligent experiences to key facets of claims and mobility for its 30,000 customers, who process more than 16 million claims annually using CCC solutions.

“Our data scientists play a crucial role in creating new solutions, and the ability to build models, experiment and easily integrate the model into our AI workflows is key,” said Reza Rooholamini, chief scientific officer at CCC.

CCC has four decades of expertise in automotive claims and collects millions of unstructured and structured automotive-claim data points every year. The combination of industry experience and raw data, however, is just the starting point for CCC’s efforts. The company runs a 100% cloud production environment, providing customers with a flexible platform for continuous innovation.

As a market leader, CCC regularly reports AI adoption among its customers to track progress. According to its 2023 AI Adoption report, the company reported that more than 14 million unique claims have been processed using CCC’s computer vision AI through 2022. The company also saw a 60% year-over-year increase in the application of advanced AI for claims processing.

And AI isn’t just being used to process more claims, it’s informing more decisions across the entire claims management experience. In fact, the number of claims processed with four or more of CCC’s AI applications has more than doubled, year-over-year.

CCC has built an end-to-end hybrid-cloud AI development and training pipeline to support its continuous innovation. This infrastructure uses over 150 NVIDIA A100 Tensor Core GPUs, including NVIDIA DGX systems on premises and additional resources within NVIDIA DGX Cloud.

The CCC development teams are using DGX Cloud to supplement on-prem capacity, support supercomputing demand spikes and accelerate AI model development overall.

“The AI pipeline we’ve built enables us to unleash all kinds of innovations,” said Neda Hantehzadeh, director of data science at CCC.

With 25-30% of its data scientists’ and engineering teams’ time dedicated to experimentation, and massive datasets growing each day, CCC needed a more scalable, multi-platform, hybrid multi-cloud training environment.

Using its AI pipeline, CCC launched CCC Estimate – STP, which can deliver a detailed line-level estimate of the collision repair cost based on insurer rules in seconds using AI and just a few pictures of vehicle damage taken from a smartphone. Traditional methods can take several days.

This saves time for adjusters, freeing them up for more complex work. This digitalized estimation process helps elevate the customer experience as well as lower processing costs and is currently being used by leading insurance companies across the U.S.

But the results are broader. Using the NVIDIA Base Command Platform integrated with their development pipeline for training job orchestration and data management, the CCC team realizes improved productivity. Data scientists can run experiments 2x faster, which can mean more learnings for more innovation and solution development.

“We run some experiments on premises on NVIDIA DGX systems, but we may have spikes where we want to add, for example, 10 million more data points and do another run,” Hantehzadeh said. “If we need additional capacity, we can switch to DGX Cloud. Base Command Platform makes this process seamless.”

CCC plans to continue taking its investment to the leading edge of AI development, injecting AI and STP into different channels and products across the property and casualty insurance economy.

Learn more about NVIDIA DGX Cloud and NVIDIA Base Command Platform.

Moving Pictures: NVIDIA, Getty Images Collaborate on Generative AI

As a sports commentator for a professional lacrosse team, Grant Farhall knows the value of having the right teammates.

As the chief product officer for Getty Images, a global visual-content creator and marketplace, he believes the collaboration between his company and NVIDIA is an excellent pairing for taking generative AI to the next level.

The companies aim to develop two generative AI models using NVIDIA Picasso, part of the new NVIDIA AI Foundations cloud services. Users could employ the models to create a custom image or video in seconds, simply by typing in a concept.

“With our high quality and often unique imagery and videos, this collaboration will give our customers the ability to create a greater variety of visuals than ever before, helping creatives and non-creatives alike fuel visual storytelling,” Farhall said.

Getty Images is a unique partner, not only for its stunning images and video, but also its rich metadata, with appropriate rights. Its creative team and research bring a wealth of expertise that can deliver impactful outputs.

For artists, generative AI adds a new tool that expands their canvas. For content creators, it’s an opportunity to create a custom visual tailored to a brand or business they’re building.

“More often than not, it’s a visual that cuts through the noise of a busy world to capture your attention, and being able to stand out from the crowd is crucial for businesses of all shapes and sizes,” Farhall said.

Building Responsible AI

But, as in lacrosse, you need to play by the rules.

The models will be trained on Getty Images’ fully licensed content, and revenue generated from the models will provide royalties to content creators.

“Both companies want to develop these tools in a responsible way that returns benefits to creators and doesn’t pass risks on to customers, and this collaboration is testament to the fact that’s possible,” he said.

A Time-Tested Relationship

It’s not the first inning for this collaboration.

“We’ve been fostering and growing a relationship for some time — NVIDIA brings the tech expertise and talent, and we bring the high quality and unique content and marketplace,” said Farhall.

The technology, values and connections are catalysts for experiences that wow creators and users. It’s a feeling Farhall shares, sitting in front of his mic on a Saturday night.

“There’s an adrenaline rush when the live action of a game becomes your singular focus and you’re just in the moment,” he said.

And by training a custom model with NVIDIA Picasso, Getty Images and NVIDIA aim to help storytellers everywhere create more moments that perfectly capture their audiences’ attention.

To learn more about what NVIDIA is doing in generative AI and beyond, watch company founder and CEO Jensen Huang’s GTC keynote.

Image at top courtesy Roberto Moiola/Sysaworld/Getty Images.

Mind the Gap: Large Language Models Get Smarter With Enterprise Data

Large language models available today are incredibly knowledgeable, but act like time capsules — the information they capture is limited to the data available when they were first trained. If trained a year ago, for example, an LLM powering an enterprise’s AI chatbot won’t know about the latest products and services at the business.

With the NVIDIA NeMo service, part of the newly announced NVIDIA AI Foundations family of cloud services, enterprises can close the gap by augmenting their LLMs with proprietary data, enabling them to frequently update a model’s knowledge base without having to further train it — or start from scratch.

This new functionality in the NeMo service enables large language models to retrieve accurate information from proprietary data sources and generate conversational, human-like answers to user queries. With this capability, enterprises can use NeMo to customize large language models with regularly updated, domain-specific knowledge for their applications.

This can help enterprises keep up with a constantly changing landscape across inventory, services and more, unlocking capabilities such as highly accurate AI chatbots, enterprise search engines and market intelligence tools.

NeMo includes the ability to cite sources for the language model’s responses, increasing user trust in the output. Developers using NeMo can also set up guardrails to define the AI’s area of expertise, providing better control over the generated responses.
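The retrieval step behind this kind of augmentation can be illustrated with a toy sketch: embed the query and the enterprise documents, return the closest match, and keep its source id so the answer can cite it. The bag-of-words "embedding" and the document names here are purely illustrative, not the NeMo service API:

```python
import math
from collections import Counter

def embed(text):
    """Toy bag-of-words 'embedding' — a stand-in for a real encoder."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs):
    """Return the most relevant document, with its source id for citation."""
    q = embed(query)
    return max(docs, key=lambda d: cosine(q, embed(d["text"])))

docs = [
    {"id": "faq-14", "text": "The Model X widget ships in April 2023"},
    {"id": "faq-02", "text": "Refunds are processed within five business days"},
]
hit = retrieve("when does the widget ship", docs)
print(hit["id"])  # faq-14 — the LLM would generate its answer from this
                  # passage and cite faq-14 as the source
```

Because the documents live outside the model, updating the knowledge base is just editing `docs` — no retraining involved, which is the point of the approach.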

Quantiphi — an AI-first digital engineering solutions and platforms company and one of NVIDIA’s service delivery partners — is working with NeMo to build a modular generative AI solution called baioniq that will help enterprises build customized LLMs to boost worker productivity. Its developer teams are creating tools that let users search up-to-date information across unstructured text, images and tables in seconds.

Bringing Dark Data Into the Light

Analysts estimate that around two-thirds of enterprise data is untapped. This so-called dark data is unused partly because it’s difficult to glean meaningful insights from vast troves of information. Now, with NeMo, businesses can retrieve insights from this data using natural language queries.

NeMo can help enterprises build models that can learn from and react to an evolving knowledge base — independent of the dataset that the model was originally trained on. Rather than needing to retrain an LLM to account for new information, NeMo can tap enterprise data sources for up-to-date details. Additional information can be added to expand the model’s knowledge base without modifying its core capabilities of language processing and text generation.

Enterprises can also use NeMo to build guardrails so that generative AI applications don’t provide opinions on topics outside their defined area of expertise.

Enabling a New Wave of Generative AI Applications for Enterprises

By customizing an LLM with business data, enterprises can make their AI applications agile and responsive to new developments. 

  • Chatbots: Many enterprises already use AI chatbots to power basic customer interactions on their websites. With NeMo, companies could build virtual subject-matter experts specific to their domains.
  • Customer service: Companies could update NeMo models with details about their latest products, helping live service representatives more easily answer customer questions with precise, up-to-date information.
  • Enterprise search: Businesses have a wealth of knowledge across the organization, including technical documentation, company policies and IT support articles. Employees could query a NeMo-powered internal search engine to retrieve information faster and more easily.
  • Market intelligence: The financial industry collects insights about global markets, public companies and economic trends. By connecting an LLM to a regularly updated database, investors and other experts could quickly identify useful details from a large set of information, such as regulatory documents, recordings of earnings calls or financial statements.

Enterprises interested in adding generative AI capabilities to their applications can apply for early access to the NeMo service.

Watch NVIDIA founder and CEO Jensen Huang discuss NVIDIA AI Foundations in the keynote address at NVIDIA GTC, running online through Thursday, March 23.

Green Light: NVIDIA Grace CPU Paves Fast Lane to Energy-Efficient Computing for Every Data Center

The results are in, and they point to a new era in energy-efficient computing.

In tests of real workloads, the NVIDIA Grace CPU Superchip scored 2x performance gains over x86 processors at the same power envelope across major data center CPU applications. That opens up a whole new set of opportunities.

It means data centers can handle twice as much peak traffic. They can slash their power bills by as much as half. They can pack more punch into the confined spaces at the edge of their networks — or any combination of the above.

Energy Efficiency, a Data Center Priority

Data center managers need these options to thrive in today’s energy-efficient era.

Moore’s law is effectively dead. Physics no longer lets engineers pack more transistors in the same space at the same power.

That’s why new x86 CPUs typically offer gains over prior generations of less than 30%. It’s also why a growing number of data centers are power capped.

With the added threat of global warming, data centers don’t have the luxury of expanding their power, but they still need to respond to the growing demands for computing.

Wanted: Same Power, More Performance

Compute demand is growing 10% a year in the U.S. and will double between 2022 and 2030, according to a McKinsey study.

“Pressure to make data centers sustainable is therefore high, and some regulators and governments are imposing sustainability standards on newly built data centers,” it said.

With the end of Moore’s law, the data center’s progress in computing efficiency has stalled, according to a survey that McKinsey cited (see chart below).

Power efficiency gains have stalled in data centers, McKinsey said.

In today’s environment, the 2x gains NVIDIA Grace offers are the eye-popping equivalent of a multi-generational leap. It meets the requirements of today’s data center executives.

Zac Smith — the head of edge infrastructure at Equinix, a global service provider that manages more than 240 data centers — articulated these needs in an article about energy-efficient computing.

“The performance you get for the carbon impact you have is what we need to drive toward,” he said.

“We have 10,000 customers counting on us for help with this journey. They demand more data and more intelligence, often with AI, and they want it in a sustainable way,” he added.

A Trio of CPU Innovations

The Grace CPU delivers that efficient performance thanks to three innovations.

It uses an ultra-fast fabric to connect 72 Arm Neoverse V2 cores in a single die, delivering 3.2 terabytes per second of fabric bisection bandwidth, a standard measure of throughput. Then it connects two of those dies in a superchip package with the NVIDIA NVLink-C2C interconnect, delivering 900 GB/s of bandwidth.

Finally, it’s the first data center CPU to use server-class LPDDR5X memory. That provides up to 50% more memory bandwidth at similar cost but one-eighth the power of typical server memory. And its compact size enables 2x the density of typical card-based memory designs.
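Taken at face value, those two memory figures compound into a large efficiency gain. A back-of-the-envelope calculation (normalized baseline numbers assumed, not official measurements):

```python
# 50% more bandwidth at one-eighth the power implies roughly
# 12x the memory bandwidth per watt versus typical server memory.
baseline_bw, baseline_power = 1.0, 1.0   # normalized conventional server DRAM
grace_bw = baseline_bw * 1.5             # "up to 50% more memory bandwidth"
grace_power = baseline_power / 8         # "one-eighth the power"

bw_per_watt_gain = (grace_bw / grace_power) / (baseline_bw / baseline_power)
print(bw_per_watt_gain)  # 12.0
```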

Compared to current x86 CPUs, NVIDIA Grace is a simpler design that offers more bandwidth and uses less power.

The First Results Are In

NVIDIA engineers are running real data center workloads on Grace today.

They found that compared to the leading x86 CPUs in data centers using the same power footprint, Grace is:

  • 2.3x faster for microservices
  • 2x faster for memory-intensive data processing
  • 1.9x faster for computational fluid dynamics, used in many technical computing apps

Data centers usually have to wait two or more CPU generations to get these benefits, summarized in the chart below.

Net gains (in light green) are the product of server-to-server advances (in dark green) and additional Grace servers that fit in the same x86 power envelope (middle bar) thanks to the energy efficiency of Grace.

Even before these results on working CPUs, users responded to the innovations in Grace.

The Los Alamos National Laboratory announced in May it will use Grace in Venado, a 10 exaflop AI supercomputer that will advance the lab’s work in areas such as materials science and renewable energy. Meanwhile, data centers in Europe and Asia are evaluating Grace for their workloads.

NVIDIA Grace is sampling now with production in the second half of the year. ASUS, Atos, GIGABYTE, Hewlett Packard Enterprise, QCT, Supermicro, Wistron and ZT Systems are building servers that use it.

Go Deep on Sustainable Computing

To dive into the details, read this whitepaper on the Grace architecture.

Learn more about sustainable computing from this session at NVIDIA GTC (March 20-23, free with registration): Three Strategies to Maximize Your Organization’s Sustainability and Success in an End-to-End AI World.

Read a whitepaper about the NVIDIA BlueField DPU to find out how to build energy-efficient networks.

And watch NVIDIA founder and CEO Jensen Huang’s GTC keynote to get the big picture.

NVIDIA Announces Microsoft, Tencent, Baidu Adopting CV-CUDA for Computer Vision AI

Microsoft, Tencent and Baidu are adopting NVIDIA CV-CUDA for computer vision AI.

NVIDIA CEO Jensen Huang highlighted work in content understanding, visual search and deep learning Tuesday as he announced the beta release for NVIDIA’s CV-CUDA — an open-source, GPU-accelerated library for computer vision at cloud scale.

“Eighty percent of internet traffic is video; user-generated video content is driving significant growth and consuming massive amounts of power,” said Huang in his keynote at NVIDIA’s GTC technology conference. “We should accelerate all video processing and reclaim the power.”

CV-CUDA promises to help companies across the world build and scale end-to-end, AI-based computer vision and image processing pipelines on GPUs.

Optimizing Internet-Scale Visual Computing With AI

The majority of internet traffic is video and image data, driving incredible scale in applications such as content creation, visual search and recommendation, and mapping.

These applications use a specialized, recurring set of computer vision and image-processing algorithms to handle image and video data before and after it passes through neural networks.

Microsoft Bing’s Visual Search Engine uses AI Computer Vision to search for images (dog food, for example) within images on the Internet.

While neural networks are normally GPU accelerated, the computer vision and image processing algorithms that support them are often CPU bottlenecks in today’s AI applications.

CV-CUDA helps process 4x as many streams on a single GPU by transitioning the pre- and post-processing steps from CPU to GPU. In effect, it processes the same workloads at a quarter of the cloud-computing cost.
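The economics follow directly from the throughput claim: if one GPU handles 4x the streams, the cost attributed to each stream drops to a quarter. A tiny cost model makes this concrete (the stream counts and hourly price are hypothetical, used only to show the arithmetic):

```python
def cost_per_stream(streams_per_gpu, gpu_hourly_cost):
    """Cloud cost attributed to each video stream, per hour."""
    return gpu_hourly_cost / streams_per_gpu

gpu_cost = 3.00                              # hypothetical $/hour for one GPU
cpu_bound = cost_per_stream(10, gpu_cost)    # pre/post-processing on CPU
gpu_accel = cost_per_stream(40, gpu_cost)    # CV-CUDA: 4x the streams per GPU

print(cpu_bound / gpu_accel)  # 4.0 — i.e., a quarter of the cost per stream
```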

The CV-CUDA library provides developers more than 30 high-performance computer vision algorithms with native Python APIs and zero-copy integration with the PyTorch, TensorFlow2, ONNX and TensorRT machine learning frameworks.

The result is higher throughput, reduced computing cost and a smaller carbon footprint for cloud AI businesses.

Global Adoption for Computer Vision AI

Adoption by industry leaders around the globe highlights the benefits and versatility of CV-CUDA for a growing number of large-scale visual applications. Companies with massive image processing workloads can save tens to hundreds of millions of dollars.

Microsoft is working to integrate CV-CUDA into Bing Visual Search, which lets users search the web using an image instead of text to find similar images, products and web pages.

In 2019, Microsoft shared at GTC how they’re using NVIDIA technologies to help bring speech recognition, intelligent answers, text to speech technology and object detection together seamlessly and in real time.

Tencent has deployed CV-CUDA to accelerate its ad creation and content understanding pipelines, which process more than 300,000 videos per day.

The Shenzhen-based multimedia conglomerate has achieved a 20% reduction in energy and cost for image processing compared with its previous GPU-optimized pipelines.

And Beijing-based search giant Baidu is integrating CV-CUDA into FastDeploy, one of the open-source deployment toolkits of the PaddlePaddle Deep Learning Framework, which enables seamless computer vision acceleration to developers in the open-source community.

From Content Creation to Automotive Use Cases

Applications for CV-CUDA are growing. More than 500 companies have reached out with over 100 use cases in just the first few months of the alpha release.

In content creation and e-commerce, images use pre- and post-processing operators to help recommender engines recognize, locate and curate content.

In mapping, video ingested from mapping survey vehicles requires preprocessing and post-processing operators to train neural networks in the cloud to identify infrastructure and road features.

In infrastructure applications for self-driving simulation and validation software, CV-CUDA enables GPU acceleration for algorithms that already run in the vehicle, such as color conversion, distortion correction, convolution and bilateral filtering.

Looking to the future, generative AI is transforming the world of video content creation and curation, allowing creators to reach a global audience.

New York-based startup Runway has integrated CV-CUDA, alleviating a critical bottleneck in preprocessing high-resolution videos in their video object segmentation model.

Implementing CV-CUDA led to a 3.6x speedup, enabling Runway to optimize real-time, click-to-content responses across its suite of creation tools.

“For creators, every second it takes to bring an idea to life counts,” said Cristóbal Valenzuela, co-founder and CEO of Runway. “The difference CV-CUDA makes is incredibly meaningful for the millions of creators using our tools.”

To access CV-CUDA, visit the CV-CUDA GitHub.

Or learn more by checking out the GTC sessions featuring CV-CUDA. Registration is free.

NVIDIA CEO to Reveal What’s Next for AI at GTC

The secret’s out. Thanks to ChatGPT, everyone knows about the power of modern AI.

To find out what’s coming next, tune in to NVIDIA founder and CEO Jensen Huang’s keynote address at NVIDIA GTC on Tuesday, March 21, at 8 a.m. Pacific.

Huang will share his vision for the future of AI and how NVIDIA is accelerating it with breakthrough technologies and solutions. There couldn’t be a better time to get ready for what’s to come.

NVIDIA is a pioneer and leader in AI thanks to its powerful graphics processing units that have enabled new computing models like accelerated computing.

NVIDIA GPUs sparked the modern AI revolution by making deep neural networks faster and more efficient.

Today, NVIDIA GPUs power AI applications in every industry, from computer vision to natural language processing, from robotics to healthcare, and from gaming to chatbots.

GTC, which runs online March 20-23, is the conference for AI and the metaverse. It features more than 650 sessions on deep learning, computer vision, natural language processing, robotics, healthcare, gaming and more.

Speakers from Adobe, Amazon, Autodesk, Deloitte, Ford Motor, Google, IBM, Jaguar Land Rover, Lenovo, Meta, Netflix, Nike, OpenAI, Pfizer, Pixar, Subaru and more will all discuss their latest work.

Don’t miss out on talks from leaders such as Demis Hassabis of DeepMind, Valerie Taylor of Argonne National Laboratory, Scott Belsky of Adobe, Paul Debevec of Netflix, Thomas Schulthess of ETH Zurich, and a special fireside chat between Huang and Ilya Sutskever, co-founder of OpenAI, the creator of ChatGPT.

You can watch the keynote live or on demand. Register for free at https://www.nvidia.com/en-us/gtc/.

You can also join the conversation on social media using #GTC23.

NVIDIA Canvas 1.4 Available With Panorama Beta This Week ‘In the NVIDIA Studio’

Editor’s note: This post is part of our weekly In the NVIDIA Studio series, which celebrates featured artists, offers creative tips and tricks, and demonstrates how NVIDIA Studio technology improves creative workflows. We’re also deep diving on new GeForce RTX 40 Series GPU features, technologies and resources, and how they dramatically accelerate content creation.

An update is now available for NVIDIA Canvas, the free beta app that harnesses the power of AI to help artists quickly turn simple brushstrokes into realistic landscapes.

This version 1.4 update includes a new Panorama mode, which 3D artist Dan “Greenskull” Hammill explores this week In the NVIDIA Studio.

The #GameArtChallenge charges ahead with this sensationally scary The Last of Us-themed 3D animation by @Noggi29318543.

Share game-inspired art using the #GameArtChallenge hashtag through Sunday, April 30, for a chance to be featured on the @NVIDIAStudio or @NVIDIAOmniverse channels.

Panorama Comes to Canvas

NVIDIA Canvas 1.4 adds Panorama mode, enabling the creation of 4K equirectangular landscapes for 3D workflows. Graphic designers can apply Canvas AI-generated scenes to their workflows for quick and easy iteration.

Users can select between the Standard and Panorama workspaces each time they open or create a file.

For 3D artist and AI aficionado Dan “Greenskull” Hammill, Canvas technology evokes an intentional change of tone.

“The therapeutic nature of painting a landscape asks me to slow things down and let my inner artist free,” said Greenskull. “The legendary Bob Ross is a clear inspiration for how I speak during my videos. I want the viewer to both be fascinated by the technology and relaxed by the content.”

For a recent piece, called “The Cove,” Greenskull took a few minutes to create his preferred landscape — an ocean view complete with hills, foliage, sand and darker skies for a cloudy day — with a few strokes of a digital pen in Canvas, all accelerated by his GeForce RTX 4090 GPU.

‘The Cove’ was created in NVIDIA Canvas and completed in mere minutes.

The artist refined his landscape in even more detail with an expanded selection of brush-size options included in the Canvas 1.4 release. Once satisfied with the background, Greenskull reviewed his creation. “I can look at the 3D view, review, see how it looks, and it’s looking pretty good, pretty cool,” he said.


Greenskull then imported his landscape into a video game in Unreal Engine 5 as the skybox, the enclosed world that surrounds the scene. His Canvas texture now makes up the background.


“This really does open up a lot of possibilities for game designers, especially indie developers who quickly want to create something and have it look genuinely unique and great,” Greenskull said.

“NVIDIA has powered both my casual and professional life for countless years. To me, NVIDIA is reliable, powerful and state of the art. If I’m going to be on top of my game, I should have the hardware that will keep up and push forward.” — Dan “Greenskull” Hammill

With his new virtual world complete, Greenskull prepared to create videos for his social media platforms.

“I hit record, run my DSLR through a capture card, record dialog through Adobe Audition, and grab a second screen capture with a separate PC,” explained Greenskull.

Greenskull then pieced everything together, syncing the primary video, secondary PC video captures and audio files. He reviewed the clips, made minor edits and exported final videos.

Using his favorite video editing app, Adobe Premiere Pro, Greenskull tapped his GeForce RTX 4090 GPU’s dual AV1 video encoders via the Voukoder plug-in, cutting export times in half with improved video quality.

Download the Canvas beta, free for NVIDIA RTX and GeForce RTX GPU owners.

Check out Greenskull on TikTok.

Dan “Greenskull” Hammill.

Learn more about these latest technologies by joining us at the Game Developers Conference. And catch up on all the groundbreaking announcements in generative AI and the metaverse by watching the NVIDIA GTC keynote.

Follow NVIDIA Studio on Instagram, Twitter and Facebook. Access tutorials on the Studio YouTube channel and get updates directly in your inbox by subscribing to the Studio newsletter.


Game Like a PC: GeForce NOW Breaks Boundaries Transforming Macs Into Ultimate Gaming PCs


Disney Dreamlight Valley is streaming from Steam and Epic Games Store on GeForce NOW starting today.

It’s one of two new games this week that members can stream with beyond-fast performance using a GeForce NOW Ultimate membership. Game as if using a PC on any device — at up to 4K resolution and 120 frames per second — even on a Mac.

Game Different

Mac Gaming on GeForce NOW
I’m a Mac, and I’m now a gaming PC.

GeForce NOW gives members the unique ability to play over 1,500 games with the power of a gaming PC, on nearly any device.

The new Ultimate membership taps into next-generation NVIDIA SuperPODs that stream GeForce RTX 4080-class performance. With support for 4K resolution at up to 120 fps or high-definition gaming at 240 fps on both PCs and Macs, even Mac users can say they’re PC gamers.

“For Mac users, GeForce NOW is an opportunity to finally play the most advanced games available on the computer they love, which is exciting.” (MacStories.net)

Macs with the latest Apple silicon — M2 and M1 chips — run the GeForce NOW app natively, without the need to install or run Rosetta. GeForce NOW members on a Mac get the best of PC gaming, on the system they love, without ever leaving the Apple ecosystem. This results in incredible performance from popular PC-only games without downloads, updates or patches.

“Any laptop can be a gaming laptop, even a MacBook.” (Laptop Mag)

MacBook Pro 16-inch laptops with 3,456×2,234 ProMotion 120Hz refresh-rate displays enable gaming in 4K high dynamic range at up to 120 fps. With NVIDIA DLSS 3 technology, these Macs can even run graphically intense games like The Witcher 3 and Warhammer 40,000: Darktide at 4K 120 fps. MacBook Pro laptops with smaller displays and MacBook Airs with 2,560×1,664 displays transform into gaming PCs, running titles like Cyberpunk 2077 in 1440p HDR at liquid-smooth frame rates.

“NVIDIA’s GeForce NOW Ultimate changes everything. Suddenly, the Mac became a brilliant gaming platform.” (Forbes)

GeForce NOW opens a world of gaming possibilities on Mac desktops — like the Mac mini, Mac Studio and iMac. Connect an ultrawide monitor and take in all the HDR cinematic game play at up to 3,840×1,600 and 120 fps in PC games such as Destiny 2 and Far Cry 6. With Macs connected to a 240Hz monitor, GeForce NOW Ultimate members can stream with the lowest latency in the cloud, enabling gaming at 240 fps in Apex Legends, Tom Clancy’s Rainbow Six Siege and nearly a dozen other competitive titles.

And it’s not just new Macs that can join in PC gaming. Any Mac system introduced in 2009 or later is fully supported.

We’ve Got Games, Say Cheers!

Disney Dreamlight Valley
Oh boy! “Disney Dreamlight Valley” is streaming on GeForce NOW.

Help restore Disney magic to the Valley and go on an enchanting journey in Gameloft’s Disney Dreamlight Valley — a life-sim adventure game full of quests, exploration and beloved Disney and Pixar friends.

It’s one of two new games being added this week:

Before you start a magical weekend of gaming, we’ve got a question for you. Let us know your answer in the comments below or on Twitter and Facebook.


Peter Ma on How He’s Using AI to Find 8 Promising Signals for Alien Life


Peter Ma was bored in his high school computer science class. So he decided to teach himself something new: how to use artificial intelligence to find alien life.

That’s how he eventually became the lead author of a groundbreaking study published in Nature Astronomy.

The study reveals how Ma and his co-authors used AI to analyze a massive dataset of radio signals collected by the SETI Breakthrough Listen project.

They found eight signals that might just be technosignatures, or signs of alien technology.

In this episode of the NVIDIA AI Podcast, host Noah Kravitz interviews Ma, who is now an undergraduate student at the University of Toronto.

Ma tells Kravitz how he stumbled upon this problem and how he developed an AI algorithm that outperformed traditional methods in the search for extraterrestrial intelligence.

You Might Also Like

Sequoia Capital’s Pat Grady and Sonya Huang on Generative AI
Pat Grady and Sonya Huang, partners at Sequoia Capital, join the podcast to discuss their recent essay, “Generative AI: A Creative New World.” The authors delve into the potential of generative AI to enable new forms of creativity and expression, as well as the challenges and ethical considerations of this technology. They also offer insights into the future of generative AI.

Real or Not Real? Attorney Steven Frank Uses Deep Learning to Authenticate Art
Steven Frank is a partner at the law firm Morgan Lewis, specializing in intellectual property and commercial technology law. He’s also half of the husband-wife team that used convolutional neural networks to authenticate artistic masterpieces, including da Vinci’s Salvator Mundi.

GANTheftAuto: Harrison Kinsley on AI-Generated Gaming Environments
Humans playing games against machines is nothing new, but now computers can develop games for people to play. Programming enthusiast and social media influencer Harrison Kinsley created GANTheftAuto, an AI-based neural network that generates a playable chunk of the classic video game Grand Theft Auto V.

Subscribe, Review and Follow NVIDIA AI on Twitter

If you enjoyed this episode, subscribe to the NVIDIA AI Podcast on your favorite podcast platform and leave a rating and review. Follow @NVIDIAAI on Twitter or email the AI Podcast team to get in touch.

