NVIDIA and the Loss Prevention Research Council (LPRC) are collaborating with several AI companies to showcase a real-time solution for combating and preventing organized retail crime (ORC).
The integrated offering provides advance notifications of suspicious behavior inside and outside stores so that authorities can intervene early.
The LPRC includes asset-protection executives from more than 85 major retail chains, with hundreds of thousands of stores worldwide, as well as law enforcement, consumer packaged goods companies and technology solutions partners. It’s focused on collaborating with the retail industry to reduce shrink — the loss of products for reasons other than sales — and increase safety and security at stores and shopping malls.
Flash mobs and smash-and-grab thefts are a growing concern, costing retailers billions of dollars in lost revenue and causing safety concerns among customers and employees. Crime syndicates have committed brazen, large-scale thefts, often selling stolen merchandise on the black market.
A National Retail Federation survey found that shrink accounted for $112 billion in losses in 2022, with an estimated two-thirds due to theft.
Increasingly, this involves violence. According to the survey, 67% of respondents said they were seeing more violence and aggression associated with organized-crime theft than a year ago.
The AI-based solution, which helps retailers get a jump on often-evasive, fast-moving organized crime groups, uses technology from several leading AI firms that have built their high-performance AI applications on the NVIDIA Metropolis application framework and microservices.
The solution includes product recognition and tracking, as well as anomaly detection, from AiFi, vehicle license plate and model recognition from BriefCam, and physical security management from SureView to provide advance and real-time notifications to retailer command centers.
The three are among over 500 software companies and startups that have developed retail, safety and security AI applications on NVIDIA Metropolis software development kits for vision AI — and that have been certified as NVIDIA Metropolis partners.
“The proposed AI-based ORC solution combines LPRC’s deep expertise in loss prevention from over 23 years of collaboration with asset protection executives with NVIDIA’s deep AI expertise,” said Read Hayes, who leads the LPRC and is a University of Florida research scientist and criminologist. “We believe this type of cross-industry collaboration will help retailers fight back against organized retail crime.”
Developing Integrated AI for Securing Stores
AiFi, based in Silicon Valley, develops computer vision solutions, including autonomous retail capabilities built on the NVIDIA Metropolis application framework. Its solution detects anomalies in shopper behavior, tracks items removed from shelves and notifies retailers if shoppers bypass checkout lanes.
BriefCam, based in Newton, Mass., provides deep learning-based video analytics technology for insightful decision-making. Enabling the forensic search, alerting on and visualization of objects in video, the BriefCam Platform includes integrated license plate recognition and cross-camera object tracking, alongside other capabilities that support effective asset protection and real-time response to theft attempts.
SureView, based in Tampa, Fla., offers a software platform for managing multiple security systems with a single view. The company’s physical security management system receives signals from the AiFi and BriefCam applications, helping teams coordinate a quick and consistent response and providing notifications to store security operations and law enforcement based on the retailer’s business rules.
For more information about AI solutions for mitigating organized retail crime, connect with the NVIDIA team at NRF: Retail’s Big Show, the world’s largest retail expo, taking place Jan. 14-16 at the Javits Convention Center in New York.
Attend the Big Ideas session on Organized Retail Crime on Jan. 14 at 2 p.m. ET, moderated by the LPRC, to discover how Kroger and Jacksons Food are using AI in their stores to tackle crime.
The ORC solution will be showcased at NRF — visit NVIDIA experts in Lenovo’s booth (3665) and Dell’s booth (4957) to learn more about it from NVIDIA’s software partners.
Generative AI is transforming drug research and development, enabling new discoveries faster than ever — and Amgen, one of the world’s leading biotechnology companies, is tapping the technology to power its research.
Amgen will build AI models trained to analyze one of the world’s largest human datasets on an NVIDIA DGX SuperPOD, a full-stack data center platform that will be installed at the headquarters of Amgen subsidiary deCODE genetics in Reykjavik, Iceland. The system will be named Freyja in honor of the powerful, life-giving Norse goddess associated with the ability to predict the future.
Freyja will be used to build a human diversity atlas for drug target and disease-specific biomarker discovery, providing vital diagnostics for monitoring disease progression and regression. The system will also help develop AI-driven precision medicine models, potentially enabling individualized therapies for patients with serious diseases.
Amgen plans to integrate the DGX SuperPOD, which will feature 31 NVIDIA DGX H100 nodes totaling 248 H100 Tensor Core GPUs, to train state-of-the-art AI models in days rather than months, enabling researchers to more efficiently analyze and learn from data in their search for novel health and therapeutics insights.
“For more than a decade, Amgen has been preparing for this hinge moment we are seeing in the industry, powered by the union of technology and biotechnology,” said David M. Reese, executive vice president and chief technology officer at Amgen. “We look forward to combining the breadth and maturity of our world-class human data capabilities at Amgen with NVIDIA’s technologies.”
The goal of deCODE founder and CEO Kári Stefánsson in starting the company was to understand human disease by looking at the diversity of the human genome. He predicted in a recent Amgen podcast that within the next 10 years, doctors will routinely use genetics to explore uncommon diseases in patients.
“This SuperPOD has the potential to accelerate our research by training models more quickly and helping us generate questions we might not have otherwise thought to ask,” said Stefánsson.
Putting the Tech in Biotechnology
Since its founding in 1996, deCODE has curated more than 200 petabytes of de-identified human data from nearly 3 million individuals.
The company started by collecting de-identified data from Icelanders, who have a rich heritage in genealogies that stretch back for centuries. This population-scale data from research volunteers provides unique insights into human diversity as it applies to disease.
deCODE has also helped sequence more than half a million human genomes from volunteers in the UK Biobank.
But drawing insights from this much data requires powerful AI systems.
By integrating powerful new technology, Amgen has an opportunity to accelerate the discovery and development of life-changing medicines. In March 2023, NVIDIA announced that Amgen became one of the first companies to employ NVIDIA BioNeMo, which researchers have used to build generative AI models to accelerate drug discovery and development. Amgen researchers have also been accessing BioNeMo via NVIDIA DGX Cloud, an AI supercomputing service.
“Models trained in BioNeMo can advance drug discovery on multiple fronts,” said Marti Head, executive director of computational and data sciences at Amgen. “In addition to helping develop drugs that are more effective, they can also help avoid unwanted effects like immune responses, and new biologics can be made in volume.”
By adopting DGX SuperPOD, Amgen is poised to gain unprecedented data insights with the potential to change the pace and scope of drug discovery.
“The fusion of advanced AI, groundbreaking developments in biology and molecular engineering and vast quantities of human data are not just reshaping how we discover and develop new medicines — they’re redefining medicine,” Reese said.
In perhaps the healthcare industry’s most dramatic transformation since the advent of computing, digital biology and generative AI are helping to reinvent drug discovery, surgery, medical imaging and wearable devices.
NVIDIA has been preparing for this moment for over a decade, building deep domain expertise, creating the NVIDIA Clara healthcare-specific computing platform and expanding its work with a rich ecosystem of partners. Healthcare customers and partners already consume well over a billion dollars in NVIDIA GPU computing each year — directly and indirectly through cloud partners.
In the $250 billion field of drug discovery, these efforts are meeting an inflection point: R&D teams can now represent drugs inside a computer.
By harnessing emerging generative AI tools, drug discovery teams observe foundational building blocks of molecular sequence, structure, function and meaning — allowing them to generate or design novel molecules likely to possess desired properties. With these capabilities, researchers can curate a more precise field of drug candidates to investigate, reducing the need for expensive, time-consuming physical experiments.
Accelerating this shift is NVIDIA BioNeMo, a generative AI platform that provides services to develop, customize and deploy foundation models for drug discovery.
Used by pharmaceutical, techbio and software companies, BioNeMo offers a new class of computational methods for drug research and development, enabling scientists to integrate generative AI to reduce experiments and, in some cases, replace them altogether.
In addition to developing, optimizing and hosting AI models through BioNeMo, NVIDIA has boosted the computer-aided drug discovery ecosystem with investments in innovative techbio companies — such as biopharmaceutical company Recursion, which is offering one of its foundation models for BioNeMo users, and biotech company Terray Therapeutics, which is using BioNeMo for AI model development.
BioNeMo Brings Precision to AI-Accelerated Drug Discovery
BioNeMo features a growing collection of pretrained biomolecular AI models for protein structure prediction, protein sequence generation, molecular optimization, generative chemistry, docking prediction and more. It also enables computer-aided drug discovery companies to make their models available to a broad audience through easy-to-access APIs for inference and customization.
Drug discovery teams use BioNeMo to invent or customize generative AI models with proprietary data — and drug discovery software companies, techbios and large pharmas are integrating BioNeMo cloud APIs, which will be released in beta this month, into platforms that deliver computer-aided drug discovery workflows.
The cloud APIs will now include foundation models from three sources: models invented by NVIDIA, such as the MolMIM generative chemistry model for small molecule generation; open-source models pioneered by global research teams, curated and optimized by NVIDIA, such as the OpenFold protein prediction AI; and proprietary models developed by NVIDIA partners, such as Recursion’s Phenom-Beta for embedding cellular microscopy images.
MolMIM generates small molecules while giving users finer control over the AI generation process — identifying new molecules that possess desired properties and follow constraints specified by users. For example, researchers could direct the model to generate molecules that have similar structures and properties to a given reference molecule.
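The similarity constraint described above can be illustrated with a plain Tanimoto filter over molecular fingerprints. This is a minimal sketch, not MolMIM's API: the fingerprints are hypothetical sets of integer feature IDs, and `filter_by_similarity` simply keeps generated candidates close to a reference molecule.

```python
def tanimoto(fp_a: set, fp_b: set) -> float:
    """Tanimoto similarity between two set-based fingerprints."""
    if not fp_a and not fp_b:
        return 1.0
    return len(fp_a & fp_b) / len(fp_a | fp_b)

def filter_by_similarity(candidates, reference_fp, threshold=0.6):
    """Keep generated candidates whose fingerprint is near the reference."""
    return [name for name, fp in candidates
            if tanimoto(fp, reference_fp) >= threshold]

# Hypothetical fingerprints: sets of integer feature IDs.
reference = {1, 2, 3, 4, 5}
candidates = [
    ("mol_a", {1, 2, 3, 4, 6}),   # similar to the reference
    ("mol_b", {7, 8, 9}),         # dissimilar
]
print(filter_by_similarity(candidates, reference))  # ['mol_a']
```

A real generative-chemistry model applies such constraints inside the sampling loop rather than as a post-hoc filter, but the effect on the candidate pool is the same.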
Phenomenal AI for Pharma: Recursion Brings Phenom-Beta Model to BioNeMo
Recursion is the first hosting partner offering an AI model through BioNeMo cloud APIs: Phenom-Beta, a vision transformer model that extracts biologically meaningful features from cellular microscopy images.
This capability can provide researchers with insights about cell function and help them learn how cells respond to drug candidates or genetic engineering.
Phenom-Beta performed well on image reconstruction, a training task used to evaluate model performance. Read the NeurIPS workshop paper to learn more.
Phenom-Beta was trained on Recursion’s publicly available RxRx3 dataset of biological images using the company’s BioHive-1 supercomputer, based on the NVIDIA DGX SuperPOD reference architecture.
To further its foundation model development, Recursion is expanding its supercomputer with more than 500 NVIDIA H100 Tensor Core GPUs. This will boost its computational capacity by 4x to create what’s expected to be the most powerful supercomputer owned and operated by any biopharma company.
How Companies Are Adopting NVIDIA BioNeMo
A growing group of scientists, biotech and pharma companies, and AI software vendors are using NVIDIA BioNeMo to support biology, chemistry and genomics research.
Biotech leader Terray Therapeutics is integrating BioNeMo cloud APIs into its development of a generalized, multi-target structural binding model. The company also uses NVIDIA DGX Cloud to train chemistry foundation models to power generative AI for small molecule design.
Protein engineering and molecular design companies Innophore and Insilico Medicine are bringing BioNeMo into their computational drug discovery applications. Innophore is integrating BioNeMo cloud APIs into its Catalophore platform for protein design and drug discovery. And Insilico, a premier member of the NVIDIA Inception program for startups, has adopted BioNeMo in its generative AI pipeline for early drug discovery.
Biotech software company OneAngstrom and systems integrator Deloitte are using BioNeMo cloud APIs to build AI solutions for their clients.
OneAngstrom is integrating BioNeMo cloud APIs into its SAMSON platform for molecular design used by academics, biotechs and pharmas. Deloitte is transforming scientific research by integrating BioNeMo on NVIDIA DGX Cloud with its Quartz Atlas AI platform. This combination equips biopharma researchers with unparalleled data connectivity and cutting-edge generative AI, propelling them into a new era of accelerated drug discovery.
The AI revolution returned to where it started this week, putting powerful new tools into the hands of gamers and content creators.
Generative AI models that will bring lifelike characters to games and applications and new GPUs for gamers and creators were among the highlights of a news-packed address Monday ahead of this week’s CES trade show in Las Vegas.
“Today, NVIDIA is at the center of the latest technology transformation: generative AI,” said Jeff Fisher, senior vice president for GeForce at NVIDIA, who was joined by leaders across the company to introduce products and partnerships across gaming, content creation, and robotics.
A Launching Pad for Generative AI
As AI shifts into the mainstream, Fisher said NVIDIA’s RTX GPUs, with more than 100 million units shipped, are pivotal in the burgeoning field of generative AI, exemplified by innovations like ChatGPT and Stable Diffusion.
And with NVIDIA’s new Chat with RTX playground, releasing later this month, enthusiasts can connect an RTX-accelerated LLM to their own data, from locally stored documents to YouTube videos, using retrieval-augmented generation, or RAG, a technique for enhancing the accuracy and reliability of generative AI models.
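The RAG idea can be sketched in a few lines: retrieve the documents most relevant to a query, then prepend them to the prompt the language model sees. The bag-of-words retriever and the `answer` helper below are illustrative stand-ins under that assumption, not the Chat with RTX implementation.

```python
import math
from collections import Counter

def score(query: str, doc: str) -> float:
    """Cosine similarity over bag-of-words term counts."""
    q, d = Counter(query.lower().split()), Counter(doc.lower().split())
    dot = sum(q[t] * d[t] for t in q)
    norm = (math.sqrt(sum(v * v for v in q.values()))
            * math.sqrt(sum(v * v for v in d.values())))
    return dot / norm if norm else 0.0

def retrieve(query, docs, k=2):
    """Return the k documents most similar to the query."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def answer(query, docs):
    """Augment the prompt with retrieved context before the LLM call (stubbed)."""
    context = "\n".join(retrieve(query, docs))
    prompt = f"Context:\n{context}\n\nQuestion: {query}"
    return prompt  # a real system would pass this prompt to the LLM

docs = [
    "RTX GPUs accelerate generative AI workloads locally.",
    "Soup recipes with seasonal vegetables.",
]
print(answer("Which GPUs accelerate generative AI?", docs))
```

Production systems replace the word-count scorer with vector embeddings, but the retrieve-then-generate structure is the same.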
Fisher also introduced TensorRT acceleration for Stable Diffusion XL and SDXL Turbo in the popular Automatic1111 text-to-image app, providing up to a 60% boost in performance.
NVIDIA Avatar Cloud Engine (ACE) Microservices Debut With Generative AI Models for Digital Avatars
NVIDIA ACE is a technology platform that brings digital avatars to life with generative AI. ACE AI models are designed to run in the cloud or locally on the PC.
In an ACE demo featuring Convai’s new technologies, NVIDIA’s Senior Product Manager Seth Schneider showed how it works.
First, a player’s voice input is passed to NVIDIA’s automatic speech recognition model, which transcribes speech to text. Then, the text is put into an LLM to generate the character’s response.
After that, the text response is vocalized using a text-to-speech model, which is passed to an animation model to create a realistic lip sync. Finally, the dynamic character is rendered into the game scene.
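The flow Schneider described is a classic staged pipeline. The sketch below chains placeholder stages in the same order; each stub merely stands in for the real ASR, LLM, text-to-speech and animation models, and all function names here are hypothetical.

```python
def speech_to_text(audio: bytes) -> str:
    """Stub for automatic speech recognition (e.g. Riva ASR)."""
    return "hello, who are you?"

def generate_reply(text: str) -> str:
    """Stub for the LLM that produces the character's response."""
    return f"You said: '{text}'. I am the innkeeper."

def text_to_speech(text: str) -> bytes:
    """Stub for the TTS model that vocalizes the reply."""
    return text.encode()

def animate(audio: bytes) -> dict:
    """Stub for the audio-to-animation model (e.g. Audio2Face lip sync)."""
    return {"lip_sync_frames": len(audio)}

def avatar_pipeline(player_audio: bytes) -> dict:
    """Run the four stages in order: ASR -> LLM -> TTS -> animation."""
    text = speech_to_text(player_audio)
    reply = generate_reply(text)
    reply_audio = text_to_speech(reply)
    return animate(reply_audio)

print(avatar_pipeline(b"\x00\x01"))
```

Because each stage only consumes the previous stage's output, any model in the chain can run in the cloud or locally on the PC, as the ACE platform allows.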
At CES, NVIDIA is announcing ACE Production Microservices for NVIDIA Audio2Face and NVIDIA Riva Automatic Speech Recognition. Available now, each model can be incorporated by developers individually into their pipelines.
NVIDIA is also announcing that game and interactive avatar developers are pioneering ways ACE and generative AI technologies can be used to transform interactions between players and non-playable characters in games and applications. Developers embracing ACE include Convai, Charisma.AI, Inworld, miHoYo, NetEase Games, Ourpalm, Tencent, Ubisoft and UneeQ.
Getty Images Releases Generative AI by iStock and AI Image Generation Tools Powered by NVIDIA Picasso
Generative AI empowers designers and marketers to create concept imagery, social media content and more. Today, iStock by Getty Images is releasing a genAI service built on NVIDIA Picasso, an AI foundry for visual design, Fisher announced.
The iStock service allows anyone to create 4K imagery from text using an AI model trained on Getty Images’ extensive catalog of licensed, commercially safe creative content. New editing application programming interfaces that give customers powerful control over their generated images are also coming soon.
The generative AI service is available today at istock.com, with advanced editing features releasing via API.
NVIDIA Introduces GeForce RTX 40 SUPER Series
Fisher announced a new series of GeForce RTX 40 SUPER GPUs with more gaming and generative AI performance.
Fisher said that the GeForce RTX 4080 SUPER can power fully ray-traced games at 4K. It’s 1.4x faster than the RTX 3080 Ti without frame gen in the most graphically intensive games. With 836 AI TOPS, NVIDIA DLSS Frame Generation delivers an extra performance boost, making the RTX 4080 SUPER twice as fast as an RTX 3080 Ti.
Creators can generate video with Stable Video Diffusion 1.5x faster and images with Stable Diffusion XL 1.7x faster. The RTX 4080 SUPER features more cores and faster memory, giving it a performance edge at a great new price of $999. It will be available starting Jan. 31.
Next up is the RTX 4070 Ti SUPER. NVIDIA has added more cores and increased the frame buffer to 16GB and the memory bus to 256 bits. It’s 1.6x faster than a 3070 Ti and 2.5x faster with DLSS 3, Fisher said. The RTX 4070 Ti SUPER will be available starting Jan. 24 for $799.
Fisher also introduced the RTX 4070 SUPER. NVIDIA has added 20% more cores, making it faster than the RTX 3090 while using a fraction of the power. And with DLSS 3, it’s 1.5x faster in the most demanding games. It will be available for $599 starting Jan. 17.
NVIDIA RTX Remix Open Beta Launches This Month
There are over 10 billion game mods downloaded each year. With RTX Remix, modders can remaster classic games with full ray tracing, DLSS, NVIDIA Reflex and generative AI texture tools that transform low-resolution textures into 4K, physically accurate materials. The RTX Remix app will be released in open beta on Jan. 22.
Check out this new Half-Life 2 RTX gameplay trailer:
Twitch and NVIDIA to Release Multi-Encode Livestreaming
Twitch is one of the most popular platforms for content creators, with over 7 million streamers going live each month to 35 million daily viewers. Fisher explained that these viewers are on all kinds of devices and internet services.
Yet many Twitch streamers are limited to broadcasting at a single resolution and quality level. As a result, they must broadcast at lower quality to reach more viewers.
To address this, Twitch, OBS and NVIDIA announced Enhanced Broadcasting, supported by all RTX GPUs. This new feature allows streamers to transmit up to three concurrent streams to Twitch at different resolutions and quality so each viewer gets the optimal experience.
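Producing several renditions of one input is the core of any multi-encode setup. The sketch below only assembles an illustrative ffmpeg-style argument list with the NVENC hardware encoder; it is not the Enhanced Broadcasting API, and the rendition names, sizes and bitrates are assumptions for the example.

```python
RENDITIONS = [  # (name, width, height, video bitrate) -- illustrative values
    ("source", 1920, 1080, "6000k"),
    ("mid", 1280, 720, "3000k"),
    ("low", 852, 480, "1200k"),
]

def build_encode_args(input_url: str) -> list:
    """Build one command that emits a hardware-encoded output per rendition."""
    args = ["ffmpeg", "-i", input_url]
    for name, w, h, rate in RENDITIONS:
        args += [
            "-vf", f"scale={w}:{h}",   # downscale for this rendition
            "-c:v", "h264_nvenc",      # NVENC hardware encoder
            "-b:v", rate,
            f"{name}.flv",
        ]
    return args

print(" ".join(build_encode_args("rtmp://live.example/stream")))
```

The point of doing this on the GPU's dedicated encoder is that all three encodes run concurrently without taxing the cores used for the game itself.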
Beta signups start today and will go live later this month. Twitch will also experiment with 4K and AV1 on the GeForce RTX 40 Series GPUs to deliver even better quality and higher resolution streaming.
‘New Wave’ of AI-Ready RTX Laptops
RTX is the fastest-growing laptop platform, having grown 5x in the last four years, with more than 50 million devices in the hands of gamers and creators across the globe.
More’s coming. Fisher announced “a new wave” of RTX laptops launching from every major manufacturer. “Thanks to powerful RT and Tensor Cores, every RTX laptop is AI-ready for the best gaming and AI experiences,” Fisher said.
With an installed base of 100 million GPUs and 500 RTX games and apps, GeForce RTX is the world’s largest platform for gamers, creators and, now, generative AI.
Activision and Blizzard Games Embrace RTX
More than 500 games and apps now take advantage of NVIDIA RTX technology, NVIDIA’s Senior Consumer Marketing Manager Kristina Bartz said, including Alan Wake 2, which won three awards at this year’s Game Awards.
It’s a list that keeps growing with 14 new RTX titles announced at CES.
Horizon Forbidden West, the critically acclaimed sequel to Horizon Zero Dawn, will come to PC early this year with the Burning Shores expansion, accelerated by DLSS 3.
Pax Dei is a social sandbox massively multiplayer online game inspired by the legends of the medieval era. Developed by Mainframe Industries with veterans from CCP Games, Blizzard and Remedy Entertainment, Pax Dei will launch in early access on PC with AI-accelerated DLSS 3 this spring.
Last summer, Diablo IV launched with DLSS 3 and immediately became Blizzard’s fastest-selling game. RTX ray tracing is coming to Diablo IV in March.
Day Passes and G-SYNC Technology Coming to GeForce NOW
NVIDIA’s partnership with Activision Blizzard also extends to the cloud with GeForce NOW, Bartz said. In November, NVIDIA welcomed the first Activision Blizzard game, Call of Duty: Modern Warfare 3. Diablo IV and Overwatch 2 are coming soon.
GeForce NOW will get Day Pass membership options starting in February. Priority and Ultimate Day Passes will give gamers a full day of gaming with the fastest access to servers and all the benefits of the corresponding memberships, including NVIDIA DLSS 3.5 and NVIDIA Reflex for Ultimate Day Pass purchasers.
NVIDIA also announced Cloud G-SYNC technology is coming to GeForce NOW, which varies the display refresh rate to match the frame rate on G-SYNC monitors, giving members the smoothest, tear-free gaming experience from the cloud.
Generative AI Powers Smarter Robots With NVIDIA Isaac
Closing out the special address, NVIDIA Vice President of Robotics and Edge Computing Deepu Talla shared how the infusion of generative AI into robotics is speeding up the ability to bring robots from proof of concept to real-world deployment.
Talla gave a peek into the growing use of generative AI in the NVIDIA robotics ecosystem, where robotics innovators like Boston Dynamics and Collaborative Robotics are changing the landscape of human-robot interaction.
Amid explosive interest in generative AI, the auto industry is racing to embrace the power of AI across a range of critical activities, from vehicle design, engineering and manufacturing, to marketing and sales.
The adoption of generative AI — along with the growing importance of software-defined computing — will continue to transform the automotive market in 2024.
NVIDIA today announced that Li Auto, a pioneer in extended-range electric vehicles (EVs), has selected the NVIDIA DRIVE Thor centralized car computer to power its next-generation fleets. Also, EV makers GWM (Great Wall Motor), ZEEKR and Xiaomi have adopted the NVIDIA DRIVE Orin platform to power their intelligent automated-driving systems.
In addition, a powerful lineup of technology is on display from NVIDIA’s automotive partners on the CES trade show floor in Las Vegas.
Mercedes-Benz is kicking off CES with a press conference announcing a range of new software-driven features and the latest developments in Mercedes-Benz MB.OS, showcased across a range of cars, including the Concept CLA Class, which uses NVIDIA DRIVE Orin for the automated driving domain.
Mercedes-Benz is also using digital twins for production with help from NVIDIA Omniverse, a platform for developing applications to design, collaborate, plan and operate manufacturing and assembly facilities. (West Hall – 4941)
Luminar will host a fireside chat with NVIDIA on Jan. 9 at 2 p.m. PT to discuss the state of the art of sensor processing and ongoing collaborations between the companies. In addition, Luminar will showcase the work it’s doing with NVIDIA partners Volvo Cars, Polestar, Plus and Kodiak. (West Hall – 5917 and West Plaza – WP10)
Ansys is demonstrating how it leverages NVIDIA Omniverse to accelerate autonomous vehicle development. Ansys AVxcelerate Sensors will be accessible within NVIDIA DRIVE Sim. (West Hall – 6500)
Cerence is introducing CaLLM, an automotive-specific large language model that serves as the foundation for the company’s next-gen in-car computing platform, running on NVIDIA DRIVE. (West Hall – 6627)
Cipia is showcasing its embedded software version of Cabin Sense, which includes both driver and occupancy monitoring and is expected to go into serial production this year. NVIDIA DRIVE is the first platform on which Cabin Sense will run commercially. (North Hall – 11022)
Kodiak is exhibiting an autonomous truck, which relies on NVIDIA GPUs for high-performance compute to process the enormous quantities of data it collects from its cameras, radar and lidar sensors. (West Plaza – WP10, with Luminar)
Lenovo is displaying its vehicle computing roadmap, featuring new products based on NVIDIA DRIVE Thor, including: Lenovo XH1, a central compute unit for advanced driver-assistance systems and smart cockpit; Lenovo AH1, a level 2++ ADAS domain controller unit; and Lenovo AD1, a level 4 autonomous driving domain controller unit. (Estiatorio Milos, Venetian Hotel)
Pebble, a recreational vehicle startup, is presenting its flagship product Pebble Flow, the electric semi-autonomous travel trailer powered by NVIDIA DRIVE Orin, with production starting before the end of 2024. (West Hall – 7023)
Polestar is showcasing Polestar 3, which is powered by the NVIDIA DRIVE Orin central core computer. (West Hall – 5917 with Luminar and Central Plaza – CP1 with Google)
Zoox is showcasing the latest generation of its purpose-built robotaxi, which leverages NVIDIA technology, and is offering CES attendees the opportunity to join its early-bird waitlist for its autonomous ride-hailing service. (West Hall – 7228)
Explore to Win
Visit select NVIDIA partner booths for a chance to win GTC 2024 conference passes with hotel accommodations.
Event Lineup
Check out NVIDIA’s CES event page for a summary of all of the company’s automotive-related events. Learn about NVIDIA’s other announcements at CES by viewing the company’s special address on demand.
Editor’s note: This post is part of our weekly In the NVIDIA Studio series, which celebrates featured artists, offers creative tips and tricks, and demonstrates how NVIDIA Studio technology improves creative workflows. We’re also deep diving on new GeForce RTX 40 Series GPU features, technologies and resources, and how they dramatically accelerate content creation.
NVIDIA Studio is debuting powerful new software and hardware upgrades at CES to elevate content creation.
Generative AI by iStock from Getty Images is a new tool, built on NVIDIA Picasso with the NVIDIA Edify model architecture and trained on licensed artwork to ensure that generated assets are commercially safe.
RTX Video HDR coming Jan. 24 transforms standard dynamic range video playing in internet browsers into stunning high dynamic range (HDR). By pairing it with RTX Video Super Resolution, NVIDIA RTX and GeForce RTX GPU owners can achieve dramatic video quality improvements on their HDR10 displays.
Twitch, OBS and NVIDIA are enhancing livestreaming technology with the new Twitch Enhanced Broadcasting beta, powered by GeForce RTX GPUs. Available later this month, the beta will enable users to stream multiple encodes concurrently, providing optimal viewing experiences for a broad range of device types and connections.
And NVIDIA RTX Remix — a free modding platform for quickly remastering classic games with RTX — releases in open beta later this month. It provides full ray tracing, NVIDIA DLSS, NVIDIA Reflex and generative AI texture tools.
This week’s In the NVIDIA Studio installment also features NVIDIA artists Ashlee Martino-Tarr, a 3D content specialist, and Daniela Flamm Jackson, a technical product marketer, who transform 2D illustrations into dynamic 3D scenes using AI and Adobe Firefly — powered by NVIDIA in the cloud and natively with GeForce RTX GPUs.
New Year, New NVIDIA Studio Laptops
The new NVIDIA Studio laptops and desktops level up power and efficiency with exclusive software like Studio Drivers preinstalled — enhancing creative features, reducing time-consuming tasks and speeding workflows.
The Acer Predator Triton Neo 16 features several 16-inch screen options with up to a 3.2K resolution at a 165Hz refresh rate and 16:10 aspect ratio. It provides DCI-P3 100% color gamut and support for NVIDIA Optimus and NVIDIA G-SYNC technology for sharp color hues and tear-free frames. It’s expected to be released in March.
The Acer Predator Triton Neo 16, with up to the GeForce RTX 4070 Laptop GPU.
The ASUS ROG Zephyrus G14 features a Nebula Display with a 240Hz G-SYNC OLED panel. It’s expected to release on Feb. 6.
The ASUS ROG Zephyrus G14 with up to the GeForce RTX 4070 Laptop GPU.
The XPS 16 is Dell’s most powerful laptop. It features a large 16.3-inch InfinityEdge display, available with a 4K+ OLED touch panel, delivers true-to-life color and up to 80W of sustained performance, and comes in tone-on-tone finishes for an elegant, minimalist design. Stay tuned for an update on release timing.
Dell’s XPS 16 with up to the GeForce RTX 4070 Laptop GPU.
Lenovo’s Yoga Pro 9i sports a 16-inch 3.2K PureSight Pro display with over 1,600 mini-LED dimming zones, colors expertly calibrated to Delta E < 1 and up to a 165Hz refresh rate. With Microsoft’s Auto Color Management feature, its display toggles automatically between 100% P3, 100% sRGB and 100% Adobe RGB color to ensure the highest-quality color. It’s expected to be released in April.
Lenovo Yoga Pro 9i with up to the GeForce RTX 4070 Laptop GPU.
HP’s OMEN 14 Transcend features a 14-inch 4K OLED WQXGA screen, micro-edge, edge-to-edge glass and 100% DCI-P3 with a 240Hz refresh rate. NVIDIA DLSS 3 technology helps unlock more efficient content creation and gaming sessions using only one-third of the expected battery power. It’s targeting a Jan. 19 release.
HP’s OMEN 14 Transcend with up to GeForce RTX 4070 Laptop GPU.
Samsung’s Galaxy Book4 Ultra includes an upgraded Dynamic AMOLED 2X display for high contrast and vivid color, as well as a convenient touchscreen. Its Vision Booster feature uses an Intelligent Outdoor Algorithm to automatically enhance visibility and color reproduction in bright conditions.
Samsung’s Galaxy Book4 Ultra with up to the GeForce RTX 4070 Laptop GPU.
Check back for more information on the new line of Studio systems, including updates to release dates.
A SUPER Debut for New GeForce RTX 40 Series Graphics Cards
The GeForce RTX 4080 SUPER sports more CUDA cores than the GeForce RTX 4080 and includes the world’s fastest GDDR6X video memory at 23 Gbps. In 3D apps like Blender, it can run up to 70% faster than previous generations. In generative AI apps like Stable Diffusion XL or Stable Video Diffusion, it can produce 1,024×1,024 images 1.7x faster and video 1.5x faster. Or play fully ray-traced games, including Alan Wake 2, Cyberpunk 2077: Phantom Liberty and Portal with RTX, in stunning 4K. The RTX 4080 SUPER will be available Jan. 31 as a Founders Edition and as custom boards for partners starting at $999.
The GeForce RTX 4070 Ti SUPER is equipped with more CUDA cores than the RTX 4070 Ti, a frame buffer increased to 16GB and a 256-bit bus. It’s suited for video editing and rendering large 3D scenes and runs up to 1.6x faster than the RTX 3070 Ti and 2.5x faster with DLSS 3 in the most graphics-intensive games. Gamers can max out high-refresh 1440p panels or even game at 4K. The RTX 4070 Ti SUPER will be available Jan. 24 from custom board partners in stock-clocked and factory-overclocked configurations starting at $799.
The GeForce RTX 4070 SUPER has 20% more CUDA cores than the GeForce RTX 4070 and is great for creating and gaming at 1440p. With DLSS 3, it’s 1.5x faster than a GeForce RTX 3090 while using a fraction of the power. The RTX 4070 SUPER will be available Jan. 17 starting at $599.
Creative Vision Meets Reality With Getty Images and NVIDIA
Content creators using the new Generative AI by iStock from Getty Images tool, powered by NVIDIA Picasso, can now safely and affordably use AI-generated images with full legal protection.
Generative AI by iStock is trained on Getty Images’ vast creative library of high-quality licensed content, including millions of exclusive photos, illustrations and videos. Users can enter prompts to generate photo-quality images at up to 4K for social media promotion, digital advertisements and more.
Getty Images is also making advanced inpainting and outpainting features available via application programming interfaces. Developers can seamlessly integrate the new APIs with creative applications to add people and objects to images, replace specific elements and expand images to a wide range of aspect ratios.
Customers can use Generative AI by iStock online today. Advanced editing features are coming soon to the iStock website.
RTX Video HDR Brings AI Video Upgrades
RTX Video HDR brings a new AI-enhanced feature that instantly converts any standard dynamic range video playing in internet browsers into vibrant HDR.
HDR delivers stunning video quality, but HDR content remains scarce because of the production effort and hardware it requires.
RTX Video HDR allows NVIDIA RTX and GeForce RTX GPU owners to maximize their HDR panel’s ability to display more vivid, dynamic colors, helping preserve intricate details that may be lost in standard dynamic range.
The feature requires an HDR10-compatible display or TV connected to an RTX-powered PC and works with Chromium-based browsers such as Google Chrome or Microsoft Edge.
RTX Video HDR and RTX Video Super Resolution can be used together to produce the clearest livestreamed video.
RTX Video HDR is coming to all NVIDIA RTX and GeForce RTX GPUs as part of a driver update later this month. Once the update is installed, navigate to the NVIDIA Control Panel to switch it on.
With Twitch Enhanced Broadcasting beta, GeForce RTX GPU owners will be able to broadcast up to three resolutions simultaneously at up to 1080p. In the coming months, Twitch plans to roll out support for up to five concurrent encodes to further optimize viewer experiences.
As part of the beta, Twitch will test higher input bit rates as well as new codecs, which are expected to further improve visual quality. The new codecs include the latest-generation AV1 for GeForce RTX 40 Series GPUs, which provides 40% more encoding efficiency than H.264, and HEVC for previous-generation GeForce GPUs.
To simplify the setup process, Enhanced Broadcasting will automatically configure all open broadcaster software encoder settings, including resolution, bit rate and encoding parameters.
Sign up for the Twitch Enhanced Broadcasting beta today.
A Righteous RTX Remix
Built on NVIDIA Omniverse, RTX Remix allows modders to easily capture game assets, automatically enhance materials with generative AI tools, reimagine assets via Omniverse-connected apps and Universal Scene Description (OpenUSD), and quickly create stunning RTX remasters of classic games with full ray tracing and NVIDIA DLSS technology.
NVIDIA artists Ashlee Martino-Tarr and Daniela Flamm Jackson, featured in this week’s In the NVIDIA Studio installment, are passionate about illustration — whether at work or at play.
They used Adobe Firefly’s generative AI features, powered by NVIDIA GPUs in the cloud and accelerated with Tensor Cores in GeForce RTX GPUs, to animate a 2D illustration with special effects.
To begin, the pair separated the 2D image into multiple layers and expanded the canvas. Firefly’s Generative Expand feature automatically filled the added space with AI-generated content.
Next, the team separated select elements — starting with the character — and used the AI Object Select feature to automatically mask the layer. The Generative Fill feature then created new content to fill in the background, saving even more time.
This process continued until all distinct layers were separated and imported into Adobe After Effects. Next, they used the Mercury 3D Engine on local RTX GPUs to accelerate playback, unlocking smoother movement in the viewport. Previews and adjustments like camera shake and depth of field were also GPU-accelerated.
Firefly’s Style Match feature then took the existing illustration and created new imagery in its likeness — in this case, a vibrant butterfly sporting similar colors and tones. The duo also used Adobe Illustrator’s Generative Recolor feature, which enables artists to explore a wide variety of colors and themes without having to manually recolor their work.
Martino-Tarr and Jackson then chose their preferred assets and animated them in Adobe After Effects. Firefly’s powerful AI effects helped speed up or entirely eliminate tedious tasks such as patching holes, hand-painting set extensions and caching animation playbacks.
A variety of high-quality images to choose from.
The artists concluded post-production work by putting the finishing touches on their AI animation in After Effects.
Firefly’s powerful AI capabilities were developed with the creative community in mind — guided by AI ethics principles of content and data transparency — to ensure morally responsible output. NVIDIA technology continues to power these features from the cloud for photographers, illustrators, designers, video editors, 3D artists and more.
NVIDIA artists Ashlee Martino-Tarr and Daniela Flamm Jackson.
Check out Martino-Tarr’s portfolio on ArtStation and Jackson’s on IMDb.
Twitch, OBS and NVIDIA are leveling up livestreaming technology with the new Twitch Enhanced Broadcasting beta, powered by GeForce RTX GPUs. Launching in a few days, the beta will let streamers broadcast multiple encodes concurrently, providing optimal viewing experiences for all viewers.
Twitch Enhanced Broadcasting
Today, many streamers must choose between higher resolution and reliable streaming. High-quality video provides more enjoyable viewing experiences but causes streams to buffer for viewers with low bandwidth or older viewing devices. Streaming lower-bitrate video allows more people to watch the content seamlessly, but introduces artifacts.
Twitch — the interactive livestreaming platform — provides server-side transcoding for top-performing channels, meaning it creates different versions of the same stream for different bandwidth levels, improving the viewing experience. But viewers of many channels are left with a single stream option.
Twitch, OBS and NVIDIA have collaborated on a new feature to address this — Twitch Enhanced Broadcasting, releasing in beta later this month. Using the high-quality dedicated encoder (NVENC) in modern GeForce RTX and GTX GPUs, streamers will be able to broadcast up to three resolutions simultaneously at up to 1080p.
In the coming months, Enhanced Broadcasting beta testers will be able to experiment with higher input bit rates, resolutions up to 4K, up to five concurrent streams and new codecs. The new codecs include the latest-generation AV1 for GeForce RTX 40 Series GPUs, which provides 40% more encoding efficiency than H.264, and HEVC for previous-generation GeForce GPUs.
To simplify setup, Enhanced Broadcasting will automatically configure all OBS encoder settings, including resolution, bit rate and encoding parameters. A server-side algorithm will return the best possible configuration for OBS Studio based on the streamer’s setup, taking the headaches out of tuning settings for the best viewer experiences.
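The idea behind that server-side configuration step can be sketched in a few lines. The rendition table, thresholds and selection rule below are illustrative assumptions for this sketch, not Twitch’s or NVIDIA’s actual logic:

```python
# Hypothetical sketch of how a server-side algorithm might pick a
# rendition ladder for multi-encode streaming. The rendition table and
# selection rules are assumptions for illustration only.

# Candidate renditions: (height, fps, minimum uplink kbps to justify it).
RENDITIONS = [
    (1080, 60, 6000),
    (720, 60, 3500),
    (480, 30, 1200),
]

def pick_ladder(uplink_kbps: int, max_encodes: int = 3) -> list[tuple[int, int, int]]:
    """Return up to `max_encodes` renditions the uplink can sustain,
    highest quality first, so low-bandwidth viewers still get an option."""
    ladder = [r for r in RENDITIONS if r[2] <= uplink_kbps]
    return ladder[:max_encodes]

print(pick_ladder(8000))  # a fast uplink sustains all three renditions
print(pick_ladder(2000))  # a slow uplink gets only the 480p30 rendition
```

The streamer never touches these numbers: the returned ladder would map directly onto OBS encoder settings, one NVENC session per rendition.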
Using the dedicated NVENC hardware encoder, streamers can achieve the highest quality video across streaming bitrates, with minimal impact to app and game performance.
Sign up for the Twitch Enhanced Broadcasting beta today at twitch.tv/broadcast. Twitch will enroll participants on a first-come, first-served basis, starting later this month. Once a creator has been enrolled in the beta, they’ll receive an email with additional instructions.
To further elevate livestreams, download the NVIDIA Broadcast app, free for RTX GPU owners and powered by dedicated AI Tensor Cores, to augment broadcast capabilities for microphones and cameras.
Getty Images, a global visual content creator and marketplace, today at CES released Generative AI by iStock, an affordable and commercially safe image generation service trained on the company’s creative library of licensed, proprietary data.
Built on NVIDIA Picasso, a foundry for custom AI models, Generative AI by iStock provides designers and businesses with a text-to-image generation tool to create ready-to-license visuals, with legal protection and usage rights for generated images included.
Alongside the release of the service on the iStock website, Getty Images is also making advanced inpainting and outpainting features available via application programming interfaces, launching on iStock.com and Gettyimages.com soon. Developers can seamlessly integrate the new APIs with creative applications to add people and objects to images, replace specific elements and expand images in a wide range of aspect ratios.
Create With Im-AI-gination
Generative AI by iStock is trained with NVIDIA Picasso on Getty Images’ vast creative library — including exclusive photos, illustrations and videos — providing users with a commercially safe way to generate visuals. Users can enter simple text prompts to generate photo-quality images at up to 4K resolution.
Inpainting and outpainting APIs, with a Replace feature coming soon.
New editing APIs give customers powerful control over their generated images.
The Inpainting feature allows users to mask a region of an image, then fill in the region with a person or object described via a text prompt.
Outpainting enables users to expand images to fit various aspect ratios, filling in new areas based on the context of the original image. This is a powerful tool to create assets with unique aspect ratios for advertising or social media promotion.
And coming soon, a Replace feature provides similar capabilities to Inpainting but with stricter adherence to the mask.
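The geometry these features work from is simple to illustrate. The helper functions below are a hypothetical sketch, not Getty Images’ actual API: inpainting takes a mask rectangle inside the image, while outpainting computes an expanded canvas and where the original sits on it for a target aspect ratio:

```python
# Illustrative sketch (not Getty Images' real API) of the two inputs
# these features operate on: a mask region to regenerate (inpainting)
# and an expanded canvas for a new aspect ratio (outpainting).

def inpaint_mask_box(width, height, box):
    """Validate that the mask rectangle lies inside the image and return it.
    The model fills this region based on a text prompt."""
    left, top, right, bottom = box
    assert 0 <= left < right <= width and 0 <= top < bottom <= height
    return box

def outpaint_layout(width, height, target_aspect):
    """Expand the canvas to `target_aspect` (w/h) and center the original.
    The border is what the model fills in from the image's context.
    Returns (new_width, new_height, x_offset, y_offset)."""
    if target_aspect >= width / height:
        new_w, new_h = round(height * target_aspect), height
    else:
        new_w, new_h = width, round(width / target_aspect)
    return new_w, new_h, (new_w - width) // 2, (new_h - height) // 2

# A square 1024x1024 image expanded to 16:9 for an ad banner:
print(outpaint_layout(1024, 1024, 16 / 9))  # (1820, 1024, 398, 0)
```

A developer calling the real APIs would supply the equivalent of these values — a mask plus prompt for inpainting, and a target aspect ratio for outpainting — alongside the source image.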
Transforming Visual Design
The NVIDIA Picasso foundry enables developers and service providers to seamlessly train, fine-tune, optimize and deploy generative AI models tailored to their visual design requirements. Developers can use their own AI models or train new ones using the NVIDIA Edify model architecture to generate images, videos, 3D assets, 360-degree high-dynamic-range imaging and physically based rendering materials from simple text prompts.
Using NVIDIA Picasso, Getty Images trained a bespoke Edify image generator based on its catalog of licensed images and videos to power the Generative AI by iStock service.
Customers can use Generative AI by iStock online today. Advanced editing features are now available via APIs and coming soon to the iStock website.
Whether building a super-capable truck or conjuring up a dream sports car, it’s easy to spend hours playing with online car configurators.
With auto industry insiders predicting that most new vehicle purchases will move online by 2030, these configurators are more than just toys.
They’re crucial to the future of the world’s automakers — essential in showing off what their brand is all about, boosting average selling prices and helping customers select and personalize their vehicles.
It’s also a natural use case for the sophisticated simulation capabilities of NVIDIA Omniverse, a software platform for developing and deploying advanced 3D applications and pipelines based on OpenUSD. It provides the ability to instantly visualize changes to a car’s color or customize its interior with luxurious finishes.
Studies show that 80% of shoppers are drawn to brands that give them a personal touch while shopping.
Aiming to meet these customer demands, a burgeoning ecosystem of partners and customers is putting elements of Omniverse to work.
Key creative partners and developers like BITONE, Brickland, Configit, Katana Studio Ltd. (serving Craft Detroit), WPP and ZeroLight are pioneering Omniverse-powered configurators. And leading automakers such as Lotus are adopting these advanced solutions.
That’s because traditional auto configurators, often limited to pre-rendered images, struggle to deliver personalization and dynamic environment representation.
They use different kinds of data in various tools, such as static images of what users see on the website, lists of available options based on location, product codes and personal information.
These challenges extend from the consumer experience — often characterized by limited interactivity and realism — to back-end processes for original equipment manufacturers (OEMs) and agencies, where inflexibility and inefficiencies in updating configurators and repurposing assets are common.
Reconfiguring Configurators With NVIDIA Omniverse
Omniverse helps software developers and service providers streamline their work.
Service providers can now access the platform to craft state-of-the-art 3D experiences and showcase lifelike graphics and high-end, immersive environments with advanced lighting and textures.
And OEMs can benefit from a unified asset pipeline that simplifies the integration of design and engineering data for marketing purposes. Omniverse’s enhanced tools also allow them to quickly produce diverse marketing materials, boosting customer engagement through customized content.
Independent software vendors, or ISVs, can use the native OpenUSD platform as a foundation for creating scene construction tools — or to help develop tools for managing configuration variants.
With the NVIDIA Graphics Delivery Network (GDN) software development kit, high-quality, real-time NVIDIA RTX viewports can be embedded into web applications, ensuring seamless operation on nearly any device.
This, along with support for large-scale scenes and physically accurate graphics, allows developers to concentrate on enhancing the user experience without compromising quality on lower-spec machines.
Omniverse Cloud taps GDN, which uses NVIDIA’s global cloud-streaming infrastructure to deliver seamless access to high-fidelity 3D interactive experiences.
Configurators, when run on GDN, can be easily published at scale using the same GPU architecture on which they were developed and streamed to nearly any device.
All this means less redundancy in data prep, aggregated and accessible data, fewer manual pipeline updates and instant access for the entire intended audience.
Global Adoption by Innovators and Industry Leaders
Omniverse is powering a new era in automotive design and customer interaction, heralded by a vibrant ecosystem of partners and customers.
NVIDIA is bringing more games, membership options and innovative tech to its GeForce NOW cloud gaming service.
The next Activision and Blizzard titles to join the cloud, Diablo IV and Overwatch 2, will be coming soon. They’ll be joined by a host of top titles, including Capcom’s Exoprimal, HoYoverse’s Honkai: Star Rail and Mainframe Industries’ Pax Dei.
Available starting in February, new day passes for Ultimate and Priority memberships will offer full premium benefits one day at a time.
NVIDIA is also bringing G-SYNC technology to the cloud, raising cloud streaming performance while lowering latency and minimizing stutter for the smoothest gameplay. Paired with new 60 and 120 fps streaming options for GFN Reflex mode, it makes cloud gaming experiences nearly indistinguishable from local ones.
Plus, mobile gamers are getting a boost to 1440p resolution on Android phones. And Japan is the newest region to be operated by NVIDIA, which will soon enable gamers across the country to play their favorite PC games in the cloud with Ultimate performance.
Here Come the Games
The GeForce NOW catalog features many of the most popular PC games — over 1,800 titles from Steam, Xbox and supported PC Game Pass titles, Epic Games Store, Ubisoft, GOG.com and other digital stores. Backed by up to GeForce RTX 4080 GPU-class graphics, GeForce NOW is bringing even more top titles to the cloud from celebrated publishers.
The latest games from top developer Blizzard Entertainment — Diablo IV and Overwatch 2 — are coming soon to GeForce NOW. They join the recent release of Call of Duty, the first Activision game in the cloud, as part of a 10-year NVIDIA and Microsoft partnership.
Join the fight for Sanctuary.
Fight the forces of hell while discovering countless abilities to master, legendary loot to gather and nightmarish dungeons full of evil enemies to vanquish in Diablo IV. Experience the campaign solo or with friends in a shared open world as the dark, gripping story unfolds.
Team up and answer the call of heroes in “Overwatch 2.”
Team up and answer the call of heroes in Overwatch 2, a free-to-play shooter featuring 30+ epic heroes, each with game-changing abilities. Join the battle across dozens of futuristic maps inspired by real-world locations and master unique game modes in the always-on, ever-evolving, live game.
Members will soon be able to stream the Steam versions of Diablo IV and Overwatch 2 on nearly any device with the power of a GeForce RTX 4080 rig in the cloud, with support for the Battle.net launcher to follow.
The Astral Express is coming to GeForce NOW.
GeForce NOW also brings top role-playing games to the cloud. The immensely popular Honkai: Star Rail from HoYoverse will soon join Genshin Impact in the cloud. The space-fantasy RPG is set in a diverse universe filled with wonder, adventure and thrills, and expands the library of hit free-to-play titles for members. Plus, members can experience all the latest updates without worrying about download times.
Dinosaurs? Oh my.
Top publisher Capcom is working with NVIDIA to bring more of its hit titles to the cloud, including Exoprimal, an online, team-based action game that pits humanity’s cutting-edge exosuit technology against history’s most ferocious beasts: dinosaurs. Look forward to seeing it in the cloud on Jan. 18.
Ghosts do exist!
Mainframe Industries’ Pax Dei is a highly anticipated social sandbox massively multiplayer online game inspired by legends of the medieval era. It’s planned to release on GeForce NOW when it launches for PC.
Get ready to play these titles and more at high performance coming soon. Ultimate members will be able to stream at up to 4K resolution and 120 frames per second with support for NVIDIA DLSS and Reflex technology, and experience the action even on low-powered devices. Keep an eye out on GFN Thursdays for the latest on their release dates in the cloud.
Don’t Pass This Up
Day Passes, available in early February, will give gamers a fast pass to try out premium membership benefits before committing to one- or six-month memberships that offer better value. The passes provide access to all the same features as Priority and Ultimate memberships for 24 hours.
Day Pass users can experience RTX ON for supported games with Priority and Ultimate Day Passes. And Ultimate Day Pass users gain exclusive access to innovative technologies like NVIDIA DLSS 3.5, full ray tracing and NVIDIA Reflex.
Pssst, pass it on.
These new membership options let gamers freely choose when to tap into the cloud.
The Ultimate Day Pass will be available for $7.99 and the Priority Day Pass for $3.99. The 24 hours of continuous play will begin at purchase. Day Passes can be combined for continued access to GeForce NOW high-performance cloud streaming.
Let That Sync In
NVIDIA continues to push the boundaries for cloud gaming. The Ultimate membership tier introduced many cloud gaming firsts, from 240 fps to ultra-wide streaming, making gameplay with GeForce NOW — streaming from GeForce RTX 4080-powered servers — nearly identical to a local gaming experience.
Get in sync.
Coming soon, cloud G-SYNC technology will raise the bar even further, minimizing stutter and latency with support for variable-refresh-rate monitors and full optimization for G-SYNC-compatible displays. With cloud G-SYNC enabled, GeForce NOW will vary the display’s refresh rate to match the streaming rate, for the smoothest gameplay experience available from the cloud.
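The core idea — driving the display at a rate matched to the stream so frames aren’t repeated or dropped unevenly — can be sketched as below. The VRR window and the multiply-up rule for low frame rates are assumptions for illustration, not NVIDIA’s implementation:

```python
# Hedged sketch of refresh-rate matching on a variable-refresh-rate
# (VRR) display. The 48-240Hz window and the selection rule are
# illustrative assumptions, not NVIDIA's actual algorithm.

def match_refresh(stream_fps: int, vrr_min: int = 48, vrr_max: int = 240) -> int:
    """Return the refresh rate (Hz) to drive a VRR display at.
    Inside the VRR window, match the stream exactly; below it, use the
    smallest integer multiple that lands inside the window."""
    if vrr_min <= stream_fps <= vrr_max:
        return stream_fps
    if stream_fps < vrr_min:
        mult = -(-vrr_min // stream_fps)  # ceiling division
        return min(stream_fps * mult, vrr_max)
    return vrr_max  # above the window: clamp to the panel's maximum

print(match_refresh(120))  # 120 Hz: matched directly
print(match_refresh(30))   # 60 Hz: 2x multiple inside the 48-240 window
```

Matching (or evenly multiplying) the refresh rate is what eliminates the judder a fixed-refresh display shows when stream frames arrive out of step with its scanout.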
Ultimate members can also soon take advantage of expanded NVIDIA Reflex support. Building on last year’s 240 fps streaming at 1080p, they’ll be able to use Reflex in supported titles at up to 4K resolution in the 60 and 120 fps streaming modes, for low-latency gaming on nearly any device. NVIDIA Reflex support is available in top PC games on GeForce NOW, including Call of Duty: Modern Warfare III, Cyberpunk 2077, Diablo IV, Overwatch 2, The Witcher 3: Wild Hunt, Alan Wake 2 and more.
With both Cloud G-SYNC and Reflex, members will feel as if they’re connected directly to GeForce NOW’s RTX 4080 SuperPODs, making their visual experiences smoother, clearer and more immersive than ever.
Mobile Phones Are Now PC Gaming Rigs
Mobile gamers will soon have the option to set streaming resolution to 1440p on Android devices, providing richer graphics on larger screens. Members will be able to turn an Android device into a portable gaming rig with support for quad-high-definition resolution (2,560 x 1,440 pixels), as well as improved keyboard and mouse support.
This offers a glimpse into the future of game streaming, with external displays connected to a mobile device. Using a USB-C docking station, gamers can connect an Android phone to a 1080p or 1440p gaming monitor or TV, with a keyboard and mouse or gamepad.
Paired with a GeForce NOW Ultimate membership, Android phones become portable gaming rigs on which to play the latest triple-A PC games, such as Baldur’s Gate 3, The Finals and Monster Hunter: World. Now anything, even a phone, can be a high-performance gaming rig.
GeForce NOW improves on-the-go streaming, one device at a time.
The above was on display this week at the CES trade show. The demo streams Cyberpunk 2077 and Alan Wake 2 from GeForce NOW servers in Los Angeles to a Samsung Galaxy S23 Ultra phone connected to a 1440p monitor in Las Vegas.
Clouds in Japan
The cloud’s drifting into Japan.
NVIDIA will begin operating GeForce NOW in Japan in the spring, working alongside GeForce NOW Alliance partner KDDI.
Gamers in the region can look forward to Ultimate memberships for the first time, along with all the new games and advancements announced at CES. Visit the page to learn more and sign up for notifications.
With a steady drumbeat of quality games from top publishers, new membership options and the latest NVIDIA technology in the cloud, GeForce NOW is poised to bring another ultimate year of gaming to members.