Speeding adoption of enterprise AI and accelerated computing, Oracle CEO Safra Catz and NVIDIA founder and CEO Jensen Huang discussed their companies’ expanding collaboration in a fireside chat livestreamed today from Oracle CloudWorld in Las Vegas.
Oracle and NVIDIA announced plans to bring NVIDIA’s full accelerated computing stack to Oracle Cloud Infrastructure (OCI). The stack includes NVIDIA AI Enterprise, NVIDIA RAPIDS for Apache Spark and NVIDIA Clara for healthcare.
In addition, OCI will deploy tens of thousands more NVIDIA GPUs to its cloud service, including A100 and upcoming H100 accelerators.
“I’m unbelievably excited to announce our renewed partnership and the expanded capabilities our cloud has,” said Catz to a live and online audience of several thousand customers and developers.
“We’re thrilled you’re bringing your AI solutions to OCI,” she told Huang.
The Power of Two
The combination of Oracle’s heritage in data and its powerful infrastructure with NVIDIA’s expertise in AI will give users traction in facing the tough challenges ahead, Huang said.
“Industries around the world need big benefits from our industry to find ways to do more without needing to spend more or consume more energy,” he said.
Panorama of the crowd at Oracle CloudWorld in Las Vegas.
AI and GPU-accelerated computing are delivering these benefits at a time when traditional methods of increasing performance are slowing, he added.
“Data that you harness to find patterns and relationships can automate the way you work and the products and services you deliver — the next ten years will be some of the most exciting times in our industry,” Huang said.
“I’m confident all workloads will be accelerated for better performance, to drive costs out and for energy efficiency,” he added.
The capability of today’s software and hardware, coming to the cloud, “is something we’ve dreamed about since our early days,” said Catz, who joined Oracle in 1999 and has been its CEO since 2014.
Benefits for Healthcare and Every Industry
“One of the most critical areas is saving lives,” she added, pointing to the two companies’ work in healthcare.
A revolution in digital biology is transforming healthcare from a science-driven industry to one powered by both science and engineering. NVIDIA Clara provides a platform for that work, used by healthcare experts around the world, Huang said.
“We can now use AI to understand the language of proteins and chemicals, all the way to gene screening and quantum chemistry — amazing breakthroughs are happening now,” he said.
AI promises similar advances for every business. The automotive industry, for example, is becoming a tech industry as it discovers its smartphone moment, he said.
“We see this all over with big breakthroughs in natural language processing and large language models that can encode human knowledge to apply to all kinds of skills they were never trained to do,” he said.
Meta today announced its next-generation AI platform, Grand Teton, designed in collaboration with NVIDIA.
Compared to the company’s previous generation Zion EX platform, the Grand Teton system packs in more memory, network bandwidth and compute capacity, said Alexis Bjorlin, vice president of Meta Infrastructure Hardware, at the 2022 OCP Global Summit, an Open Compute Project conference.
AI models are used extensively across Facebook for services such as news feed, content recommendations and hate-speech identification, among many other applications.
“We’re excited to showcase this newest family member here at the summit,” Bjorlin said, adding her thanks to NVIDIA for its deep collaboration on Grand Teton’s design and continued support of OCP.
Designed for Data Center Scale
Named after the 13,000-foot mountain that crowns one of Wyoming’s two national parks, Grand Teton uses NVIDIA H100 Tensor Core GPUs to train and run AI models that are rapidly growing in their size and capabilities, requiring greater compute.
The NVIDIA Hopper architecture, on which the H100 is based, includes a Transformer Engine to accelerate work on these neural networks, which are often called foundation models because they can address an expanding set of applications from natural language processing to healthcare, robotics and more.
The NVIDIA H100 is designed for performance as well as energy efficiency. H100-accelerated servers, when connected with NVIDIA networking across thousands of servers in hyperscale data centers, can be 300x more energy efficient than CPU-only servers.
“NVIDIA Hopper GPUs are built for solving the world’s tough challenges, delivering accelerated computing with greater energy efficiency and improved performance, while adding scale and lowering costs,” said Ian Buck, vice president of hyperscale and high performance computing at NVIDIA. “With Meta sharing the H100-powered Grand Teton platform, system builders around the world will soon have access to an open design for hyperscale data center compute infrastructure to supercharge AI across industries.”
Mountain of a Machine
Grand Teton sports 2x the network bandwidth and 4x the bandwidth between host processors and GPU accelerators compared to Meta’s prior Zion system, Meta said.
The added network bandwidth enables Meta to create larger clusters of systems for training AI models, Bjorlin said. It also packs more memory than Zion to store and run larger AI models.
Simplified Deployment, Increased Reliability
Packing all these capabilities into one integrated server “dramatically simplifies deployment of systems, allowing us to install and provision our fleet much more rapidly, and increase reliability,” said Bjorlin.
Editor’s note: This post is part of our weekly In the NVIDIA Studio series, which celebrates featured artists, offers creative tips and tricks, and demonstrates how NVIDIA Studio technology improves creative workflows. In the coming weeks, we’ll be deep diving on new GeForce RTX 40 Series GPU features, technologies and resources, and how they dramatically accelerate content creation.
Adobe MAX is inspiring artists around the world to bring their ideas to life. The leading creative conference runs through Thursday, Oct. 20, in person and virtually.
Plus, artist Anna Natter transforms 2D photos into full-fidelity 3D assets using the power of AI and state-of-the-art photogrammetry technology this week In the NVIDIA Studio.
The new Adobe features, the latest NVIDIA Studio laptops and more are backed by the October NVIDIA Studio Driver available for download today.
Unleash MAXimum Performance
Press and content creators have been putting the new GeForce RTX 4090 GPU through a wide variety of creative workflows — here’s a sampling of their reviews:
The new GeForce RTX 4090 GPU.
“NVIDIA’s new flagship graphics card brings massive gains in rendering and GPU compute-accelerated content creation.” — Forbes
“GeForce RTX 4090 just puts on a clinic, by absolutely demolishing every other card here. In a lot of cases it’s almost cutting rendering times in half.” — Hardware Canucks
“If you care about rendering performance to the point that you always lock your eyes on a top-end target, then the RTX 4090 is going to prove to be an absolute screamer.” — Techgage
“The NVIDIA GeForce RTX 4090 is more powerful than we even thought possible.” — TechRadar
“As for the 3D performance of Blender and V-Ray, it delivers a nearly 2x performance increase, which makes it undoubtedly the most powerful weapon for content creators.” — XFastest
“NVIDIA has been providing Studio Drivers for GeForce graphics cards, and the addition of dual hardware encoders and other powerful tools helps creators maximize their creativity. We can say it’s a new-gen GPU king suitable for top-notch gamers and creators.” — Techbang
Pick up the GeForce RTX 4090 GPU or a pre-built system today by heading to our Product Finder.
Enjoy MAXimum Creativity
Adobe is all in on the AI revolution, adopting AI-powered features across its lineup of Adobe Creative Cloud and Substance 3D apps. The updates simplify repetitive tasks and make advanced effects accessible.
Creators equipped with GeForce RTX GPUs, especially those part of the new RTX 40 Series, are primed to benefit from remarkable GPU acceleration of AI features in Adobe Creative Cloud.
Adobe Premiere Pro
Adobe Premiere Pro is getting RTX acceleration for AI features, resulting in significant performance boosts on AI effects. For example, the Unsharp Mask filter will run 4.5x faster, and the Posterize Time effect more than 2x faster, than on a CPU (performance measured on an RTX 3090 Ti and Intel i9 12900K).
Adobe Photoshop
The new beta Photo Restoration feature uses AI-powered neural filters to process imagery, add tone and minimize the effects of film grain. Photo Restoration can be applied to a single image or batches of imagery to quickly and conveniently improve the picture quality of an artist’s portfolio.
Photo Restoration adds tone and minimizes the effects of film grain in Adobe Photoshop.
Photoshop’s AI-powered Object Selection Tool allows artists to apply a selection to a particular object within an image. The user can manipulate the selected object, add filters and fine-tune details.
The AI-powered Object Selection Tool in Adobe Photoshop saves artists the trouble of tedious masking.
This saves artists the huge amount of time it takes to mask imagery — and in beta, running on a GeForce RTX 3060 Ti, it is 3x faster than on the Intel UHD Graphics 700 and 4x faster than on the Apple M1 Ultra.
Adobe Photoshop Lightroom Classic
The latest version of Adobe Photoshop Lightroom Classic makes it easy for users to create stunning final images with powerful new AI-powered masking tools.
With just a few clicks, these AI masks can identify and mask key elements within an image, including the main subject, sky and background, and can even select individuals within an image and apply masks to adjust specific areas, such as hair, face, eyes or lips.
Adobe Substance 3D
Substance 3D Modeler is now available in general release. Modeler can help create concept art — it’s perfect for sketching and prototyping, blocking out game levels, crafting detailed characters and props, or sculpting an entire scene in a single app. Its ability to switch between desktop and virtual reality is especially useful, depending on project needs and the artist’s preferred style of working.
The ability to switch between desktop and virtual reality is especially useful in Adobe Substance 3D Modeler.
Substance 3D Sampler added its photogrammetry feature, currently in private beta, which automatically converts photos of real-world objects into textured 3D models without the need to fiddle with sliders or tweak values. With a few clicks, the artist can now create 3D assets. This feature serves as a bridge for 2D artists looking to make the leap to 3D.
Adobe Creative Cloud and Substance 3D
These advancements join the existing lineup of GPU-accelerated and AI-enhanced Adobe apps, with features that continue to evolve and improve:
Adobe Camera RAW — AI-powered Select Objects and Select People masking tools
After Effects — Improved AI-powered Scene Edit Detection and H.264 rendering for faster exports with hardware-accelerated output
Illustrator — Substance 3D materials plugin for faster access to assets and direct export of Universal Scene Description (USD) files
Photoshop Elements — AI-powered Moving Elements add motion to a still image
Premiere Elements — AI-powered Artistic Effects transform clips with effects inspired by famous works of art or popular art styles
Premiere Pro — Auto Color applies intelligent color corrections, such as exposure, white balance and contrast, to enhance video footage, plus GPU-accelerated Lumetri scopes and faster Motion Graphics Templates
Substance 3D Painter — SBSAR Exports for faster exports and custom textures that are easy to plug and play, plus new options to apply blending modes and opacity
Try these features on an NVIDIA Studio system equipped with a GeForce RTX GPU, and experience the ease and speed of RTX-accelerated creation.
October NVIDIA Studio Driver
This NVIDIA Studio Driver provides optimal support for the latest creative applications, including Topaz Sharpen AI and DXO Photo. It also supports the new application updates announced at Adobe MAX, including Premiere Pro, Photoshop, Photoshop Lightroom Classic and more.
Anna Natter, this week’s featured In the NVIDIA Studio artist, is a 3D artist at heart who likes to experiment with different mediums. She has a fascination with AI — both the technology it’s built on and its ever-expanding role in content creation.
“It’s an interesting debate where the ‘art’ starts when it comes to AI,” said Natter. “After almost a year of playing with AI, I’ve been working on developing my own style and figuring out how I can make it mine.”
AI meets RTX-accelerated Photoshop Neural Filters.
In the image above, Natter applied Photoshop Neural Filters, which were accelerated by her GeForce RTX 3090 GPU. “It’s always a good idea to use your own art for filters, so you can give everything a unique touch. So if you ask me if this is my art or not, it 100% is!” said the artist.
Natter has a strong passion for photogrammetry, she said, as virtually anything can be preserved in 3D. Photogrammetry features have the potential to save 3D artists countless hours. “I create hyperrealistic 3D models of real-life objects which I could not have done by hand,” she said. “Well, maybe I could’ve, but it would’ve taken forever.”
The artist even scanned her sweet pup Szikra to create a virtual 3D copy of her that will last forever.
Szikra is forever memorialized in 3D, thanks to the beta photogrammetry feature in Sampler.
To test the private beta photogrammetry feature in Substance 3D Sampler, Natter created this realistic tree model with a single series of images.
2D to 3D made easy with Substance 3D Sampler.
Natter captured a video of a tree in a nearby park in her home country of Germany. The artist then uploaded the footage to Adobe After Effects, exporting the frames into an image sequence. After Effects contains over 30 features accelerated by RTX GPUs, which improved Natter’s workflow.
Once she was happy with the 3D image quality, Natter dropped the model from Substance 3D Sampler into Substance 3D Stager. The artist then applied true-to-life materials and textures to the scene and color matched the details to the scanned model with the Stager color picker.
Selecting areas to apply textures in Adobe Substance 3D Stager.
Natter then lit the scene with a natural outdoor High Dynamic Range Image (HDRI), one of the pre-built environment-lighting options in 3D Stager. “What I really like about the Substance 3D suite is that it cuts the frustration out of my workflow, and I can just do my thing in a flow state, without interruption, because everything is compatible and works together so well,” she said.
Fine details like adding bugs from Adobe Stock helped Natter nail the scene.
The GeForce RTX 3090 GPU accelerated her workflow within 3D Stager, with RTX-accelerated and AI-powered denoising in the viewport unlocking interactivity and smooth movement. When it came time to render, RTX-accelerated ray tracing quickly delivered photorealistic 3D renders, up to 7x faster than with CPU alone.
“I’ve always had an NVIDIA GPU since I’ve been working in video editing for the past decade and wanted hardware that works best with my apps. The GeForce RTX 3090 has made my life so much easier, and everything gets done so much faster.” — 3D artist Anna Natter
Captions can be easily applied in Adobe Substance 3D Stager.
Natter can’t contain her excitement for the eventual general release of the Sampler photogrammetry feature. “As someone who has invested so much in 3D design, I literally can’t wait to see what people are going to create with this,” she said.
NVIDIA Studio wants to see your 2D to 3D progress!
Join the #From2Dto3D challenge this month for a chance to be featured on the NVIDIA Studio social media channels, like @JennaRambles, whose goldfish sketch was transformed into a beautiful 3D image.
Vehicle appraisals are getting souped up with a GPU-accelerated AI overhaul.
ProovStation, a four-year-old startup based in Lyon, France, is taking on the ambitious computer-vision quest of automating vehicle inspection and repair estimates, aiming AI-driven super-high-resolution stations at businesses worldwide.
It recently launched three of its state-of-the-art vehicle inspection scanners at French retail giant Carrefour’s Montesson, Vénissieux and Aix-en-Provence locations. The ProovStation drive-thru vehicle scanners are deployed at Carrefour parking lots for drivers to pull in to experience the free service.
The self-serve stations are designed so users can provide vehicle info and ride off with a value report and repair estimate in under two minutes. They also enable drivers to obtain a dealer offer to buy their car in just seconds — which holds promise for consumers, as well as used car dealers and auctioneers.
Much is at play across cameras and sensors, high-fidelity graphics, multiple damage detection models, and models and analytics to turn damage detection into repair estimates and purchase offers.
“People often ask me how I’ve gotten so much AI going in this, and I tell them it’s because I work with NVIDIA Inception,” said Gabriel Tissandier, general manager and chief product officer at ProovStation.
Tapping into NVIDIA GPUs and NVIDIA Metropolis software development kits enables ProovStation to scan 5GB of image and sensor data per car and apply multiple vision AI detection models simultaneously, among other tasks.
The setup enables ProovStation to run inference fast enough for quick vehicle-analysis turnarounds in this groundbreaking industrial edge AI application.
Driving Advances: Bernard Groupe Dealerships
ProovStation is deploying its stations at a quick clip. That’s been possible because founder Gabriel Tissandier connected early on with an ideal ally in Cedric Bernard, whose family’s Groupe Bernard car dealerships and services first invested in 2017 to boost its own operations.
Groupe Bernard has collected massive amounts of image data from its own businesses for ProovStation prototypes. Bernard left the family business to join Tissandier as the startup’s co-founder and CEO, along with co-founder Anton Komyza, and it’s been a rapid run of launches since.
ProovStation is a member of NVIDIA Inception, a program that accelerates cutting-edge startups with access to hardware and software platforms, technical training, as well as AI ecosystem support.
Launching AI Stations Across Markets
ProovStation has deployed 35 scanning stations into operation so far, and it expects to double that number next year. It has launched its powerful edge AI-driven stations in Europe and the United States.
Early adopters include Groupe Bernard, U.K. vehicle sales site BCA Marketplace, OK Mobility car rentals in Spain and Germany’s Sixt car rentals. It also works with undisclosed U.S. automakers and a major online vehicle seller.
Car rental service Sixt has installed a station at Lyon-Saint Exupéry Airport with the aim of making car pickups and returns easier.
“Sixt wants to really change the experience of renting a car,” said Tissandier.
Creating an ‘AI Super Factory’ for Damage Datasets
ProovStation has built up data science expertise and a dedicated team to handle its many specialized datasets for the difficult challenge of damage detection.
“To go from a damage review to a damage estimate can sometimes be really tricky,” said Tissandier.
ProovStation has a team of 10 experts in its AI Super Factory dedicated to labeling data with its own specialized software. They have labeled more than 2 million images so far, defining a taxonomy of more than 100 damage types across more than 100 part types.
“We knew we needed this level of accuracy to make it reliable and efficient for businesses. Labeling images is super important, especially for us, so we invented some ways to label specific damages,” he said.
Tissandier said that the data science team members and others are brought up to speed on AI with courses from the NVIDIA Deep Learning Institute.
Delivering Data Collection With NVIDIA Industrial Edge AI
ProovStation scans a vehicle with 10 different cameras in its station, capturing 300 images — or 5GB of data — to feed its detection models. NVIDIA GPUs enable ProovStation’s AI inference pipeline to deliver detection, damage assessment, localization, measurements and estimates in 90 seconds. Wheels are scanned with an electromagnetic frequency device from tire company Michelin for wear estimates. All of it runs on the NVIDIA edge AI system.
Using two NVIDIA GPUs in a station allows ProovStation to process all of this in high-resolution image analysis for improved accuracy. That data is also transferred to the cloud so ProovStation’s data science team can use it for further training.
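The step from damage detection to a repair estimate can be pictured with a toy sketch. All names, taxonomies and costs below are hypothetical illustrations of the general idea, not ProovStation’s actual models or figures:

```python
from dataclasses import dataclass

@dataclass
class Damage:
    part: str        # e.g. "front bumper" (from a taxonomy of 100+ part types)
    kind: str        # e.g. "scratch" (from a taxonomy of 100+ damage types)
    severity: float  # 0.0-1.0 score from a detection model

# Hypothetical flat per-damage base costs in euros; a real system would
# use learned pricing models per part, damage type and market.
REPAIR_COSTS = {"scratch": 80.0, "dent": 220.0, "crack": 450.0}

def estimate_repair(damages):
    """Map a list of detected damages to a total repair estimate,
    scaling each base cost by the detected severity."""
    return sum(REPAIR_COSTS[d.kind] * (0.5 + d.severity) for d in damages)

damages = [Damage("front bumper", "scratch", 0.3),
           Damage("door", "dent", 0.8)]
print(round(estimate_repair(damages), 2))  # 350.0
```

In practice the detection outputs upstream of a step like this come from multiple vision models running in parallel on the station’s GPUs, which is why the per-image labeling taxonomy matters so much.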
Cameras, lighting and positioning are big issues. Detection models can be thrown off by things like glare on shiny, glass-like car surfaces. ProovStation uses a deflectometry model, which runs detection while projecting lines onto vehicle surfaces, highlighting spots where problems distort the lines.
It’s a challenging problem, and solving it opens up business opportunities.
“All of the automotive industry is inspecting cars to provide services — to sell you new tires, to repair your car or windshield, it always starts with an inspection,” said Tissandier.
As a civil engineer, Scott Ashford used explosives to make the ground under Japan’s Sendai airport safer in an earthquake. Now, as the dean of the engineering college at Oregon State University, he’s at ground zero of another seismic event.
In its biggest fundraising celebration in nearly a decade, Oregon State announced plans today for a $200 million center where faculty and students can plug into resources that will include one of the world’s fastest university supercomputers.
The 150,000-square-foot center, due to open in 2025, will accelerate work at Oregon State’s top-ranked programs in agriculture, computer sciences, climate science, forestry, oceanography, robotics, water resources, materials sciences and more with the help of AI.
A Beacon in AI, Robotics
In honor of a $50 million gift to the OSU Foundation from NVIDIA’s founder and CEO and his wife — who earned their engineering degrees at OSU and met in one of its labs — it will be named the Jen-Hsun and Lori Huang Collaborative Innovation Complex (CIC).
“The CIC and new supercomputer will help Oregon State be recognized as one of the world’s leading universities for AI, robotics and simulation,” said Ashford, whose engineering college includes more than 10,000 of OSU’s 35,000 students.
“We discovered our love for computer science and engineering at OSU,” said Jen-Hsun and Lori Huang. “We hope this gift will help inspire future generations of students also to fall in love with technology and its capacity to change the world.
“AI is the most transformative technology of our time,” they added. “To harness this force, engineering students need access to a supercomputer, a time machine, to accelerate their research. This new AI supercomputer will enable OSU students and researchers to make very important advances in climate science, oceanography, materials science, robotics and other fields.”
A Hub for Students
With an extended-reality theater, robotics and drone playground and a do-it-yourself maker space, the new complex is expected to attract students from across the university. “It has the potential to transform not only the college of engineering, but the entire university, and have a positive economic and environmental impact on the state and the nation,” Ashford said.
The three-story facility will include a clean room, as well as labs for materials scientists, environmental researchers and more.
Artist’s rendering of the Jen-Hsun and Lori Huang Collaborative Innovation Complex.
Ashford expects that over the next decade the center will attract top researchers, as well as research projects potentially worth hundreds of millions of dollars.
“Our donors and university leaders are excited about investing in a collaborative, transdisciplinary approach to problem solving and discovery — it will revitalize our engineering triangle and be an amazing place to study and conduct research,” he said.
A Forest of Opportunities
He gave several examples of the center’s potential. Among them:
Environmental and electronics researchers may collaborate to design and deploy sensors and use AI to analyze their data, finding where in the ocean or forest hard-to-track endangered species are breeding so their habitats can be protected.
Students can use augmented reality to train in simulated clean rooms on techniques for making leading-edge chips. Federal and Oregon state officials aim to expand workforce development for the U.S. semiconductor industry, Ashford said.
Robotics researchers could create lifelike simulations of their drones and robots to accelerate training and testing. (Cassie, a biped robot designed at OSU, just made Guinness World Records for the fastest 100-meter dash by a bot.)
Students at OSU and its sister college in Germany, DHBW-Ravensburg, could use NVIDIA Omniverse — a platform for building and operating metaverse applications and connecting their 3D pipelines — to enhance design of their award-winning, autonomous, electric race cars.
Cassie broke a record for a robot running a 100-meter dash.
Building AI Models, Digital Twins
Such efforts will be accelerated with NVIDIA AI and Omniverse, software that can expand the building’s physical labs with simulations and digital twins so every student can have a virtual workbench.
OSU will get state-of-the-art NVIDIA DGX SuperPOD and OVX SuperPOD clusters once the complex’s data center is ready. With an eye on energy efficiency, water that cooled computer racks will then help heat more than 500,000 square feet of campus buildings.
The SuperPOD will likely include a mix of about 60 DGX and OVX systems — powered by next-generation CPUs, GPUs and networking — creating a system powerful enough to train the largest AI models and perform complex digital twin simulations. Ashford notes OSU won a project working with the U.S. Department of Energy because its existing computer center has a handful of DGX systems.
Advancing Diversity, Inclusion
At the Oct. 14 OSU Foundation event announcing the naming of the new complex, Oregon State officials thanked donors and kicked off a university-wide fundraising campaign. OSU has requested support from the state of Oregon for construction of the building and seeks additional philanthropic investments to expand its research and support its hiring and diversity goals.
OSU’s president, Jayathi Murthy, said the complex provides an opportunity to advance diversity, equity and inclusion in the university’s STEM education and research. OSU’s engineering college is already among the top-ranked U.S. schools for tenured or tenure-track engineering faculty who are women.
AI Universities Sprout
Oregon State also is among a small but growing set of universities accelerating their journeys in AI and high performance computing.
A recent whitepaper described efforts at the University of Florida to spread AI across its curriculum as part of a partnership with NVIDIA that enabled it to install HiPerGator, a DGX SuperPOD based on NVIDIA DGX A100 systems with NVIDIA A100 Tensor Core GPUs.
Following Florida’s example, Southern Methodist University announced last fall its plans to make the Dallas area a hub of AI development around its new DGX SuperPOD.
“We’re seeing a lot of interest in the idea of AI universities from Asia, Europe and across the U.S.,” said Cheryl Martin, who leads NVIDIA’s efforts in higher education research.
One of OSU’s autonomous race cars rounds the track.
When it comes to reimagining the next generation of automotive, NIO is thinking outside the car.
This month, the China-based electric vehicle maker introduced its lineup to four new countries in Europe — Denmark, Germany, the Netherlands and Sweden — along with an innovative subscription-based ownership model. The countries join NIO’s customer base in China and Norway.
The models launching in the European market are all built on the NIO Adam supercomputer, which uses four NVIDIA DRIVE Orin systems-on-a-chip to deliver software-defined AI features.
These intelligent capabilities, which will gradually enable automated driving on expressways and in urban areas, as well as autonomous parking and battery swapping, are just the start of NIO’s fresh take on the vehicle ownership experience.
As the automaker expands its footprint, it is emphasizing membership rather than pure ownership. NIO vehicles are available via flexible subscription models, and customers can access club spaces, called NIO Houses, that offer a wide array of amenities.
International Supermodels
NIO’s lineup sports a premium model for every type of driver.
The flagship ET7 sedan boasts a spacious interior, with more than 620 miles of battery range and an impressive 0-to-60 miles per hour in under four seconds. For the mid-size segment, the ET5 is an EV that’s as agile as it is comfortable, borrowing the speed and immersive interior of the flagship in a more compact package.
Courtesy of NIO
Finally, the ES7 — renamed the EL7 for the European market — is an electric SUV for rugged and urban drivers alike. The intelligent EV sports 10 driving modes, including a camping mode for off-road adventures.
Courtesy of NIO
All three models run on the high-performance, centralized Adam supercomputer. With more than 1,000 trillion operations per second of performance provided by four DRIVE Orin SoCs, Adam can power a wide range of intelligent features, with enough headroom to add new capabilities over the air.
Courtesy of NIO
Using multiple SoCs, Adam integrates the redundancy and diversity necessary for safe autonomous operation. The first two SoCs process the 8GB of data produced every second by the vehicle’s sensor set.
The third Orin serves as a backup to ensure the system can operate safely in any situation. And the fourth enables local training, improving the vehicle with fleet learning and personalizing the driving experience based on individual user preferences.
While NIO’s models vary in size and design, they all share the same intelligent DNA, so every customer has access to the cutting edge in AI transportation.
A NIO Way Forward
The NIO experience doesn’t end when the drive is over — it aims to create an entire lifestyle.
Customers in new markets won’t be buying the vehicles outright; they’ll sign leases as long as 60 months or as short as one month. These subscriptions include insurance, maintenance, winter tires, a courtesy car, battery swapping and the option to upgrade battery services.
The purpose of the business model is to offer the utmost flexibility, so customers always have access to the best vehicle for their needs, whatever they may be and however often they may change.
Additionally, every customer has access to the NIO House. This community space offers co-working areas, cafes, workout facilities, playrooms for children and more. NIO Houses exist in more than 80 places around the world, with locations planned for Amsterdam, Berlin, Copenhagen, Düsseldorf, Frankfurt, Gothenburg, Hamburg, Rotterdam and Stockholm.
Courtesy of NIO
Deliveries to the expanded European markets are scheduled to start with the ET7 sedan on Sunday, Oct. 16, with the EL7 and ET5 set to ship in January and March of 2023, respectively.
NVIDIA and Oracle are teaming to make the power of AI accessible to enterprises across industries. These include healthcare, financial services, automotive and a broad range of natural language processing use cases driven by large language models, such as chatbots, personal assistants, document summarization and article completion.
Join NVIDIA and Oracle experts at Oracle CloudWorld, running Oct. 17-20 in Las Vegas, to learn more about technology breakthroughs and steps companies can take to unlock the potential of enterprise data with AI.
Attend the fireside chat featuring NVIDIA founder and CEO Jensen Huang and Oracle CEO Safra Catz taking place on Tuesday, Oct. 18, at 9 a.m. PT to learn how NVIDIA AI is being enabled for enterprises globally on Oracle.
NVIDIA and Oracle bring together all the key ingredients for speeding AI adoption for enterprises: the ability to securely access and manage data within Oracle’s Enterprise Data Management platforms; on-demand access to the massive computational power of NVIDIA-accelerated infrastructure at scale to build and train AI models using this data; and an NVIDIA AI developer performance-optimized stack that simplifies and accelerates building and deploying AI-enabled enterprise products and services at scale.
The NVIDIA AI platform, combined with Oracle Cloud Infrastructure, paves the way to an AI-powered enterprise, regardless of where a business is in its AI adoption journey. The platform offers GPU-accelerated deep learning frameworks, pretrained AI models, enterprise-grade software development kits and application-specific frameworks for various use cases.
Register for Oracle CloudWorld and dive into these sessions and demos to learn more:
Keynote Fireside Address: Driving Impactful Business Results — featuring Huang and Catz, in conversation with leaders of global brands, discovering how they solve complex problems by working with Oracle. This session takes place on Tuesday, Oct. 18, from 9-10:15 a.m. PT.
Oracle Making AI Approachable for Everyone — featuring Ian Buck, vice president of hyperscale and high-performance computing at NVIDIA, and Elad Ziklik, vice president of AI and data science services at Oracle. This session takes place on Tuesday, Oct. 18, from 11-11:45 a.m. PT.
MLOps at Scale With Kubeflow on Oracle Cloud Infrastructure — featuring Richard Wang, senior cloud and machine learning solutions architect at NVIDIA, and Sesh Dehalisan, distinguished cloud architect at Oracle. This session takes place on Tuesday, Oct. 18, from 12:15-1 p.m. PT.
Next-Generation AI Empowering Human Expertise — featuring Bryan Catanzaro, vice president of applied deep learning research at NVIDIA; Erich Elsen, co-founder and head of machine learning at Adept AI; and Rich Clayton, vice president of product strategy for analytics at Oracle. This session takes place on Tuesday, Oct. 18, from 12:30-1 p.m. PT.
NVIDIA’s Migration From On-Premises to MySQL HeatWave — featuring Chris May, senior manager at NVIDIA; Radha Chinnaswamy, consultant at NVIDIA; and Sastry Vedantam, MySQL master principal solution engineer at Oracle. This session takes place on Tuesday, Oct. 18, from 4-4:45 p.m. PT.
Scale Large Language Models With NeMo Megatron — featuring Richard Wang, senior cloud and machine learning solutions architect at NVIDIA; Anup Ojah, senior manager of cloud engineering at Oracle; and Tanina Cadwell, solutions architect at Vyasa Analytics. This session takes place on Wednesday, Oct. 19, from 11:30 a.m. to 12:15 p.m. PT.
Serve ML Models at Scale With Triton Inference Server on OCI — featuring Richard Wang, senior cloud and machine learning solutions architect at NVIDIA, and Joanne Lei, master principal cloud architect at Oracle. This session takes place on Wednesday, Oct. 19, from 1:15-2 p.m. PT.
Accelerating Java on the GPU — featuring Ken Hester, solutions architect director at NVIDIA, and Paul Sandoz, Java architect at Oracle. This session takes place on Thursday, Oct. 20, from 10:15-10:45 a.m. PT.
NVIDIA AI Software for Business Outcomes: Integrating NVIDIA AI Into Your Applications — featuring Kari Briski, vice president of software product management for AI and high-performance computing software development kits at NVIDIA. This session takes place on demand.
Visit NVIDIA’s Oracle CloudWorld showcase page to discover more about NVIDIA and Oracle’s collaboration and innovations for cloud-based solutions.
Alien invasions. Gritty dystopian megacities. Battlefields swarming with superheroes. As one of Hollywood’s top concept artists, Drew Leung can visualize any world you can think of, except one where AI takes his job.
He would know. He’s spent the past few months trying to make it happen, testing every AI tool he could. “If your whole goal is to use AI to replace artists, you’ll find it really disappointing,” Leung said.
Pros and amateurs alike, however, are finding these new tools intriguing. For amateurs — who may barely know which way to hold a paintbrush — AI offers almost miraculous capabilities.
Thanks to AI tools such as Midjourney, OpenAI’s Dall·E, DreamStudio, and open-source software such as Stable Diffusion, AI-generated art is everywhere, spilling out across the globe through social media such as Facebook and Twitter, the tight-knit communities on Reddit and Discord, and image-sharing services like Pinterest and Instagram.
The trend has sparked a heated discussion in the art community. Some are relying on AI to accelerate their creative process — doing in minutes what used to take a day or more, such as instantly generating mood boards with countless iterations on a theme.
Others, citing issues with how the data used to train these systems is collected and managed, are wary. “I’m frustrated because this could be really exciting if done right,” said illustrator and concept artist Karla Ortiz, who currently refuses to use AI for art altogether.
NVIDIA’s creative team provided a taste of what these tools can do in the hands of a skilled artist during NVIDIA founder and CEO Jensen Huang’s keynote at the most recent NVIDIA GTC technology conference.
“Artificial Intelligence, Leonardo da Vinci drawing style,” an image created by NVIDIA’s creative team using the Midjourney AI art tool.
Highlights included a woman representing AI created in the drawing style of Leonardo da Vinci and an image of 19th-century English mathematician Ada Lovelace, considered by many the first computer programmer, holding a modern game controller.
More Mechanical Than Magical
After months of experimentation, Leung — known for his work on more than a score of blockbusters, including Black Panther and Captain America: Civil War — compares AI art tools to a “kaleidoscope” that combines colors and shapes in unexpected ways with a twist of your wrist.
Used that way, some artists say, AI is most interesting when pushed hard enough to break. It instantly reveals visual clichés, because it fails when asked to do things it hasn’t seen before, Leung said.
And because AI tools are fed by vast quantities of data, AI can expose biases across collections of millions of images — such as poor representation of people of color — because it struggles to produce images outside a narrow ideal.
New Technologies, Old Conversations
Such promises and pitfalls put AI at the center of conversations about the intersections of technology and technique, automation and innovation, that have been going on long before AI, or even computers, existed.
After Louis-Jacques-Mandé Daguerre introduced photography to the world in 1839, poet and critic Charles Baudelaire declared photography “art’s most mortal enemy.”
With the motto “You push the button, we do the rest,” George Eastman’s affordable handheld cameras made photography accessible to anyone in 1888. It took years for 19th-century promoter and photographer Alfred Stieglitz, who played a key role in transforming photography into an accepted art form, to come around.
Remaking More Than Art
Over the next century, new technologies like color photography, offset printmaking and digital art inspired new movements, from expressionism to surrealism, pop art to post-modernism.
By the late 20th century, painters had learned to play with the idioms of photography, offset printing and even the line drawings common in instructional manuals to create complex commentaries on the world around them.
The emergence of AI art continues the cycle. And, like the technologies behind past art movements, the transformer models driving it are spurring changes far outside the art world.
First introduced in 2017, transformers are a type of neural network that learns context and, thus, meaning, from data. They’re now among the most vibrant areas for research in AI.
A single pretrained model can perform amazing feats — including text generation, translation and even software programming — and is the basis of the new generation of AI that can turn text into detailed images.
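The attention mechanism at the heart of a transformer can be sketched in a few lines of NumPy. This toy scaled dot-product attention — a single head, with no learned projections and random inputs — only illustrates how each token’s output becomes a context-weighted mix of all the tokens, not a production implementation:

```python
import numpy as np

def attention(Q, K, V):
    # Scaled dot-product attention: each output row is a weighted
    # average of V's rows, with weights set by how strongly the
    # query matches each key (softmax over the key axis).
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ V

rng = np.random.default_rng(0)
tokens = rng.normal(size=(4, 8))          # 4 tokens, 8-dim embeddings
out = attention(tokens, tokens, tokens)   # self-attention: Q = K = V
```

Learning "context and meaning" amounts to training the projections that produce Q, K and V so these weights pick out the right relationships.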
The diffusion models powering AI image tools, such as Dall·E and Dall·E 2, are transformer-based generative models that refine and rearrange pixels again and again until the image matches a user’s text description.
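The refine-again-and-again loop can be sketched with a stand-in for the trained network. This toy `denoise_step` simply pulls noisy pixels toward a fixed target (flat gray) instead of running a real model, and there is no text conditioning — it only illustrates the iterative denoising pattern:

```python
import numpy as np

def denoise_step(image, step, total_steps):
    # Stand-in for a trained denoising network: pull pixels toward a
    # target (here just flat gray) a little harder at every step.
    target = np.full_like(image, 0.5)
    blend = 1.0 / (total_steps - step)
    return image + blend * (target - image)

def generate(shape=(8, 8), total_steps=50, seed=0):
    rng = np.random.default_rng(seed)
    image = rng.normal(size=shape)        # start from pure noise
    for step in range(total_steps):       # refine again and again
        image = denoise_step(image, step, total_steps)
    return image

img = generate()
```

In a real diffusion model, the target direction at each step comes from a neural network conditioned on the user’s text prompt, which is what steers the noise toward a matching image.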
More’s coming. NVIDIA GPUs — the parallel processing engines that make modern AI possible — are being fine-tuned to support ever more powerful applications of the technology.
Introduced earlier this year, the Hopper FP8 Transformer Engine in NVIDIA’s latest GPUs will soon be embedded across vast server farms, in autonomous vehicles and in powerful desktop GPUs.
Intense Conversations
All these possibilities have sparked intense conversations.
Artist Jason Allen ignited a worldwide controversy by winning a contest at the Colorado State Fair with an AI-generated painting.
Attorney Steven Frank has renewed old conversations in art history by using AI to reassess the authenticity of some of the world’s most mysterious artworks, such as “Salvator Mundi,” a painting now attributed to da Vinci.
Philosophers, ethicists and computer scientists such as Ahmed Elgammal at Rutgers University are debating whether it’s possible to separate the techniques AI can mimic from the intentions of the human artists who created them.
Ortiz is among a number raising thorny questions about how the data used to train AI is collected and managed. And once an AI is trained on an image, it can’t unlearn what it’s been trained to do, Ortiz says.
Some, such as New York Times writer Kevin Roose, wonder if AI will eventually start taking away jobs from artists.
Others, such as Jason Scott, an artist and archivist at the Internet Archive, dismiss AI art as “no more dangerous than a fill tool.”
Such whirling conversations — about how new techniques and technologies change how art is made, why art is made, what it depicts, and how art, in turn, remakes us — have always been an element of art. Maybe even the most important element.
“Art is a conversation we are all invited to,” American author Rachel Hartman once wrote.
Ortiz says this means we should be thoughtful. “Are these tools assisting the artist, or are they there to be the artist?” she asked.
It’s a question all of us should ponder. Controversially, anthropologist Eric Gans connects the first act of imbuing physical objects with a special significance or meaning — the first art — to the origin of language itself.
In this context, AI will, inevitably, reshape some of humanity’s oldest conversations. Maybe even our very oldest conversation. The stakes could not be higher.
Featured image: Portrait of futuristic Ada Lovelace playing video games, editorial photography style, created by NVIDIA’s creative team using Midjourney.
High-end PC gaming arrives on more devices this GFN Thursday.
GeForce NOW RTX 3080 members can now stream their favorite PC games at up to 1600p and 120 frames per second in a Chrome browser. No downloads, no installs, just victory.
Even better, NVIDIA has worked with Google to support the newest Chromebooks — the first laptops custom-built for cloud gaming, with gorgeous 1600p-resolution, 120Hz+ displays. They come with a free three-month GeForce NOW RTX 3080 membership, the highest performance tier.
On top of these new ways to play, this GFN Thursday brings hordes of fun with 11 new titles streaming from the cloud — including the Warhammer 40,000: Darktide closed beta, available Oct. 14-16.
High-Performance PC Gaming, Now on Chromebooks
Google’s newest Chromebooks are the first built for cloud gaming, and include GeForce NOW right out of the box.
These new cloud gaming Chromebooks — the Acer Chromebook 516 GE, the ASUS Chromebook Vibe CX55 Flip and the Lenovo Ideapad Gaming Chromebook — all include high refresh rate, high-resolution displays, gaming keyboards, fast Wi-Fi 6 connectivity and immersive audio. And with the GeForce NOW RTX 3080 membership, gamers can instantly stream 1,400+ PC games from the GeForce NOW library at up to 1600p at 120 FPS.
That means Chromebook gamers can jump right into over 100 free-to-play titles, including major franchises like Fortnite, Genshin Impact and League of Legends. RTX 3080 members can explore the worlds of Cyberpunk 2077, Control and more with RTX ON, only through GeForce NOW, and compete online with ultra-low latency and other features built for competitive play.
The GeForce NOW app comes preinstalled on these cloud gaming Chromebooks, so users can jump straight into the gaming — just tap, search, launch and play. Plus, pin games from GeForce NOW right to the app shelf to get back into them with just a click.
For new and existing members, every cloud gaming Chromebook includes a free three-month RTX 3080 membership through the Chromebook Perks program.
Stop! Warhammer Time
Fatshark leaps thousands of years into the future to bring gamers Warhammer 40,000: Darktide on Tuesday, Nov. 30.
Gamers who’ve preordered on Steam can get an early taste of the game with a closed beta period, running Oct. 14-16.
Take back the city of Tertium from hordes of bloodthirsty foes in this intense and brutal action shooter.
Head to the industrial city of Tertium to combat the forces of Chaos, using Vermintide 2’s lauded melee system and a range of deadly Warhammer 40,000 weapons. Personalize your play style with a character-creation system and delve deep into the city to put a stop to the horrors that lurk.
The fun doesn’t stop there. Members can look for these new titles streaming this week:
Asterigos: Curse of the Stars (New release on Steam)
The age of electric vehicles has arrived and, with it, an entirely new standard for premium SUVs.
Polestar, the performance EV brand spun out from Volvo Cars, launched its third model today in Copenhagen. With the Polestar 3, the automaker has taken SUV design back to the drawing board, building a vehicle as innovative as the technology it features.
The EV premieres a new aerodynamic profile from the brand, in addition to sustainable materials and advanced active and passive safety systems. The Polestar 3 also maintains some attributes of a traditional SUV, including a powerful and wide stance.
Courtesy of Polestar
It features a 14.5-inch center display for easily accessible infotainment, in addition to 300 miles of battery range to tackle trips of any distance.
The Polestar 3 is the brand’s first SUV, as well as its first model to run on the high-performance, centralized compute of the NVIDIA DRIVE platform. This software-defined architecture lends the Polestar 3 its cutting-edge personality, making it an SUV that tops the list in every category.
Reigning Supreme
The crown jewel of a software-defined vehicle is its core compute — and the Polestar 3 is built with top-of-the-line hardware and software.
The NVIDIA DRIVE high-performance AI compute platform processes data from the SUV’s multiple sensors and cameras to enable advanced driver-assistance system (ADAS) features and driver monitoring.
Courtesy of Polestar
This ADAS system combines technology from Zenseact, Luminar and Smart Eye that integrates seamlessly thanks to the centralized computing power of NVIDIA DRIVE.
By running on a software-defined architecture, these automated driving features will continue to gain new functionality via over-the-air updates and eventually perform autonomous highway driving.
The Polestar 3 a customer drives home won’t remain the same years or even months later; it will continually improve, achieving capabilities not yet even dreamed of.
Charging Ahead
The Polestar 3 kicks off a new phase for the automaker, which is accelerating its product and international growth plans.
Deliveries of the SUV are slated to begin late next year. Starting with the Polestar 3, the automaker expects to launch a new car every year for the next three years and aims to expand its presence to at least 30 global markets by the end of 2023.
The automaker is targeting 10x growth in global sales, to reach 290,000 vehicles sold by the end of 2025 from about 29,000 in 2021.
And with its future-forward SUV, Polestar is adding a dazzling jewel to its already star-studded crown.