AI in the Hand of the Artist

Humans are wielding AI to create art, and a virtual exhibit that’s part of NVIDIA’s GPU Technology Conference showcases the stunning results.

The AI Art Gallery at NVIDIA GTC features pieces by a broad collection of artists, developers and researchers from around the world who are using AI to push the limits of artistic expression.

When AI is introduced into the artistic process, the artist feeds the machine data and code, explains Heather Schoell, senior art director at NVIDIA, who curated the online exhibit.

Once the output reveals itself, it’s up to the artist to determine if it stands up to their artistic style and desired message, or if the input needs to be adjusted, according to Schoell.

“The output reflects both the artist’s hand and the medium, in this case data, used for creation,” Schoell says.

The exhibit complements what has become the world’s premier AI conference.

GTC, running Oct. 5-9, will bring together researchers from industry and academia, startups and Fortune 500 companies.

So it’s only natural that artists would be among those putting modern AI to work.

“Through this collection we aim to share how the artist can partner with AI as both an artistic medium and creative collaborator,” Schoell explains.

The artists featured in the AI Art Gallery include:

  • Daniel Ambrosi – Dreamscapes fuses computational photography and AI to create a deeply textural environment.
  • Refik Anadol – Machine Hallucinations, by the Turkish-born, Los Angeles-based conceptual artist known for his immersive architectural digital installations, such as a project at New York’s Chelsea Market that used projectors to splash AI-generated images of New York cityscapes, creating what Anadol called a “machine hallucination.”
  • Sofia Crespo and Dark Fractures – Work from the Argentina-born artist and Berlin-based studio led by Feileacan McCormick uses GANs and NLP models to generate 3D insects in a virtual, digital space.
  • Scott Eaton – An artist, educator and creative technologist residing in London, who combines a deep understanding of human anatomy, traditional art techniques and modern digital tools in his uncanny, figurative artworks.
  • Oxia Palus – The NVIDIA Inception startup will uncover a new masterpiece by Leonardo da Vinci, resurrecting a hidden sketch and reconstructing the painting style of one of the most famous artists of all time.
  • Anna Ridler – Three displays showing images of tulips that change based on Bitcoin’s price, created by the U.K. artist and researcher known for her work exploring the intersection of machine learning, nature and history.
  • Helena Sarin – Using her own drawings, sketches and photographs as datasets, Sarin trains her models to generate new visuals that serve as the basis of her compositions — in this case with a type of neural network known as a generative adversarial network, or GAN (a minimal, illustrative sketch of this kind of training loop appears after this list). The Moscow-born artist has embedded 12 of these creations in a book of puns on the acronym GAN.
  • Pindar Van Arman – Driven by a collection of algorithms programmed to work with — and against — one another, the U.S.-based artist and roboticist’s creation uses a paintbrush, paint and canvas to create portraits that fuse the look and feel of a photo and a handmade sketch.
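
For readers curious what training a GAN on an artist’s own images looks like in practice, below is a minimal, illustrative PyTorch sketch: a generator learns to turn random noise into images while a discriminator learns to tell those images apart from the artist’s real scans. The toy resolution, network sizes and stand-in data are assumptions for illustration only; this is not any of the featured artists’ actual pipelines.

    # Minimal GAN training step (toy stand-in, not an artist's real pipeline).
    import torch
    import torch.nn as nn

    latent_dim, img_dim = 64, 32 * 32 * 3                    # assumed toy latent size and image resolution

    G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                      nn.Linear(256, img_dim), nn.Tanh())    # generator: noise -> fake image
    D = nn.Sequential(nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
                      nn.Linear(256, 1))                      # discriminator: image -> real/fake logit

    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
    bce = nn.BCEWithLogitsLoss()

    def train_step(real_batch):
        bsz = real_batch.size(0)
        fake = G(torch.randn(bsz, latent_dim))

        # Discriminator: label real images 1 and generated images 0.
        d_loss = (bce(D(real_batch), torch.ones(bsz, 1)) +
                  bce(D(fake.detach()), torch.zeros(bsz, 1)))
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()

        # Generator: try to make the discriminator score its fakes as real.
        g_loss = bce(D(fake), torch.ones(bsz, 1))
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()
        return d_loss.item(), g_loss.item()

    # Stand-in data; in practice each batch would come from the artist's scanned drawings.
    print(train_step(torch.rand(16, img_dim) * 2 - 1))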

For a closer look, registered GTC attendees can go on a live, personal tour of two of our featured artists’ studios.

On Thursday, Oct. 8, you can virtually tour Van Arman’s Fort Worth, Texas, studio from 11 a.m. to noon Pacific time. And at 2 p.m. Pacific, you can tour Refik Anadol’s Los Angeles studio.

In addition, a pair of panel discussions on Thursday, Oct. 8, with AI Art Gallery artists will explore what led them to connect AI and fine art.

And starting Oct. 5, you can tune in to an on-demand GTC session featuring Oxia Palus co-founder George Cann, a Ph.D. candidate in space and climate physics at University College London.

Join us at the AI Art Gallery.

Register for GTC


Li Auto Aims to Extend Lead in Chinese EV Market with NVIDIA DRIVE

One of the leading EV startups in China is charging up its compute capabilities.

Li Auto announced today it would develop its next generation of electric vehicles using the high-performance, energy-efficient NVIDIA DRIVE AGX Orin. These new vehicles will be developed in collaboration with tier 1 supplier Desay SV and feature advanced autonomous driving capabilities, as well as extended battery range for truly intelligent mobility.

The startup has become a standout brand in China over the past year. Its electric model lineup has led domestic sales of medium and large SUVs for eight consecutive months. With this latest announcement, the automaker aims to extend that lead into autonomous driving.

NVIDIA Orin, the SoC at the heart of the future fleet, achieves 200 TOPS — nearly 7x the performance and 3x the energy efficiency of our previous-generation SoC — and is designed to handle the large number of applications and deep neural networks that run simultaneously for automated and autonomous driving. Orin is also designed to meet systematic safety standards such as ISO 26262 ASIL-D.

This centralized, high-performance system will enable software-defined, intelligent features in Li Auto’s upcoming electric vehicles, making them a smart choice for eco-friendly, safe and convenient driving.

“By cooperating with NVIDIA, Li Auto can benefit from stronger performance and the energy-efficient compute power needed to deliver both advanced driving and fully autonomous driving solutions to market,” said Kai Wang, CTO of Li Auto.

A Software-Defined Architecture

Today, a vehicle’s software functions are powered by dozens of electronic control units, known as ECUs, that are distributed throughout the car. Each is specialized — one unit controls the windows and another the door locks, for example, while others control power steering and braking.

This fixed-function architecture is not compatible with intelligent and autonomous features. These AI-powered capabilities are software-defined, meaning they are constantly improving, and require a hardware architecture that supports frequent upgrades.

Vehicles equipped with NVIDIA Orin have the powerful, centralized compute necessary for this software-defined architecture. The SoC was born out of the data center, built with approximately 17 billion transistors to handle the large number of applications and deep neural networks for autonomous systems and AI-powered cockpits.

The NVIDIA Orin SoC

This high-performance platform will enable Li Auto to become one of the first automakers in China to deploy an independent, advanced autonomous driving system with its next-generation fleet.

The Road Ahead

This announcement is just the first step of a long-term collaboration between NVIDIA and Li Auto.

“The next-generation NVIDIA Orin SoC offers a significant leap in compute performance and energy efficiency,” said Rishi Dhall, vice president of autonomous vehicles at NVIDIA. “NVIDIA works closely with companies like Li Auto to help bring new AI-based autonomous driving capabilities to cutting-edge EVs in China and around the globe.”

By combining NVIDIA’s leadership in AI software and computing with Li Auto’s momentum in the electric vehicle space, these companies will develop vehicles that are better for the environment and safer for everyone.


Meet the Maker: Mr. Fascinate Encourages Kids to Get on the Cool Bus and Study STEM

STEM is dope. That’s the simple message that Justin “Mr. Fascinate” Shaifer evangelizes to young people around the world.

Through social media and other platforms, Shaifer fascinates children with STEM projects — including those that can be created using AI with NVIDIA Jetson products — in hopes that more students from underrepresented groups will be inspired to dive into the field. NVIDIA Jetson embedded systems allow anyone to create their own AI-based projects.

Growing up on Chicago’s South Side, Shaifer didn’t know anyone with a career in STEM he could look up to — at least no one he could relate to. Now, he’s become that role model for thousands of kids, working to prove that STEM is cool and attainable for anyone who has a passion for it.

About the Maker

Shaifer is a STEM advocate, animator and TV host who educates students about the importance of STEM and diversity within it. He has a YouTube channel, gives keynote speeches and hosts the Escape Lab live science show on Twitch.

He’s also the founder of Fascinate Inc., a nonprofit with the mission of exciting underrepresented students about careers in STEM and providing schools and after-school programs with fun science curricula.

The organization also launched the Magic Cool Bus project, filling a real-life bus with cutting-edge tech gadgets and bringing it to schools so students can hop on board and explore.

Growing up in a single-parent home, Shaifer was fascinated by science, earning scholarships from NASA and NOAA that covered his expenses to study marine and environmental science at Hampton University. He’s currently working toward a Ph.D. in science education at Columbia University.

His Inspiration

Shaifer was inspired to transition from being a scientist in a lab to a science educator for others in 2017, while volunteering at a museum in Washington.

“I was freestyle rapping about a carbon cycle exhibit, and this nine-year-old Black kid came up to me and said, ‘What do you do, man?’” said Shaifer.

When Shaifer told him he was a scientist, the child said, “That’s so cool. When I grow up, I want to be a scientist just like you!”

“That made me reflect on the fact that at nine years old, I’d never seen an example of a scientist that looked like me,” said Shaifer. “I realized that students need to be exposed to a role model in STEM that they can identify with, at scale.”

Later that year, Shaifer founded Fascinate Inc.

His Favorite Jetson Projects

Shaifer is passionate about exposing students to the world of AI, and he says using the NVIDIA Jetson platform is a great way to do so.

Watch him highlight Jetson products:

NVIDIA Jetson Xavier NX Unboxing and Impression

NVIDIA SparkFun JetBot AI Kit Unboxing and Impression

One of Shaifer’s favorite real-world applications that uses the NVIDIA Jetson Nano developer kit is Qrio. The bot, created by Agustinus Nalwan, recognizes a toddler’s toy and plays a relevant YouTube video.

“Especially since I work with young kids, I think that’s a really cool application that allows a child to be engaged, interactive and always learning as they play with their toys,” said Shaifer.
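
As a rough illustration of how a project like Qrio fits together, here is a hedged sketch: a pretrained ImageNet classifier labels whatever toy the camera sees, and that label drives a video search. This is not Agustinus Nalwan’s actual code; it assumes torchvision 0.13 or later, a saved snapshot of the toy and an illustrative YouTube search URL, and on a Jetson Nano the same idea would typically run against a live camera feed.

    # Hedged sketch of the Qrio idea, not the original project's code: classify a toy
    # with a pretrained ImageNet model, then open a related kids' video search.
    # Assumes torchvision >= 0.13; the image path and search URL are illustrative.
    import urllib.parse
    import webbrowser

    import torch
    from PIL import Image
    from torchvision import models

    weights = models.ResNet18_Weights.DEFAULT
    model = models.resnet18(weights=weights).eval()
    preprocess = weights.transforms()
    labels = weights.meta["categories"]

    def toy_to_video(image_path):
        frame = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)  # image -> batched tensor
        with torch.no_grad():
            label = labels[model(frame).argmax(1).item()]                       # e.g. "teddy"
        query = urllib.parse.quote(f"{label} song for kids")
        webbrowser.open(f"https://www.youtube.com/results?search_query={query}")
        return label

    # toy_to_video("toy.jpg")   # assumes a snapshot of the toy saved locally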

Where to Learn More 

Get fascinated by STEM on Shaifer’s website and YouTube channel.

Discover tools, inspiration and three easy steps to help kickstart your project with AI on our “Get AI, Learn AI, Build AI” page.


Top Healthcare Innovators Share AI Developments at GTC

Healthcare is under the microscope this year like never before. Hospitals are being asked to do more with less, and researchers are working around the clock to answer pressing questions.

NVIDIA’s GPU Technology Conference brings everything you need to know about the future of AI and HPC in healthcare together in one place.

Innovators across healthcare will come together at the event to share how they are using AI and GPUs to supercharge their medical devices and biomedical research.

Scores of on-demand talks and hands-on training sessions will focus on AI in medical imaging, genomics, drug discovery, medical instruments and smart hospitals.

And advancements powered by GPU acceleration in fields such as imaging, genomics and drug discovery, which are playing a vital role in COVID-19 research, will take center stage at the conference.

There are over 120 healthcare sessions taking place at GTC, which will feature amazing demos, hands-on training, breakthrough research and more from October 5-9.

Turning Months into Minutes for Drug Discovery

AI and HPC are improving speed, accuracy and scalability for drug discovery. Companies and researchers are turning to AI to enhance current methods in the field. Molecular simulations such as docking, free energy perturbation (FEP) and molecular dynamics require a huge amount of computing power. At every phase of drug discovery, researchers are incorporating AI methods to accelerate the process.
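
To give a feel for why these simulations are so compute-hungry, here is a toy sketch, not a production molecular dynamics code: even a single Lennard-Jones energy evaluation touches every pair of atoms, and a real simulation repeats this kind of calculation millions of times per trajectory. The parameters and random atom positions below are assumptions purely for illustration.

    # Toy O(N^2) pairwise energy evaluation; real MD engines use GPUs and neighbor lists.
    import numpy as np

    def lennard_jones_energy(positions, epsilon=1.0, sigma=1.0):
        """Total pairwise Lennard-Jones energy for an (N, 3) array of atom positions."""
        diff = positions[:, None, :] - positions[None, :, :]      # all pairwise displacement vectors
        r = np.linalg.norm(diff, axis=-1)
        iu = np.triu_indices(len(positions), k=1)                  # count each pair once
        sr6 = (sigma / r[iu]) ** 6
        return float(np.sum(4.0 * epsilon * (sr6 ** 2 - sr6)))

    atoms = np.random.default_rng(0).random((1000, 3)) * 10.0      # 1,000 atoms in a 10x10x10 box
    print(lennard_jones_energy(atoms))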

Here are some drug discovery sessions you won’t want to miss:

Architecting the Next Generation of Hospitals

AI can greatly improve hospital efficiency and prevent costs from ballooning. Autonomous robots can help with surgeries, deliver blankets to patients’ rooms and perform automatic check-ins. AI systems can search patient records, monitor blood pressure and oxygen saturation levels, flag thoracic radiology images that show pneumonia, take patient temperatures and notify staff immediately of changes.

Here are some sessions on smart hospitals you won’t want to miss:

Training AI for Medical Imaging

AI models are being developed at a rapid pace to optimize medical imaging analysis for both radiology and pathology. Get exposure to cutting-edge use cases for AI in medical imaging and learn how developers can use the NVIDIA Clara Imaging application framework to deploy their own AI applications.

Building robust AI requires massive amounts of data. In the past, hospitals and medical institutions have struggled to share and combine their local knowledge without compromising patient privacy, but federated learning is making this possible. The learning paradigm enables different clients to securely collaborate, train and contribute to a global model. Register for this session to learn more about federated learning and its use in AI COVID-19 model development from a panel of experts.
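
To make the idea concrete, below is a minimal sketch of federated averaging, the basic recipe behind federated learning: each client runs a few steps of training on its own private data, and only the resulting model weights, never the records themselves, are averaged into the global model. This is a toy with synthetic data, not NVIDIA Clara’s federated learning implementation.

    # Toy federated averaging (FedAvg) over three simulated clients.
    import numpy as np

    def local_update(weights, X, y, lr=0.1, epochs=5):
        """One client's local training: a few epochs of logistic-regression gradient descent."""
        w = weights.copy()
        for _ in range(epochs):
            preds = 1.0 / (1.0 + np.exp(-X @ w))           # sigmoid predictions
            w -= lr * X.T @ (preds - y) / len(y)           # gradient step on the client's own data
        return w

    def federated_round(global_w, clients):
        """Average locally trained weights, weighted by each client's dataset size."""
        sizes = np.array([len(y) for _, y in clients], dtype=float)
        local_ws = [local_update(global_w, X, y) for X, y in clients]
        return np.average(local_ws, axis=0, weights=sizes)

    rng = np.random.default_rng(0)
    clients = [(rng.normal(size=(50, 8)), rng.integers(0, 2, 50).astype(float)) for _ in range(3)]
    w = np.zeros(8)
    for _ in range(10):                                     # ten communication rounds
        w = federated_round(w, clients)
    print(w)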

Must-see medical imaging sessions include:

Accelerating Genomic Analysis

Genomic data is foundational in making precision medicine a reality. As next-generation sequencing becomes more routine, large genomic datasets are becoming more prevalent. Transforming the sequencing data into genetic information is just the first step in a complicated, data-intensive workflow. With high performance computing, genomic analysis is being streamlined and accelerated to enable novel discoveries about the human genome.

Genomic sessions you won’t want to miss include:

The Best of MICCAI at GTC

This year’s GTC also brings attendees the best of MICCAI, a conference focused on cutting-edge deep learning research in medical imaging. Developers will have the opportunity to dive into the papers presented, connect with the researchers at a variety of networking events, and watch on-demand training from the first-ever MONAI Bootcamp, hosted at MICCAI.

Game-Changing Healthcare Startups

Over 70 healthcare AI startups from the NVIDIA Inception program will showcase their latest breakthroughs at GTC. Get inspired by the AI- and HPC-powered technologies these startups are developing for personalized medicine and next-generation clinics.

Here are some Inception member-led talks not to miss:

Make New Connections, Share Ideas

GTC will have new ways to connect with fellow attendees who are blazing the trail for healthcare and biomedical innovation. Join a Dinner with Strangers conversation to network with peers on topics spanning drug discovery, medical imaging, genomics and intelligent instrument development. Or, book a Braindate to have a knowledge-sharing conversation on a topic of your choice with a small group or one-on-one.

Learn more about networking opportunities at GTC.

Brilliant Minds Never Turn Off

GTC will showcase the hard work and groundbreaking discoveries of developers, researchers, engineers, business leaders and technologists from around the world. Nowhere else can you access five days of continuous programming with regionally tailored content. This international event will unveil the future of healthcare technology, all in one place.

Check out the full healthcare session lineup at GTC, including talks from over 80 startups using AI to transform healthcare, and register for the event today.


“Insanely Fast,” “Biggest Generational Leap,” “New High-End Gaming Champion”: Reviewers Rave for GeForce RTX 3080

Reviewers have just finished testing NVIDIA’s new flagship GPU — the GeForce RTX 3080 — and the raves are rolling in.

NVIDIA CEO Jensen Huang promised “a giant step into the future,” when he revealed NVIDIA’s GeForce RTX 30 Series GPUs on Sept. 1.

The NVIDIA Ampere GPU architecture, introduced in May, has already stormed through supercomputing and hyperscale data centers.

But no one knew for sure what the new architecture would be capable of when unleashed on gaming.

Now they do:

The GeForce RTX 30 Series, NVIDIA’s second-generation RTX GPUs, deliver up to 2x the performance and up to 1.9x the power efficiency of previous-generation GPUs.

This leap will deliver incredible performance in upcoming games such as Cyberpunk 2077, Call of Duty: Black Ops Cold War and Watch Dogs: Legion, currently bundled with select GeForce RTX 3080 graphics cards at participating retailers.

In addition to the trio of new GPUs — the flagship GeForce RTX 3080, the GeForce RTX 3070 and the “ferocious” GeForce RTX 3090 — gamers get a slate of new tools.

They include NVIDIA Reflex — which makes competitive gamers quicker; NVIDIA Omniverse Machinima — for those using real-time computer graphics engines to create movies; and NVIDIA Broadcast — which harnesses AI to build virtual broadcast studios for streamers.

And new 2nd Gen Ray Tracing Cores and 3rd Gen Tensor Cores make ray-traced and DLSS-accelerated experiences even faster.

The GeForce RTX 3080 will be available from NVIDIA and our partners starting Sept. 17.


More Space, Less Jam: Transportation Agency Uses NVIDIA DRIVE for Federal Highway Pilot

It could be just a fender bender or an unforeseen rain shower, but a few seconds of disruption can translate to extra minutes or even hours of mind-numbing highway traffic.

But how much of this congestion could be avoided with AI at the wheel?

That’s what the Contra Costa Transportation Authority is working to determine in one of three federally funded automated driving system pilots in the next few years. Using vehicles retrofitted with the NVIDIA DRIVE AGX Pegasus platform, the agency will estimate just how much intelligent transportation can improve the efficiency of everyday commutes.

“As the population grows, there are more demands on roadways and continuing to widen them is just not sustainable,” said Randy Iwasaki, executive director of the CCTA. “We need to find better ways to move people, and autonomous vehicle technology is one way to do that.”

The CCTA was one of eight awardees – and the only local agency – of the Automated Driving System Demonstration Grants Program from the U.S. Department of Transportation, which aims to test the safe integration of self-driving cars into U.S. roads.

The Bay Area agency is using the funds for the highway pilot, as well as two other projects to develop robotaxis equipped with self-docking wheelchair technology and test autonomous shuttles for a local retirement community.

A More Intelligent Interstate

From the 101 to the 405, California is known for its constantly congested highways. In Contra Costa, Interstate 680 is one of those high-traffic corridors, funneling many of the area’s 120,000 daily commuters. This pilot will explore how the Highway Capacity Manual – which sets assumptions for modeling freeway capacity – can be updated to incorporate future automated vehicle technology.

Iwasaki estimates that half of California’s congestion is recurrent, meaning demand for roadways is higher than supply. The other half is non-recurrent and can be attributed to things like weather events, special events — such as concerts or parades — and accidents. By eliminating human driver error, which has been estimated by the National Highway Traffic Safety Administration to be the cause of 94 percent of traffic accidents, the system becomes more efficient and reliable.

Autonomous vehicles don’t get distracted or drowsy, which are two of the biggest causes of human error while driving. They also use redundant and diverse sensors as well as high-definition maps to detect and plan the road ahead much farther than a human driver can.

These attributes make it easier to maintain constant speeds as well as space for vehicles to merge in and out of traffic for a smoother daily commute.

Driving Confidence

The CCTA will be using a fleet of autonomous test vehicles retrofitted with sensors and NVIDIA DRIVE AGX to gauge how much this technology can improve highway capacity.

The NVIDIA DRIVE AGX Pegasus AI compute platform uses the power of two Xavier systems-on-a-chip and two NVIDIA Turing architecture GPUs to achieve an unprecedented 320 trillion operations per second of supercomputing performance. The platform is designed and built for Level 4 and Level 5 autonomous systems, including robotaxis.

NVIDIA DRIVE AGX Pegasus

Iwasaki said the agency tapped NVIDIA for this pilot because the company’s vision matches its own: to solve real problems that haven’t been solved before, using proactive safety measures every step of the way.

With half of adult drivers reporting they’re fearful of self-driving technology, this approach to autonomous vehicles is critical to gaining public acceptance, he said.

“We need to get the word out that this technology is safer and let them know who’s behind making sure it’s safer,” Iwasaki said.


AI From the Sky: Stealth Entrepreneur’s Drone Platform Sees into Mines

Christian Sanz isn’t above trying disguises to sneak into places. He once put on a hard hat, vest and steel-toed boots to get onto the construction site of the San Francisco 49ers football stadium to explore applications for his drone startup.

That bold move scored his first deal.

For the entrepreneur who popularized drones in hackathons in 2012 as founder of the Drone Games matches, starting Skycatch in 2013 was a logical next step.

“We decided to look for more industrial uses, so I went and bought construction gear and was able to blend in, and in many cases people didn’t know I wasn’t working for them as I was collecting data,” Sanz said.

Skycatch has since grown up: In recent years the San Francisco-based company has been providing some of the world’s largest mining and construction companies with its AI-enabled automated drone surveying and analytics platform. The startup, which has landed $47 million in funding, promises customers automated visibility over operations.

At the heart of the platform is the NVIDIA Jetson TX2-driven Edge1 edge computer and base station. It can create 2D maps and 3D point clouds in real time, and pinpoint features to within five-centimeter accuracy. It also runs AI models in the field to detect objects with split-second inference.

Today, Skycatch announced its new Discover1 device. The Discover1 connects to industrial machines, enabling customers to plug in a multitude of sensors that can expand the data gathering of Skycatch.

The Discover1 sports a Jetson Nano inside to facilitate the collection of data from sensors and enable computer vision and machine learning on the edge. The device has LTE and WiFi connectivity to stream data to the cloud.

Change-Tracking AI

Skycatch can capture 3D images of job sites for merging against blueprints to monitor changes.

Such monitoring for one large construction site showed that electrical conduit pipes were installed in the wrong spot. Concrete would be poured next, cementing them in place. Catching the mistake early helped avoid a much costlier revision later.

Skycatch says that customers using its services can expect to compress the timelines on their projects as well as reduce costs by catching errors before they become bigger problems.

Surveying with Speed

Japan’s Komatsu, one of the world’s leading makers of bulldozers, excavators and other industrial machines, is an early customer of Skycatch.

With Japan facing a labor shortage, the equipment maker was looking for ways to help automate its products. One bottleneck was surveying a location, which could take days, before unleashing the machines.

Skycatch automated the process with its drone platform. The result for Komatsu is that less-skilled workers can generate a 3D map of a job site within 30 minutes, enabling operators to get started sooner with the land-moving beasts.

Jetson for AI

As Skycatch was generating massive amounts of data, the company’s founder realized they needed more computing capability to handle it. Also, given the environment in which they were operating, the computing had to be done on the edge while consuming minimal power.

They turned to the Jetson TX2, which provides server-level AI performance using the CUDA-enabled NVIDIA Pascal GPU in a small form factor while drawing as little as 7.5 watts of power. Its high memory bandwidth and wide range of hardware interfaces in a rugged form factor are ideal for the industrial environments Skycatch operates in.

Sanz says that “indexing the physical world” is demanding because of all the unstructured photo and video data, which requires feature extraction to “make sense of it all.”

“When the Jetson TX2 came out, we were super excited. Since 2017, we’ve rewritten our photogrammetry engine to use the CUDA language framework so that we can achieve much faster speed and processing,” Sanz said.
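
As a rough illustration of the kind of data-parallel work such a CUDA-based engine pushes onto the GPU, here is a toy brute-force feature-matching step written with CuPy. The library choice, descriptor sizes and random data are assumptions for readability; Skycatch’s actual photogrammetry engine is not public.

    # Toy sketch, not Skycatch's engine: nearest-neighbor matching of image feature
    # descriptors, a data-parallel photogrammetry step that maps well to the GPU.
    import cupy as cp
    import numpy as np

    def match_features(desc_a, desc_b):
        """For each descriptor in desc_a, return the index of its nearest neighbor in desc_b."""
        a = cp.asarray(desc_a, dtype=cp.float32)         # (N, D) descriptors from image A
        b = cp.asarray(desc_b, dtype=cp.float32)         # (M, D) descriptors from image B
        # Squared distances via ||a - b||^2 = ||a||^2 + ||b||^2 - 2 a.b, computed on the GPU.
        d2 = (a * a).sum(1)[:, None] + (b * b).sum(1)[None, :] - 2.0 * a @ b.T
        return cp.asnumpy(d2.argmin(axis=1))             # best match per descriptor

    rng = np.random.default_rng(0)
    print(match_features(rng.random((2048, 128)), rng.random((2048, 128)))[:10])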

Remote Bulldozers

The Discover1 can collect data right from the shovel of a bulldozer. Inertial measurement unit, or IMU, sensors can be attached to the Discover1 on construction machines to track movements from the bulldozer’s point of view.

One of the largest mining companies in the world uses the Discover1 in pilot tests to help remotely steer its massive mining machines in situations too dangerous for operators.

“Now you can actually enable 3D viewing of the machine to someone who is driving it remotely, which is much more affordable,” Sanz said.

 

Skycatch is a member of NVIDIA Inception, a virtual accelerator program that helps startups in AI and data science get to market faster.


Letter From Jensen: Creating a Premier Company for the Age of AI

NVIDIA founder and CEO Jensen Huang sent the following letter to NVIDIA employees today:

Hi everyone, 

Today, we announced that we have signed a definitive agreement to purchase Arm. 

Thirty years ago, a visionary team of computer scientists in Cambridge, U.K., invented a new CPU architecture optimized for energy-efficiency and a licensing business model that enables broad adoption. Engineers designed Arm CPUs into everything from smartphones and PCs to cloud data centers and supercomputers. An astounding 180 billion computers have been built with Arm — 22 billion last year alone. Arm has become the most popular CPU in the world.   

Simon Segars, its CEO, and the people of Arm have built a great company that has shaped the computer industry and nearly every technology market in the world. 

We are joining arms with Arm to create the leading computing company for the age of AI. AI is the most powerful technology force of our time. Learning from data, AI supercomputers can write software no human can. Amazingly, AI software can perceive its environment, infer the best plan, and act intelligently. This new form of software will expand computing to every corner of the globe. Someday, trillions of computers running AI will create a new internet — the internet-of-things — thousands of times bigger than today’s internet-of-people.   

Uniting NVIDIA’s AI computing with the vast reach of Arm’s CPU, we will engage the giant AI opportunity ahead and advance computing from the cloud, smartphones, PCs, self-driving cars, robotics, 5G, and IoT. 

NVIDIA will bring our world-leading AI technology to Arm’s ecosystem while expanding NVIDIA’s developer reach from 2 million to more than 15 million software programmers. 

Our R&D scale will turbocharge Arm’s roadmap pace and accelerate data center, edge AI, and IoT opportunities.

Arm’s business model is brilliant. We will maintain its open-licensing model and customer neutrality, serving customers in any industry, across the world, and further expand Arm’s IP licensing portfolio with NVIDIA’s world-leading GPU and AI technology. 

Arm’s headquarters will remain in Cambridge and continue to be a cornerstone of the U.K. technology ecosystem. NVIDIA will retain the name and strong brand identity of Arm. Simon and his management team are excited to be joining NVIDIA.

Arm gives us the critical mass to invest in the U.K. We will build a world-class AI research center in Cambridge — the university town of Isaac Newton and Alan Turing, for whom NVIDIA’s Turing GPUs and Isaac robotics platform were named. This NVIDIA research center will be the home of a state-of-the-art AI supercomputer powered by Arm CPUs. The computing infrastructure will be a major attraction for scientists from around the world doing groundbreaking research in healthcare, life sciences, robotics, self-driving cars, and other fields. This center will serve as our European hub to collaborate with universities, industrial partners, and startups. It will also be the NVIDIA Deep Learning Institute for Europe, where we teach the methods of applying this marvelous AI technology.  

The foundation built by Arm and NVIDIA employees has provided this fantastic opportunity to create the leading computing company for the age of AI. The possibilities of our combined companies are beyond exciting.   

I can’t wait. 

Jensen


NVIDIA and Arm to Create World-Class AI Research Center in Cambridge

Artificial intelligence is the most powerful technology force of our time. 

It is the automation of automation, where software writes software. While AI began in the data center, it is moving quickly to the edge — to stores, warehouses, hospitals, streets, and airports, where smart sensors connected to AI computers can speed checkouts, direct forklifts, orchestrate traffic, and save power. In time, there will be trillions of these small autonomous computers powered by AI, connected by massively powerful cloud data centers in every corner of the world.

But in many ways, the field is just getting started. That’s why we are excited to be creating a world-class AI laboratory in Cambridge, at the Arm headquarters: a Hadron collider or Hubble telescope, if you like, for artificial intelligence.  

NVIDIA, together with Arm, is uniquely positioned to launch this effort. NVIDIA is the leader in AI computing, while Arm is present across a vast ecosystem of edge devices, with more than 180 billion units shipped. With this newly announced combination, we are creating the leading computing company for the age of AI. 

Arm is an incredible company and it employs some of the greatest engineering minds in the world. But we believe we can make Arm even more incredible and take it to even higher levels. We want to propel it — and the U.K. — to global AI leadership.

We will create an open center of excellence in the area once home to giants like Isaac Newton and Alan Turing, for whom key NVIDIA technologies are named. Here, leading scientists, engineers and researchers from the U.K. and around the world will come to develop their ideas, collaborate and conduct their ground-breaking work in areas like healthcare, life sciences, self-driving cars and other fields. We want the U.K. to attract the best minds and talent from around the world.

The center in Cambridge will include: 

  • An Arm/NVIDIA-based supercomputer. Expected to be one of the most powerful AI supercomputers in the world, this system will combine state-of-the-art Arm CPUs, NVIDIA’s most advanced GPU technology, and NVIDIA Mellanox DPUs, along with high-performance computing and AI software from NVIDIA and our many partners. For reference, the world’s fastest supercomputer, Fugaku in Japan, is Arm-based, and NVIDIA’s own supercomputer Selene is the seventh most powerful system in the world.
  • Research Fellowships and Partnerships. In this center, NVIDIA will expand research partnerships within the U.K., with academia and industry to conduct research covering leading-edge work in healthcare, autonomous vehicles, robotics, data science and more. NVIDIA already has successful research partnerships with King’s College and Oxford. 
  • AI Training. NVIDIA’s education wing, the Deep Learning Institute, has trained more than 250,000 students on both fundamental and applied AI. NVIDIA will create an institute in Cambridge, and make our curriculum available throughout the U.K. This will provide both young people and mid-career workers with new AI skills, creating job opportunities and preparing the next generation of U.K. developers for AI leadership. 
  • Startup Accelerator. Much of the leading-edge work in AI is done by startups. NVIDIA Inception, a startup accelerator program, has more than 6,000 members — with more than 400 based in the U.K. NVIDIA will further its investment in this area by providing U.K. startups with access to the Arm supercomputer, connections to researchers from NVIDIA and partners, technical training and marketing promotion to help them grow. 
  • Industry Collaboration. The NVIDIA AI research facility will be an open hub for industry collaboration, providing a uniquely powerful center of excellence in Britain. NVIDIA’s industry partnerships include GSK, Oxford Nanopore and other leaders in their fields. From helping to fight COVID-19 to finding new energy sources, NVIDIA is already working with industry across the U.K. today — but we can and will do more. 

We are ambitious. We can’t wait to build on the foundations created by the talented minds of NVIDIA and Arm to make Cambridge the next great AI center for the world. 


Perfect Pairing: NVIDIA’s David Luebke on the Intersection of AI and Graphics

NVIDIA Research comprises more than 200 scientists around the world driving innovation across a range of industries. One of its central figures is David Luebke, who founded the team in 2006 and is now the company’s vice president of graphics research.

Luebke spoke with AI Podcast host Noah Kravitz about what he’s working on. He’s especially focused on the interaction between AI and graphics. Rather than viewing the two as conflicting endeavors, Luebke argues that AI and graphics go together “like peanut butter and jelly.”

NVIDIA Research proved that with StyleGAN2, the second iteration of the generative adversarial network StyleGAN. Trained on high-resolution images, StyleGAN2 takes numerical input and produces realistic portraits.

It creates images comparable to those generated for films, where a single frame can take weeks to render, yet even the first version of StyleGAN takes only 24 milliseconds to produce an image.
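
To see why generation is that fast, here is a toy stand-in, not NVIDIA’s StyleGAN2 code: once a generator has been trained, producing a new image is a single forward pass from a random latent vector, which is why it takes milliseconds rather than weeks. The network shape and image size below are assumptions for illustration.

    # Toy stand-in for the sampling step; a trained generator maps a latent code to an image.
    import torch
    import torch.nn as nn

    class TinyGenerator(nn.Module):                         # vastly simplified synthesis network
        def __init__(self, latent_dim=512, img_size=64):
            super().__init__()
            self.img_size = img_size
            self.net = nn.Sequential(
                nn.Linear(latent_dim, 1024), nn.ReLU(),
                nn.Linear(1024, img_size * img_size * 3), nn.Tanh())

        def forward(self, z):
            return self.net(z).view(-1, 3, self.img_size, self.img_size)

    generator = TinyGenerator().eval()
    z = torch.randn(1, 512)                                 # the "numerical input": a random latent code
    with torch.no_grad():
        image = generator(z)                                # one forward pass produces an image tensor
    print(image.shape)                                      # torch.Size([1, 3, 64, 64])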

Luebke envisions the future of GANs as an even larger collaboration between AI and graphics. He predicts that GANs such as those used in StyleGAN will learn to produce the key elements of graphics: shapes, materials, illumination and even animation.

Key Points From This Episode:

  • AI is especially useful in graphics by replacing or augmenting components of the traditional computer graphics pipeline, from content creation to mesh generation to realistic character animation.
  • Luebke researches a range of topics, one of which is virtual and augmented reality. It was, in fact, what inspired him to pursue graphics research — learning about VR led him to switch majors from chemical engineering.
  • Displays are a major stumbling block in virtual and augmented reality, he says. He emphasizes that VR requires high frame rates, low latency and very high pixel density.

Tweetables:

“Artificial intelligence, deep neural networks — that is the future of computer graphics” — David Luebke [2:34]

“[AI], like a renaissance artist, puzzled out the rules of perspective and rotation” — David Luebke [16:08]

You Might Also Like

NVIDIA Research’s Aaron Lefohn on What’s Next at Intersection of AI and Computer Graphics

Real-time graphics technology, namely, GPUs, sparked the modern AI boom. Now modern AI, driven by GPUs, is remaking graphics. This episode’s guest is Aaron Lefohn, senior director of real-time rendering research at NVIDIA. Aaron’s international team of scientists played a key role in founding the field of AI computer graphics.

GauGAN Rocket Man: Conceptual Artist Uses AI Tools for Sci-Fi Modeling

Ever wondered what it takes to produce the complex imagery in films like Star Wars or Transformers? Here to explain the magic is Colie Wertz, a conceptual artist and modeler who works on film, television and video games. Wertz discusses his specialty of hard modeling, in which he produces digital models of objects with hard surfaces like vehicles, robots and computers.

Cycle of DOOM Now Complete: Researchers Use AI to Generate New Levels for Seminal Video Game

DOOM, of course, is foundational to 3D gaming. 3D gaming, of course, is foundational to GPUs. GPUs, of course, are foundational to deep learning, which is, now, thanks to a team of Italian researchers, two of whom we’re bringing to you with this podcast, being used to make new levels for … DOOM.

Tune in to the AI Podcast

Get the AI Podcast through iTunes, Google Podcasts, Google Play, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn. If your favorite isn’t listed here, drop us a note.

Make the AI Podcast Better

Have a few minutes to spare? Fill out this listener survey. Your answers will help us make a better podcast.
