New Earth Simulator to Take on Planet’s Biggest Challenges

A new supercomputer under construction is designed to tackle some of the planet’s toughest life sciences challenges by speedily crunching vast quantities of environmental data.

The Japan Agency for Marine-Earth Science and Technology, or JAMSTEC, has commissioned tech giant NEC to build the fourth generation of its Earth Simulator. The new system, scheduled to become operational in March, will be based around SX-Aurora TSUBASA vector processors from NEC and NVIDIA A100 Tensor Core GPUs, all connected with NVIDIA Mellanox HDR 200Gb/s InfiniBand networking.

This will give it a maximum theoretical performance of 19.5 petaflops, putting it in the highest echelons of the TOP500 supercomputer rankings.

The new system will benefit from a multi-architecture design, making it suited to various research and development projects in the earth sciences field. In particular, it will act as an execution platform for efficient numerical analysis and information creation, coordinating data relating to the global environment.

Its work will span marine resources, earthquakes and volcanic activity. Scientists will gain deeper insights into cause-and-effect relationships in areas such as crustal movement and earthquakes.

The Earth Simulator will be deployed to predict and mitigate natural disasters, potentially minimizing loss of life and damage in the event of another catastrophe like the earthquake and tsunami that hit Japan in 2011.

Earth Simulator will achieve this by running large-scale simulations at high speed in ways its predecessors couldn’t. The intent is also to have the system play a role in helping governments develop a sustainable socio-economic system.

The new Earth Simulator promises to deliver a multitude of vital environmental information. It also represents a quantum leap in terms of its own environmental footprint.

Earth Simulator 3, launched in 2015, offered a performance of 1.3 petaflops. It was a world beater at the time, outstripping Earth Simulators 1 and 2, launched in 2002 and 2009, respectively.

The fourth-generation model will deliver more than 15x the performance of its predecessor, while keeping the same level of power consumption and requiring around half the footprint. It’s able to achieve these feats thanks to major research and development efforts from NVIDIA and NEC.

The latest processing developments are also integral to the Earth Simulator’s ability to keep up with rising data levels.

Scientific applications for earth and climate modeling generate ever-growing amounts of data, requiring the most advanced computing and network acceleration to give researchers the power they need to simulate and predict our world.

NVIDIA Mellanox HDR 200Gb/s InfiniBand networking with in-network compute acceleration engines, combined with NVIDIA A100 Tensor Core GPUs and NEC SX-Aurora TSUBASA vector processors, gives JAMSTEC a world-leading marine research platform, one critical for expanding earth and climate science and accelerating discoveries.

The post New Earth Simulator to Take on Planet’s Biggest Challenges appeared first on The Official NVIDIA Blog.

Modeled Behavior: dSPACE Introduces High-Fidelity Vehicle Dynamics Simulation on NVIDIA DRIVE Sim

When it comes to autonomous vehicle simulation testing, every detail must be on point.

With its high-fidelity automotive simulation model (ASM) on NVIDIA DRIVE Sim, global automotive supplier dSPACE is helping developers keep virtual self-driving true to the real world. By combining the modularity and openness of the DRIVE Sim simulation software platform with highly accurate vehicle models like dSPACE’s, every minor aspect of an AV can be thoroughly recreated, tested and validated.

The dSPACE ASM vehicle dynamics model makes it possible to simulate elements of the car — suspension, tires, brakes — all the way to the full vehicle powertrain and its interaction with the electronic control units that power actions such as steering, braking and acceleration.

As the world continues to work from home, simulation has become an even more crucial tool in autonomous vehicle development. However, to be effective, it must be able to translate to real-world driving.

dSPACE’s modeling capabilities are key to understanding vehicle behavior in diverse conditions, enabling the exhaustive and high-fidelity testing required for safe self-driving deployment.

Detailed Validation

High-fidelity simulation is more than just a realistic-looking car driving in a recreated traffic scenario. It means in any given situation, the simulated vehicle will behave just as a real vehicle driving in the real world would.

If an autonomous vehicle suddenly brakes on a wet road, there are a range of forces that affect how and where the vehicle stops. It could slide further than intended or fishtail, depending on the weather and road conditions. These possibilities require the ability to simulate dynamics such as friction and yaw, the vehicle’s rotation around its vertical axis.

The dSPACE ASM vehicle dynamics model includes these factors, which can then be compared with a real vehicle in the same scenario. It also tests how the same model acts in different simulation environments, ensuring consistency with both on-road driving and virtual fleet testing.
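The forces at play in the wet-road braking scenario can be approximated with a back-of-the-envelope friction model. This is purely illustrative; a production vehicle dynamics model like ASM captures far more, from tire slip curves to load transfer and ABS behavior:

```python
# Illustrative stopping-distance estimate under a simple friction model.
# A real vehicle dynamics model accounts for far more: load transfer,
# tire slip, ABS behavior, yaw moments, road grade, and so on.

G = 9.81  # gravitational acceleration, m/s^2

def stopping_distance(speed_mps: float, mu: float) -> float:
    """Distance to stop from speed_mps with tire-road friction coefficient mu,
    assuming constant deceleration a = mu * g (flat road, no aerodynamics)."""
    return speed_mps ** 2 / (2 * mu * G)

dry = stopping_distance(27.8, 0.9)   # ~100 km/h on dry asphalt
wet = stopping_distance(27.8, 0.5)   # same speed on a wet road
print(f"dry: {dry:.1f} m, wet: {wet:.1f} m")
```

Even this crude model shows why road condition matters: halving the friction coefficient nearly doubles the stopping distance.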

A Comprehensive and Diverse Platform

The NVIDIA DRIVE Sim platform taps into the computing horsepower of NVIDIA RTX GPUs to deliver a revolutionary, scalable, cloud-based computing platform, capable of generating billions of qualified miles for autonomous vehicle testing.

It’s open, meaning both users and partners can incorporate their own models in simulation for comprehensive and diverse driving scenarios.

dSPACE chose to integrate its vehicle dynamics ASM with DRIVE Sim due to its ability to scale for a wide range of testing conditions. When running on the NVIDIA DRIVE Constellation platform, it can perform both software-in-the-loop and hardware-in-the-loop testing, which includes the in-vehicle AV computer controlling the vehicle in the simulation process. dSPACE’s broad expertise and long track record in hardware-in-the-loop simulation make for a seamless implementation of ASM on DRIVE Constellation.

Learn more about the dSPACE ASM vehicle dynamics in the DRIVE Sim platform at the company’s upcoming GTC session. Register before Sept. 25 to receive Early Bird pricing.

Inception: Exploring the AI Startup Ecosystem with NVIDIA’s Jeff Herbst

Jeff Herbst is a fixture of the AI startup ecosystem. Which makes sense since he’s the VP of business development at NVIDIA and head of NVIDIA Inception, a virtual accelerator that currently has over 6,000 members in a wide range of industries.

Ahead of the GPU Technology Conference, taking place Oct. 5-9, Herbst joined AI Podcast host Noah Kravitz to talk about what opportunities are available to startups at the conference, and how NVIDIA Inception is accelerating startups in every industry.

Herbst, who now has almost two decades at NVIDIA under his belt, studied computer graphics at Brown University and later became a partner at a premier Silicon Valley technology law firm. He’s served as a board member and observer for dozens of startups over his career.

On the podcast, he provides his perspective on the future of the NVIDIA Inception program. As AI continues to expand into every industry, Herbst predicts that more and more startups will incorporate GPU computing.

Those interested can learn more through NVIDIA Inception programming at GTC, which will bring together the world’s leading AI startups and venture capitalists. They’ll participate in activities such as the NVIDIA Inception Premier Showcase, where some of the most innovative AI startups in North America will present, and a fireside chat with Herbst, NVIDIA founder and CEO Jensen Huang, and several CEOs of AI startups.

Key Points From This Episode:

  • Herbst’s interest in supporting an AI startup ecosystem began in 2008 at the NVISION Conference — the precursor to GTC. The conference held an Emerging Company Summit, which brought together startups, reporters and VCs, and made Herbst realize that there were many young companies using GPU computing that could benefit from NVIDIA’s support.
  • Herbst provides listeners with an insider’s perspective on how NVIDIA expanded from computer graphics to the cutting edge of AI and accelerated computing, describing how it was clear from his first days at the company that NVIDIA envisioned a future where GPUs were essential to all industries.

Tweetables:

“We love startups. Startups are the future, especially when you’re working with a new technology like GPU computing and AI” — Jeff Herbst [14:06]

“NVIDIA is a horizontal platform company — we build this amazing platform on which other companies, particularly software companies, can build their businesses” — Jeff Herbst [27:49]

You Might Also Like

AI Startup Brings Computer Vision to Customer Service

When your appliances break, the last thing you want to do is spend an hour on the phone trying to reach a customer service representative. Using computer vision, Drishyam.AI analyzes the issue and communicates directly with manufacturers, rather than going through retail outlets.

How Vincent AI Uses a Generative Adversarial Network to Let You Sketch Like Picasso

If you’ve only ever been able to draw stick figures, this is the application for you. Vincent AI turns scribbles into a work of art inspired by one of seven artistic masters. Listen in to hear from Monty Barlow, machine learning director for Cambridge Consultants — the technology development house behind the app.

A USB Port for Your Body? Startup Uses AI to Connect Medical Devices to Nervous System

Think of it as a USB port for your body. Emil Hewage is the co-founder and CEO at Cambridge Bio-Augmentation Systems, a neural engineering startup. The UK startup is building interfaces that use AI to help plug medical devices into our nervous systems.

Surfing Gravity’s Waves: HPC+AI Hang a Cosmic Ten

Eliu Huerta is harnessing AI and high performance computing (HPC) to observe the cosmos more clearly.

For several years, the astrophysics researcher has been chipping away at a grand challenge, using data to detect signals produced by collisions of black holes and neutron stars. If his next big design for a neural network is successful, astrophysicists will use it to find more black holes and study them in more detail than ever.

Such insights could help answer fundamental questions about the universe. They may even add a few new pages to the physics textbook.

Huerta studies gravitational waves, the echoes from dense stellar remnants that collided long ago and far away. Since Albert Einstein first predicted them in his general theory of relativity, academics had debated whether these ripples in the fabric of space-time really exist.

Researchers ended the debate in 2015 when they observed gravitational waves for the first time. They used pattern-matching techniques on data from the Laser Interferometer Gravitational-Wave Observatory (LIGO), home to some of the most sensitive instruments in science.
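The idea behind those pattern-matching techniques is matched filtering: slide a known waveform template across the noisy detector output and look for a correlation peak. A toy version is sketched below; real LIGO pipelines search large banks of templates over whitened, calibrated strain data:

```python
import numpy as np

# Toy matched filter: slide a known template over noisy data and find
# where the normalized correlation peaks. Real pipelines search large
# template banks over whitened detector strain; this is only a sketch.

rng = np.random.default_rng(0)

t = np.linspace(0, 1, 200)
template = np.sin(2 * np.pi * 8 * t**2) * np.exp(-3 * (1 - t))  # toy "chirp"

data = rng.normal(0, 0.5, 2000)          # detector noise
inject_at = 1200
data[inject_at:inject_at + len(template)] += template  # hidden signal

# Normalized cross-correlation at each offset
tpl = (template - template.mean()) / template.std()
scores = np.array([
    np.dot(data[i:i + len(tpl)], tpl)
    for i in range(len(data) - len(tpl))
])
print("best offset:", int(scores.argmax()))  # close to 1200
```

The cost is that every template in the bank must be correlated against the data, which is what made the search so computationally expensive, and what made a trained neural network such an attractive replacement.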

Detecting Black Holes Faster with AI

Confirming the presence of just one collision took a supercomputer to process data the instruments could gather in a single day. In 2017, Huerta’s team showed how a deep neural network running on an NVIDIA GPU could find gravitational waves with the same accuracy in a fraction of the time.

“We were orders of magnitude faster and we could even see signals the traditional techniques missed and we did not train our neural net for,” said Huerta, who leads AI and gravity groups at the National Center for Supercomputing Applications at the University of Illinois, Urbana-Champaign.

The AI model Huerta used was based on data from tens of thousands of waveforms. He trained it on a single NVIDIA GPU in less than three hours.

Seeing in Detail How Black Holes Spin

This year, Huerta and two of his students created a more sophisticated neural network that can detect how two colliding black holes spin. Their AI model even accurately measured the faint signals of a small black hole when it was merging with a larger one.

It required data on 1.5 million waveforms. An IBM POWER9-based system with 64 NVIDIA V100 Tensor Core GPUs took 12 hours to train the resulting neural network.

To accelerate their work, Huerta’s team got access to 1,536 V100 GPUs on 256 nodes of the IBM AC922 Summit supercomputer at Oak Ridge National Laboratory.

Taking advantage of NVIDIA NVLink, a connection between Summit’s GPUs and its IBM POWER9 CPUs, they trained the AI model in just 1.2 hours.

The results, described in a paper in Physics Letters B, “show how the combination of AI and HPC can solve grand challenges in astrophysics,” he said.

Interestingly, the team’s work is based on WaveNet, a popular AI model for converting text-to-speech. It’s one of many examples of how AI technology that’s rapidly evolving in consumer and enterprise use cases is crossing over to serve the needs of cutting-edge science.
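WaveNet’s signature ingredient, stacks of dilated convolutions whose dilation doubles at each layer, lets a compact network take in long stretches of a time series, whether audio or detector strain. A toy illustration of that receptive-field growth (not the team’s actual architecture):

```python
import numpy as np

# Stacked dilated 1-D convolutions, WaveNet-style: with dilation doubling
# each layer, the receptive field grows exponentially with depth, letting
# a shallow network see long stretches of a waveform. Toy illustration.

def dilated_conv(x, kernel, dilation):
    """Causal dilated convolution: y[i] = sum_k kernel[k] * x[i - k*dilation]."""
    y = np.zeros_like(x)
    for k, w in enumerate(kernel):
        shift = k * dilation
        y[shift:] += w * x[:len(x) - shift] if shift else w * x
    return y

kernel = np.array([0.5, 0.5])
x = np.zeros(64)
x[0] = 1.0                        # unit impulse

out = x
for d in (1, 2, 4, 8):            # four layers, dilations 1, 2, 4, 8
    out = dilated_conv(out, kernel, d)

receptive_field = int(np.flatnonzero(out).max()) + 1
print("receptive field:", receptive_field)   # 1 + (1 + 2 + 4 + 8) = 16
```

Four two-tap layers already see 16 samples back; a few more doublings cover thousands of samples, which is why the architecture suits both speech audio and gravitational waveforms.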

The Next Big Leap into Black Holes

So far, Huerta has used data from supercomputer simulations to detect and describe the primary characteristics of gravitational waves. Over the next year, he aims to use actual LIGO data to capture the more nuanced secondary characteristics of gravitational waves.

“It’s time to go beyond low-hanging fruit and show the combination of HPC and AI can address production-scale problems in astrophysics that neither approach can accomplish separately,” he said.

The new details could help scientists determine more accurately where black holes collided. Such information could help them more accurately calculate the Hubble constant, a measure of how fast the universe is expanding.

The work may require tracking as many as 200 million waveforms, generating training datasets 100x larger than Huerta’s team used so far. The good news is, as part of their July paper, they’ve already determined their algorithms can scale to at least 1,024 nodes on Summit.

Tallying Up the Promise of HPC+AI

Huerta believes he’s just scratching the surface of the promise of HPC+AI. “The datasets will continue to grow, so to run production algorithms you need to go big, there’s no way around that,” he said.

Meanwhile, use of AI is expanding to adjacent areas. The team used neural nets to classify the many, many galaxies found in electromagnetic surveys of the sky, work NVIDIA CEO Jensen Huang highlighted in his GTC keynote in May.

Separately, one of Huerta’s grad students used AI to describe the turbulence when neutron stars merge more efficiently than previous techniques. “It’s another place where we can go into the traditional software stack scientists use and replace an existing model with an accelerated neural network,” Huerta said.

To accelerate the adoption of its work, the team has released as open source code its AI models for cosmology and gravitational wave astrophysics.

“When people read these papers they may think it’s too good to be true, so we let them convince themselves that we are getting the results we reported,” he said.

The Road to Space Started at Home

As is often the case with landmark achievements, there’s a parent to thank.

“My dad was an avid reader. We spent lots of time together doing math and reading books on a wide range of topics,” Huerta recalled.

“When I was 13, he brought home The Meaning of Relativity by Einstein. It was way over my head, but a really interesting read.

“A year or so later he bought A Brief History of Time by Stephen Hawking. I read it and thought it would be great to go to Cambridge and learn about gravity. Years later that actually happened,” he said.

The rest is a history that Huerta is still writing.

For more on Huerta’s work, check out an article from Oak Ridge National Laboratory.

At top: An artist’s impression of gravitational waves generated by binary neutron stars. Credit: R. Hurt, Caltech/NASA Jet Propulsion Lab

AI Scorekeeper: Scotiabank Sharpens the Pencil in Credit Risk

Paul Edwards is helping carry the age-old business of giving loans into the modern era of AI.

Edwards started his career modeling animal behavior as a Ph.D. in numerical ecology. He left his lab coat behind to lead a group of data scientists at Scotiabank, based in Toronto, exploring how machine learning can improve predictions of credit risk.

The team believes machine learning can both make the bank more profitable and help more people who deserve loans get them. They aim to share later this year some of their techniques in hopes of nudging the broader industry forward.

Scorecards Evolve from Pencils to AI

The new tools are being applied to scorecards that date back to the 1950s when calculations were made with paper and pencil. Loan officers would rank applicants’ answers to standard questions, and if the result crossed a set threshold on the scorecard, the bank could grant the loan.

With the rise of computers, banks replaced physical scorecards with digital ones. Decades ago, they settled on a form of statistical modeling called a “weight of evidence logistic regression” that’s widely used today.
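The weight-of-evidence transform at the heart of those models is simple to state: for each answer bin of a question, compare the bin’s share of good borrowers with its share of bad ones. A minimal sketch with made-up counts:

```python
import math

# Minimal weight-of-evidence (WoE) sketch with made-up counts.
# For each bin of a question ("years at current job", say), WoE compares
# the bin's share of good borrowers to its share of bad borrowers.
# Positive WoE = the bin skews good; negative = it skews bad.

bins = {            # (good_count, bad_count) per answer bin -- illustrative
    "<2 years":  (100, 40),
    "2-5 years": (300, 30),
    ">5 years":  (600, 30),
}

total_good = sum(g for g, _ in bins.values())
total_bad = sum(b for _, b in bins.values())

woe = {
    name: math.log((g / total_good) / (b / total_bad))
    for name, (g, b) in bins.items()
}
for name, w in woe.items():
    print(f"{name:>10}: WoE = {w:+.3f}")
```

In a scorecard, each applicant’s answers map to these WoE values, which a logistic regression then weights and sums, keeping every contribution to the final score easy to read off.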

One of the great benefits of scorecards is they’re clear. Banks can easily explain their lending criteria to customers and regulators. That’s why in the field of credit risk, the scorecard is the gold standard for explainable models.

“We could make machine-learning models that are bigger, more complex and more accurate than a scorecard, but somewhere they would cross a line and be too big for me to explain to my boss or a regulator,” said Edwards.

Machine Learning Models Save Millions

So, the team looked for fresh ways to build scorecards with machine learning and found a technique called boosting.

They started with a single question on a tiny scorecard, then added one question at a time. They stopped when adding another question would make the scorecard too complex to explain or wouldn’t improve its performance.

The results were no harder to explain than traditional weight-of-evidence models, but often were more accurate.

“We’ve used boosting to build a couple decision models and found a few percent improvement over weight of evidence. A few percent at the scale of all the bank’s applicants means millions of dollars,” he said.
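The stagewise recipe described above, adding the single most helpful question and stopping once the gain is too small to justify the extra complexity, can be sketched as greedy forward selection. This is illustrative logic only, not the team’s production code, which uses boosted trees via XGBoost:

```python
# Greedy stagewise scorecard building: add the single question that most
# improves a score, stop when no question helps enough (or a cap is hit).
# Illustrative only -- the real models use boosting (XGBoost).

def build_scorecard(questions, evaluate, max_questions=5, min_gain=0.001):
    """questions: candidate feature names; evaluate(subset) -> accuracy."""
    chosen, best = [], evaluate([])
    while len(chosen) < max_questions:
        remaining = [q for q in questions if q not in chosen]
        if not remaining:
            break
        gains = {q: evaluate(chosen + [q]) - best for q in remaining}
        q, gain = max(gains.items(), key=lambda kv: kv[1])
        if gain < min_gain:        # too small to justify extra complexity
            break
        chosen.append(q)
        best += gain
    return chosen, best

# Toy evaluator: each question has a fixed, made-up contribution.
value = {"income": 0.08, "tenure": 0.05, "age": 0.0005, "region": 0.02}
score = lambda subset: 0.70 + sum(value[q] for q in subset)

chosen, acc = build_scorecard(list(value), score)
print(chosen, round(acc, 4))
```

The stopping rule is what keeps the result explainable: the model never grows past the point where each question’s contribution can still be justified to a regulator.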

XGBoost Upgraded to Accelerate Scorecards

Edwards’ team understood the potential to accelerate boosting models because they had been using a popular library called XGBoost on an NVIDIA DGX system. The GPU-accelerated code was very fast, but lacked a feature required to generate scorecards, a key tool they needed to keep their models simple.

Griffin Lacey, a senior data scientist at NVIDIA, worked with his colleagues to identify and add the feature. It’s now part of XGBoost in RAPIDS, a suite of open-source software libraries for running data science on GPUs.

As a result, the bank can now generate scorecards 6x faster using a single GPU compared to what used to require 24 CPUs, setting a new benchmark for the bank. “It ended up being a fairly easy fix, but we could have never done it ourselves,” said Edwards.

GPUs speed up calculating digital scorecards and help the bank lift their accuracy while maintaining the models’ explainability. “When our models are more accurate people who are deserving of credit get the credit they need,” said Edwards.

Riding RAPIDS to the AI Age

Looking ahead, Edwards wants to leverage advances from the last few decades of machine learning to refresh the world of scorecards. For example, his team is working with NVIDIA to build a suite of Python tools for scorecards with features that will be familiar to today’s data scientists.

“The NVIDIA team is helping us pull RAPIDS tools into our workflow for developing scorecards, adding modern amenities like Python support, hyperparameter tuning and GPU acceleration,” Edwards said. “We think in six months we could have example code and recipes to share,” he added.

With such tools, banks could modernize and accelerate the workflow for building scorecards, eliminating the current practice of manually tweaking and testing their parameters. For example, with GPU-accelerated hyperparameter tuning, a developer can let a computer test 100,000 model parameters while she is having her lunch.
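That tuning loop can be sketched as a simple random search. This is a toy stand-in: in practice the evaluation step trains and validates a model, and GPU-accelerated RAPIDS tooling runs many such evaluations in parallel:

```python
import random

# Toy random hyperparameter search: sample candidate settings, keep the
# best-scoring one. In real tuning, score() would train and validate a
# model; here it is a made-up stand-in so the loop structure is visible.

random.seed(42)

search_space = {
    "max_depth": [2, 3, 4, 5],
    "learning_rate": [0.01, 0.05, 0.1, 0.3],
    "n_bins": [8, 16, 32],
}

def sample(space):
    """Draw one random setting from each hyperparameter's candidate list."""
    return {k: random.choice(v) for k, v in space.items()}

def score(params):                       # stand-in for train + validate
    return (0.8
            - abs(params["max_depth"] - 3) * 0.02
            - abs(params["learning_rate"] - 0.05))

best = max((sample(search_space) for _ in range(1000)), key=score)
print(best["max_depth"], best["learning_rate"])
```

Because each candidate is independent, the search parallelizes trivially, which is exactly where GPU acceleration pays off.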

With a much bigger pool to choose from, banks could select scorecards for their accuracy, simplicity, stability or a balance of all these factors. This helps banks ensure their lending decisions are clear and reliable and that good customers get the loans they need.

Digging into Deep Learning

Data scientists at Scotiabank use their DGX system to handle multiple experiments simultaneously. They tune scorecards, run XGBoost and refine deep-learning models. “That’s really improved our workflow,” said Edwards.

“In a way, the best thing we got from buying that system was all the support we got afterwards,” he added, noting new and upcoming RAPIDS features.

Longer term, the team is exploring use of deep learning to more quickly identify customer needs. An experimental model for calculating credit risk already showed a 20 percent performance improvement over the best scorecard, thanks to deep learning.

In addition, an emerging class of generative models can create synthetic datasets that mimic real bank data but contain no information specific to customers. That may open a door to collaborations that speed the pace of innovation.

The work of Edwards’ team reflects the growing interest and adoption of AI in banking.

“Last year, an annual survey of credit risk departments showed every participating bank was at least exploring machine learning and many were using it day-to-day,” Edwards said.

NVIDIA and Oracle Advance AI in Cloud for Enterprises Globally

AI is reshaping markets in extraordinary ways. Soon, every company will be in AI, and will need both speed and scale to power increasingly complex machine learning models.

Accelerating innovation for enterprises around the world, Oracle today announced general availability of bare-metal Oracle Cloud Infrastructure instances featuring the NVIDIA A100 Tensor Core GPU.

NVIDIA founder and CEO Jensen Huang, speaking during the Oracle Live digital launch of the new instance, said: “Oracle is where companies store their enterprise data. We’re going to be able to take this data with no friction at all, run it on Oracle Cloud Infrastructure, conduct data analytics and create data frames that are used for machine learning to learn how to create a predictive model. That model will recommend actions to help companies go faster and make smarter decisions at an unparalleled scale.”

Watch Jensen Huang and Oracle Cloud Infrastructure Executive Vice President Clay Magouyrk discuss AI in the enterprise at Oracle Live.

Hundreds of thousands of enterprises across a broad range of industries store their data in Oracle databases. All of that raw data is ripe for AI analysis with A100 instances running on Oracle Cloud Infrastructure to help companies uncover new business opportunities, understand customer sentiment and create products.

The new Oracle Cloud Infrastructure bare-metal BM.GPU4.8 instance offers eight 40GB NVIDIA A100 GPUs linked via high-speed NVIDIA NVLink direct GPU-to-GPU interconnects. With A100, the world’s most powerful GPU, the Oracle Cloud Infrastructure instance delivers performance gains of up to 6x for customers running diverse AI workloads across training, inference and data science. To power the most demanding applications, the new instance can also scale up with NVIDIA Mellanox networking to provide more than 500 A100 GPUs in a single instance.

NVIDIA Software Accelerates AI and HPC for Oracle Enterprises

Accelerated computing starts with a powerful processor, but software, libraries and algorithms are all essential to an AI ecosystem. Whether it’s computer graphics, simulations like fluid dynamics, genomics processing, or deep learning and data analytics, every field requires its own domain-specific software stack. Oracle is providing NVIDIA’s extensive domain-specific software through the NVIDIA NGC hub of cloud-native, GPU-optimized containers, models and industry-specific software development kits.

“The costs of machine learning are not just on the hardware side,” said Clay Magouyrk, executive vice president of Oracle Cloud Infrastructure. “It’s also about how quickly someone can get spun up with the right tools, how quickly they can get access to the right software. Everything is pre-tuned on these instances so that anybody can show up, rent these GPUs by the hour and get quickly started running machine learning on Oracle Cloud.”

Oracle will also be adding A100 to the Oracle Cloud Infrastructure Data Science platform and providing NVIDIA Deep Neural Network libraries through Oracle Cloud Marketplace to help data scientists run common machine learning and deep learning frameworks, Jupyter Notebooks and Python/R integrated development environments in minutes.

On-Demand Access to the World’s Leading AI Performance

The new Oracle instances make it possible for every enterprise to have access to the world’s most powerful computing in the cloud. A100 delivers up to 20x more peak AI performance than its predecessors with TF32 operations and sparsity technology running on third-generation Tensor Cores. The world’s largest 7nm processor, A100 is incredibly elastic and cost-effective.

The flexible performance of A100 and Mellanox RDMA over Converged Ethernet networking makes the new Oracle Cloud Infrastructure instance ideal for critical drug discovery research, improving customer service through conversational AI, and enabling designers to model and build safer products, to highlight a few examples.

AI Acceleration for Workloads of All Sizes, Companies in All Stages

New businesses can access the power of A100 performance through the NVIDIA Inception and Oracle for Startups accelerator programs, which provide free Oracle Cloud credits for NVIDIA A100 and V100 GPU instances, special pricing, invaluable networking and expertise, marketing opportunities and more.

Oracle will soon introduce virtual machine instances providing one, two or four A100 GPUs per VM, and provide heterogeneous cluster networks of up to 512 A100 GPUs featuring bare-metal A100 GPU instances blended with Intel CPUs. Enterprises interested in accelerating their workloads with Oracle’s new A100 instance can get started with Oracle Cloud Infrastructure on Sept. 30.

To learn more about accelerating AI on Oracle Cloud Infrastructure, join Oracle at GTC, Oct. 5-9.

AI in the Hand of the Artist

Humans are wielding AI to create art, and a virtual exhibit that’s part of NVIDIA’s GPU Technology Conference showcases the stunning results.

The AI Art Gallery at NVIDIA GTC features pieces by a broad collection of artists, developers and researchers from around the world who are using AI to push the limits of artistic expression.

When AI is introduced into the artistic process, the artist feeds the machine data and code, explains Heather Schoell, senior art director at NVIDIA, who curated the online exhibit.

Once the output reveals itself, it’s up to the artist to determine if it stands up to their artistic style and desired message, or if the input needs to be adjusted, according to Schoell.

“The output reflects both the artist’s hand and the medium, in this case data, used for creation,” Schoell says.

The exhibit complements what has become the world’s premier AI conference.

GTC, running Oct. 5-9, will bring together researchers from industry and academia, startups and Fortune 500 companies.

So it’s only natural that artists would be among those putting modern AI to work.

“Through this collection we aim to share how the artist can partner with AI as both an artistic medium and creative collaborator,” Schoell explains.

The artists featured in the AI Art Gallery include:

  • Daniel Ambrosi – Dreamscapes fuses computational photography and AI to create a deeply textural environment.
  • Refik Anadol – Machine Hallucinations, by the Turkish-born, Los Angeles-based conceptual artist known for his immersive architectural digital installations, such as a project at New York’s Chelsea Market that used projectors to splash AI-generated images of New York cityscapes to create what Anadol called a “machine hallucination.”
  • Sofia Crespo and Dark Fractures – Work from the Argentina-born artist and Berlin-based studio led by Feileacan McCormick uses GANs and NLP models to generate 3D insects in a virtual, digital space.
  • Scott Eaton – An artist, educator and creative technologist residing in London, who combines a deep understanding of human anatomy, traditional art techniques and modern digital tools in his uncanny, figurative artworks.
  • Oxia Palus – The NVIDIA Inception startup will uncover a new masterpiece by Leonardo da Vinci that resurrects a hidden sketch and reconstructs the painting style from one of the most famous artists of all time.
  • Anna Ridler – Three displays showing images of tulips that change based on Bitcoin’s price, created by the U.K. artist and researcher known for her work exploring the intersection of machine learning, nature and history.
  • Helena Sarin – Using her own drawings, sketches and photographs as datasets, Sarin trains her models to generate new visuals that serve as the basis of her compositions — in this case with a type of neural network known as a generative adversarial network, or GAN. The Moscow-born artist has embedded 12 of these creations in a book of puns on the acronym GAN.
  • Pindar Van Arman – Driven by a collection of algorithms programmed to work with — and against — one another, the U.S.-based artist and roboticist’s creation uses a paintbrush, paint and canvas to create portraits that fuse the look and feel of a photo and a handmade sketch.

For a closer look, registered GTC attendees can go on a live, personal tour of two of our featured artists’ studios.

On Thursday, Oct. 8, you can virtually tour Van Arman’s Fort Worth, Texas, studio from 11 a.m. to 12 p.m. Pacific time. And at 2 p.m. Pacific, you can tour Refik Anadol’s Los Angeles studio.

In addition, a pair of panel discussions, Thursday, Oct. 8, with AI Gallery artists will explore what led them to connect AI and fine art.

And starting Oct. 5, you can tune in to an on-demand GTC session featuring Oxia Palus co-founder George Cann, a Ph.D. candidate in space and climate physics at University College London.

Join us at the AI Art Gallery.

Register for GTC

Li Auto Aims to Extend Lead in Chinese EV Market with NVIDIA DRIVE

One of the leading EV startups in China is charging up its compute capabilities.

Li Auto announced today it would develop its next generation of electric vehicles using the high-performance, energy-efficient NVIDIA DRIVE AGX Orin. These new vehicles, developed in collaboration with tier 1 supplier Desay SV, will feature advanced autonomous driving capabilities as well as extended battery range for truly intelligent mobility.

The startup has become a standout brand in China over the past year. Its electric model lineup has led domestic sales of medium and large SUVs for eight consecutive months. With this latest announcement, the automaker aims to extend that lead into autonomous driving.

NVIDIA Orin, the SoC at the heart of the future fleet, achieves 200 TOPS — nearly 7x the performance and 3x the energy efficiency of our previous generation SoC — and is designed to handle the large number of applications and deep neural networks that run simultaneously for automated and autonomous driving. It is also designed to meet systematic safety standards such as ISO 26262 ASIL-D.

This centralized, high-performance system will enable software-defined, intelligent features in Li Auto’s upcoming electric vehicles, making them a smart choice for eco-friendly, safe and convenient driving.

“By cooperating with NVIDIA, Li Auto can benefit from stronger performance and the energy-efficient compute power needed to deliver both advanced driving and fully autonomous driving solutions to market,” said Kai Wang, CTO of Li Auto.

A Software-Defined Architecture

Today, a vehicle’s software functions are powered by dozens of electronic control units, known as ECUs, that are distributed throughout the car. Each is specialized — one unit controls windows and one the door locks, for example, and others control power steering and braking.

This fixed-function architecture is not compatible with intelligent and autonomous features. These AI-powered capabilities are software-defined, meaning they are constantly improving, and require a hardware architecture that supports frequent upgrades.

Vehicles equipped with NVIDIA Orin have the powerful, centralized compute necessary for this software-defined architecture. The SoC was born out of the data center, built with approximately 17 billion transistors to handle the large number of applications and deep neural networks for autonomous systems and AI-powered cockpits.

The NVIDIA Orin SoC

This high-performance platform will enable Li Auto to become one of the first automakers in China to deploy an independent, advanced autonomous driving system with its next-generation fleet.

The Road Ahead

This announcement is just the first step of a long-term collaboration between NVIDIA and Li Auto.

“The next-generation NVIDIA Orin SoC offers a significant leap in compute performance and energy efficiency,” said Rishi Dhall, vice president of autonomous vehicles at NVIDIA. “NVIDIA works closely with companies like Li Auto to help bring new AI-based autonomous driving capabilities to cutting-edge EVs in China and around the globe.”

By combining NVIDIA’s leadership in AI software and computing with Li Auto’s momentum in the electric vehicle space, the two companies will develop vehicles that are better for the environment and safer for everyone.

Meet the Maker: Mr. Fascinate Encourages Kids to Get on the Cool Bus and Study STEM

STEM is dope. That’s the simple message that Justin “Mr. Fascinate” Shaifer evangelizes to young people around the world.

Through social media and other platforms, Shaifer fascinates children with STEM projects — including those that can be created using AI with NVIDIA Jetson products — in hopes that more students from underrepresented groups will be inspired to dive into the field. NVIDIA Jetson embedded systems allow anyone to create their own AI-based projects.

Growing up on Chicago’s South Side, Shaifer didn’t know anyone with a career in STEM he could look up to — at least no one he could relate to. Now, he’s become that role model for thousands of kids, working to prove that STEM is cool and attainable for anyone who has a passion for it.

About the Maker

Shaifer is a STEM advocate, animator and TV host who educates students about the importance of STEM and diversity within it. He has a YouTube channel, gives keynote speeches and hosts the Escape Lab live science show on Twitch.

He’s also the founder of Fascinate Inc., a nonprofit with the mission of exciting underrepresented students about careers in STEM and providing schools and after-school programs with fun science curricula.

The organization also launched the Magic Cool Bus project, filling a real-life bus with cutting-edge tech gadgets and bringing it to schools so students can hop on board and explore.

Growing up in a single-parent home, Shaifer was fascinated by science, earning scholarships from NASA and NOAA that covered his expenses to study marine and environmental science at Hampton University. He’s currently working toward a Ph.D. in science education at Columbia University.

His Inspiration

Shaifer was inspired to transition from being a scientist in a lab to a science educator for others in 2017, while volunteering at a museum in Washington.

“I was freestyle rapping about a carbon cycle exhibit, and this nine-year-old Black kid came up to me and said, ‘What do you do, man?’” said Shaifer.

When Shaifer told him he was a scientist, the child said, “That’s so cool. When I grow up, I want to be a scientist just like you!”

“That made me reflect on the fact that at nine years old, I’d never seen an example of a scientist that looked like me,” said Shaifer. “I realized that students need to be exposed to a role model in STEM that they can identify with, at scale.”

Later that year, Shaifer founded Fascinate Inc.

His Favorite Jetson Projects

Shaifer is passionate about exposing students to the world of AI, and he says using the NVIDIA Jetson platform is a great way to do so.

Watch him highlight Jetson products:

NVIDIA Jetson Xavier NX Unboxing and Impression

NVIDIA SparkFun JetBot AI Kit Unboxing and Impression

One of Shaifer’s favorite real-world applications that uses the NVIDIA Jetson Nano developer kit is Qrio. The bot, created by Agustinus Nalwan, recognizes a toddler’s toy and plays a relevant YouTube video.

“Especially since I work with young kids, I think that’s a really cool application that allows a child to be engaged, interactive and always learning as they play with their toys,” said Shaifer.
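At its core, a project like Qrio boils down to classifying a camera frame and mapping the recognized label to a video search. The sketch below is purely illustrative: the labels, queries and stubbed-out classifier are hypothetical stand-ins, not Agustinus Nalwan’s actual code, which runs a deep learning model on the Jetson Nano.

```python
# Illustrative sketch of a toy-to-video flow like Qrio's -- the classifier
# is stubbed out so the control flow runs anywhere (hypothetical labels).

def classify_toy(image):
    """Placeholder for an on-device image classifier running on Jetson."""
    # A real implementation would run a CNN on the camera frame here.
    return "toy dinosaur"

def video_query(label):
    """Map a recognized toy label to a child-friendly video search query."""
    queries = {
        "toy dinosaur": "dinosaur songs for kids",
        "toy train": "train videos for toddlers",
    }
    return queries.get(label, f"{label} for kids")

frame = None  # stands in for a captured camera frame
print(video_query(classify_toy(frame)))  # -> dinosaur songs for kids
```

The real project would then hand the query to a video player; the interesting engineering lives in the on-device classifier, which the Jetson Nano is built to run.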

Where to Learn More 

Get fascinated by STEM on Shaifer’s website and YouTube channel.

Discover tools, inspiration and three easy steps to help kickstart your project with AI on our “Get AI, Learn AI, Build AI” page.

Top Healthcare Innovators Share AI Developments at GTC

Healthcare is under the microscope this year like never before. Hospitals are being asked to do more with less, and researchers are working around the clock to answer pressing questions.

NVIDIA’s GPU Technology Conference brings everything you need to know about the future of AI and HPC in healthcare together in one place.

Innovators across healthcare will come together at the event to share how they are using AI and GPUs to supercharge their medical devices and biomedical research.

Scores of on-demand talks and hands-on training sessions will focus on AI in medical imaging, genomics, drug discovery, medical instruments and smart hospitals.

And advancements powered by GPU acceleration in fields such as imaging, genomics and drug discovery, which are playing a vital role in COVID-19 research, will take center stage at the conference.

More than 120 healthcare sessions are taking place at GTC, Oct. 5-9, featuring demos, hands-on training, breakthrough research and more.

Turning Months into Minutes for Drug Discovery

AI and HPC are improving speed, accuracy and scalability for drug discovery. Companies and researchers are turning to AI to enhance current methods in the field. Molecular simulations such as docking, free energy perturbation (FEP) and molecular dynamics require huge amounts of computing power. At every phase of drug discovery, researchers are incorporating AI methods to accelerate the process.

Here are some drug discovery sessions you won’t want to miss:

Architecting the Next Generation of Hospitals

AI can greatly improve hospital efficiency and prevent costs from ballooning. Autonomous robots can help with surgeries, deliver blankets to patients’ rooms and perform automatic check-ins. AI systems can search patient records, monitor blood pressure and oxygen saturation levels, flag thoracic radiology images that show pneumonia, take patient temperatures and notify staff immediately of changes.

Here are some sessions on smart hospitals you won’t want to miss:

Training AI for Medical Imaging

AI models are being developed at a rapid pace to optimize medical imaging analysis for both radiology and pathology. Get exposure to cutting-edge use cases for AI in medical imaging and learn how developers can use the NVIDIA Clara Imaging application framework to deploy their own AI applications.

Building robust AI requires massive amounts of data. In the past, hospitals and medical institutions have struggled to share and combine their local knowledge without compromising patient privacy, but federated learning is making this possible. The learning paradigm enables different clients to securely collaborate on training and contribute to a global model without exchanging raw data. Register for this session to learn more about federated learning and its use in AI-based COVID-19 model development from a panel of experts.
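The core idea behind this paradigm, often called federated averaging, can be sketched in a few lines. The toy below is illustrative only, with made-up data and plain NumPy, not NVIDIA Clara’s actual implementation: each client fits a model on its own private data, and a central server averages the returned weights so raw records never leave the client.

```python
import numpy as np

# Minimal federated averaging (FedAvg) sketch on a linear model.

def local_train(w, X, y, lr=0.1, steps=20):
    """Run a few gradient-descent steps of least squares on local data."""
    w = w.copy()
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_round(w_global, client_data):
    """One round: broadcast the global model, train locally, average."""
    updates = [local_train(w_global, X, y) for X, y in client_data]
    return np.mean(updates, axis=0)  # only weights travel, never raw data

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

# Two "hospitals", each holding its own private dataset
clients = []
for _ in range(2):
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ true_w + 0.01 * rng.normal(size=50)))

w = np.zeros(2)
for _ in range(30):
    w = federated_round(w, clients)
print(np.round(w, 2))  # approximately [ 2. -1.]
```

Production systems such as the one discussed in the session add secure aggregation, weighted averaging by client dataset size, and deep networks in place of the linear model, but the communication pattern is the same.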

Must-see medical imaging sessions include:

Accelerating Genomic Analysis

Genomic data is foundational in making precision medicine a reality. As next-generation sequencing becomes more routine, large genomic datasets are becoming more prevalent. Transforming the sequencing data into genetic information is just the first step in a complicated, data-intensive workflow. With high performance computing, genomic analysis is being streamlined and accelerated to enable novel discoveries about the human genome.

Genomic sessions you won’t want to miss include:

The Best of MICCAI at GTC

This year’s GTC is also bringing attendees the best of MICCAI, a conference focused on cutting-edge deep learning medical imaging research. Developers will have the opportunity to dive into the papers presented, connect with the researchers at a variety of networking opportunities, and watch on-demand trainings from the first-ever MONAI Bootcamp, hosted at MICCAI.

Game-Changing Healthcare Startups

Over 70 healthcare AI startups from the NVIDIA Inception program will showcase their latest breakthroughs at GTC. Get inspired by the AI- and HPC-powered technologies these startups are developing for personalized medicine and next-generation clinics.

Here are some Inception member-led talks not to miss:

Make New Connections, Share Ideas

GTC will have new ways to connect with fellow attendees who are blazing the trail for healthcare and biomedical innovation. Join a Dinner with Strangers conversation to network with peers on topics spanning drug discovery, medical imaging, genomics and intelligent instrument development. Or, book a Braindate to have a knowledge-sharing conversation on a topic of your choice with a small group or one-on-one.

Learn more about networking opportunities at GTC.

Brilliant Minds Never Turn Off

GTC will showcase the hard work and groundbreaking discoveries of developers, researchers, engineers, business leaders and technologists from around the world. Nowhere else can you access five days of continuous programming with regionally tailored content. This international event will unveil the future of healthcare technology, all in one place.

Check out the full healthcare session lineup at GTC, including talks from over 80 startups using AI to transform healthcare, and register for the event today.
