AI in Schools: Sony Reimagines Remote Learning with Artificial Intelligence

Back to school was destined to look different this year.

With the world adapting to COVID-19, safety measures are preventing a return to in-person teaching in many places. At the same time, students learning through conventional video conferencing systems often find the content difficult to read, or see the words on presentation boards blocked by the teacher standing in front of them.

Faced with these challenges, educators at Prefectural University of Hiroshima in Japan envisioned a high-quality remote learning system with additional features not possible with traditional video conferencing.

They chose a distance-learning solution from Sony that links lecturers and students across their three campuses. It uses AI to make it easy for presenters anywhere to engage their audiences and impart information using captivating video. Thanks to these innovations, lecturers at Prefectural University can now teach students simultaneously on three campuses linked by a secure virtual private network.

Sony’s remote learning solution in action, with Edge Analytics Appliance, remote cameras and projectors.

AI Helps Lecturers Get Smarter About Remote Learning

At the heart of Prefectural’s distance learning system is Sony’s REA-C1000 Edge Analytics Appliance, which was developed using the NVIDIA Jetson Edge AI platform. The appliance lets teachers and speakers quickly create dynamic video presentations without using expensive video production gear or learning sophisticated software applications.

Sony’s exclusive AI algorithms run inside the appliance. These deep learning models employ techniques such as automatic tracking, zooming and cropping to allow non-specialists to produce engaging, professional-quality video in real time.

Users simply connect the Edge Analytics Appliance to a camera that can pan, tilt and zoom; a PC; and a display or recording device. In Prefectural’s case, multiple cameras capture what a lecturer writes on the board as well as questions and contributions from students, at resolutions up to full HD depending on the size of the lecture hall.

Managing all of this technology is made simple for the lecturers. A touchscreen panel facilitates intuitive operation of the system without the need for complex adjustment of camera settings.

Teachers Achieve New Levels of Transparency

One of the landmark applications in the Edge Analytics Appliance is handwriting extraction, which lets students experience lectures more fully, rather than having to jot down notes.

The application uses a camera to record text and figures as an instructor writes them by hand on a whiteboard or blackboard, then immediately renders them so they appear to float in front of the instructor.

Students viewing the lecture live from a remote location or from a recording afterward can see and recognize the text and diagrams, even if the original handwriting is unclear or hidden by the instructor’s body. The combined processing power of the compact, energy-efficient Jetson TX2 and Sony’s moving/unmoving object detection technology makes the transformation from the board to the screen seamless.

Handwriting extraction is also customizable: the transparency of the floating text and figures can be adjusted, and faint or hard-to-read characters can be highlighted in color, making them even more legible than the original writing on the board.
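
To make the idea concrete, here is a minimal sketch of handwriting extraction built from generic moving/unmoving-object separation in OpenCV. It is not Sony’s algorithm: the background subtractor, thresholds and the file name lecture.mp4 are illustrative assumptions. The point is simply that writing which stays still can be separated from an instructor who moves, then floated over the live frame with adjustable transparency and color.

```python
# Illustrative sketch of handwriting extraction via moving/unmoving-object
# separation -- NOT Sony's implementation. Assumes a fixed camera aimed at
# a light-colored board and a hypothetical input file.
import cv2
import numpy as np

cap = cv2.VideoCapture("lecture.mp4")          # hypothetical recording
subtractor = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=False)
board = None                                   # running estimate of the static board

ALPHA = 0.7                                    # transparency of the floating strokes
HIGHLIGHT = (0, 0, 255)                        # re-color faint strokes in red (BGR)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    moving = subtractor.apply(frame)           # instructor and other motion
    static_mask = cv2.bitwise_not(moving)      # pixels that are not moving

    # Update the board estimate only where nothing is moving, so the
    # instructor's body never overwrites the stored handwriting.
    if board is None:
        board = gray.copy()
    board = np.where(static_mask > 0, gray, board).astype(np.uint8)

    # Dark pen strokes on a light board -> adaptive threshold.
    strokes = cv2.adaptiveThreshold(board, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                    cv2.THRESH_BINARY_INV, 25, 15)

    # Float the extracted strokes over the live frame with adjustable
    # transparency, tinted so faint writing stays legible.
    overlay = frame.copy()
    overlay[strokes > 0] = HIGHLIGHT
    composite = cv2.addWeighted(overlay, ALPHA, frame, 1 - ALPHA, 0)

    cv2.imshow("handwriting extraction (sketch)", composite)
    if cv2.waitKey(1) == 27:                   # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```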

Create Engaging Content Without Specialist Resources

Another innovative application is chroma key-less CG overlay, which uses state-of-the-art Sony algorithms such as moving-object detection to produce class content without the need for large-scale video editing equipment.

Like a personal greenscreen for presenters, the application seamlessly places the speaker in front of any animations, diagrams or graphs being presented.

Previously, moving-object detection algorithms required for this kind of compositing could only be run on professional workstations. With Jetson TX2, Sony was able to include this powerful deep learning-based feature within the compact, simple design of the Edge Analytics Appliance.
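
A rough sense of how keyless compositing works can be had with OpenCV’s stock background subtraction, sketched below. This is only an approximation of the general technique, not Sony’s deep learning-based implementation; the file names presenter.mp4 and slide.png are hypothetical, and a real system would need far more robust segmentation of the presenter.

```python
# Illustrative chroma-key-less compositing via moving-object detection --
# a sketch of the general technique, not Sony's algorithm. Assumes a static
# camera and hypothetical file names.
import cv2

cap = cv2.VideoCapture("presenter.mp4")        # hypothetical camera feed
slide = cv2.imread("slide.png")                # hypothetical CG/slide content
subtractor = cv2.createBackgroundSubtractorKNN(detectShadows=False)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    graphics = cv2.resize(slide, (frame.shape[1], frame.shape[0]))

    # Foreground mask: pixels belonging to the moving presenter.
    mask = subtractor.apply(frame)
    mask = cv2.medianBlur(mask, 5)             # clean up speckle noise
    mask = cv2.dilate(mask, None, iterations=2)

    # Composite: presenter pixels come from the camera, everything else
    # from the slide -- no green screen required.
    inv = cv2.bitwise_not(mask)
    presenter = cv2.bitwise_and(frame, frame, mask=mask)
    background = cv2.bitwise_and(graphics, graphics, mask=inv)
    composite = cv2.add(presenter, background)

    cv2.imshow("chroma-key-less overlay (sketch)", composite)
    if cv2.waitKey(1) == 27:                   # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```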

A Virtual Camera Operator

The appliance includes numerous additional algorithms, including those for color-pattern matching, shape recognition and pose recognition. These enable features such as:

  • PTZ Auto Tracking — automatically tracks an instructor’s movements and ensures they stay in focus.
  • Focus Area Cropping — crops a specified portion from a video recorded on a single camera and creates effects as if the cropped portion were recorded on another camera. This can be used to generate, for example, a picture-in-picture effect, where an audience can simultaneously see a close-up of the presenter speaking against a wide shot of the rest of the stage.
  • Close Up by Gesture — automatically zooms in on and records students or audience members who stand up in preparation to ask a question.

With the high-performance Jetson platform, the Edge Analytics Appliance can easily handle a wide range of applications like these. The result is like a virtual camera operator that allows people to create engaging, professional-looking video presentations without the expertise or expense previously required to do so.

Officials at Prefectural University of Hiroshima say the new distance learning initiative has already led to greater student and teacher satisfaction with remote learning. Linking the university’s three campuses through the system is also fostering a sense of unity among the campuses.

“We chose Sony’s Edge Analytics Appliance for our new distance learning design because it helps us realize a realistic and comfortable learning environment for students by clearly showing the contents on the board and encouraging discussion. It was also appealing as a cost-effective solution as teachers can simply operate without additional staff,” said Kyousou Kurisu, director of the Public University Corporation, Prefectural University of Hiroshima.

Sony plans to continually update applications available on the Edge Analytics Appliance. So, like any student, the system will only get better over time.

Whether It’s Rembrandt or Toilets, ‘Curiosity About How Things Work’ Is Key to Innovation, CGI Legend Pat Hanrahan Says

You may have never heard of Pat Hanrahan, but you have almost certainly seen his work.

His list of credits includes three Academy Awards, and his work on Pixar’s RenderMan rendering technology enabled Hollywood megahits Toy Story, Finding Nemo, Cars and Jurassic Park.

Hanrahan also founded Tableau Software — snatched up by Salesforce last year for nearly $16 billion — and has mentored countless technology companies as a Stanford professor.

Hanrahan is the most recent winner of the Turing Award, along with his longtime friend and collaborator Ed Catmull, a former president of Pixar and Disney Animation Studios. The award, a Nobel Prize of sorts in computer science, recognized their work in 3D computer graphics and computer-generated imagery.

He spoke Thursday at NTECH, NVIDIA’s annual internal engineering conference. The digital event was followed by a virtual chat between NVIDIA CEO Jensen Huang and Hanrahan, who taught a computer graphics course at NVIDIA’s Silicon Valley campus during its early days.

While the theme of his address was “You Can Be an Innovator,” the main takeaway was that a “curiosity about how things work” is a prerequisite.

Hanrahan said his own curiosity about art, and about how Rembrandt painted flesh tones, led to a discovery. Artists of the Baroque period, he said, built up oil paint in layers, a technique called impasto, to give skin tones depth. This led to his own deeper study of light’s interaction with translucent surfaces.

“Artists, they sort of instinctively figured it out,” he said. “They don’t know about the physics of light transport. Inspired by this whole idea of Rembrandt’s, I came up with a mathematical model.”

Hanrahan said innovative people need to be instinctively curious. He tested that out himself when interviewing job candidates in the early days of Pixar. “I asked everybody that I wanted to hire into the engineering team, ‘How does a toilet work?’ To be honest, most people did not know how their toilet worked,” he said, “and these were engineers.”

At the age of seven, he’d already lifted the back cover of the toilet to find out what made it work.

Hanrahan worked with Steve Jobs at Pixar. Jobs’s curiosity and excitement about touch-capacitive sensors — technology that dated back to the 1970s — would eventually lead to the touch interface of the iPhone, he said.

After the talk, Huang joined the video feed from his increasingly familiar kitchen at home and interviewed Hanrahan. The wide-ranging conversation was like a time machine, with questions and reminiscences looking back 20 years and discussions peering forward to the next 20.

New Earth Simulator to Take on Planet’s Biggest Challenges

A new supercomputer under construction is designed to tackle some of the planet’s toughest life sciences challenges by speedily crunching vast quantities of environmental data.

The Japan Agency for Marine-Earth Science and Technology, or JAMSTEC, has commissioned tech giant NEC to build the fourth generation of its Earth Simulator. The new system, scheduled to become operational in March, will be based around SX-Aurora TSUBASA vector processors from NEC and NVIDIA A100 Tensor Core GPUs, all connected with NVIDIA Mellanox HDR 200Gb/s InfiniBand networking.

This will give it a maximum theoretical performance of 19.5 petaflops, putting it in the highest echelons of the TOP500 supercomputer ratings.

The new system will benefit from a multi-architecture design, making it suited to various research and development projects in the earth sciences field. In particular, it will act as a platform for efficient numerical analysis of data relating to the global environment and for turning that data into usable information.

Its work will span marine resources, earthquakes and volcanic activity. Scientists will gain deeper insights into cause-and-effect relationships in areas such as crustal movement and earthquakes.

The Earth Simulator will be deployed to predict and mitigate natural disasters, potentially minimizing loss of life and damage in the event of another catastrophe like the earthquake and tsunami that hit Japan in 2011.

It will achieve this by running large-scale simulations at high speed in ways previous generations of the Earth Simulator couldn’t. The intent is also to have the system play a role in helping governments develop a sustainable socio-economic system.

The new Earth Simulator promises to deliver a multitude of vital environmental information. It also represents a quantum leap in terms of its own environmental footprint.

Earth Simulator 3, launched in 2015, offered a performance of 1.3 petaflops. It was a world beater at the time, outstripping Earth Simulators 1 and 2, launched in 2002 and 2009, respectively.

The fourth-generation model will deliver more than 15x the performance of its predecessor, while keeping the same level of power consumption and requiring around half the footprint. It’s able to achieve these feats thanks to major research and development efforts from NVIDIA and NEC.

The latest processing developments are also integral to the Earth Simulator’s ability to keep up with rising data levels.

Scientific applications used for earth and climate modelling are generating increasing amounts of data that require the most advanced computing and network acceleration to give researchers the power they need to simulate and predict our world.

NVIDIA Mellanox HDR 200Gb/s InfiniBand networking with in-network compute acceleration engines, combined with NVIDIA A100 Tensor Core GPUs and NEC SX-Aurora TSUBASA processors, gives JAMSTEC a world-leading marine research platform for expanding earth and climate science and accelerating discoveries.

Modeled Behavior: dSPACE Introduces High-Fidelity Vehicle Dynamics Simulation on NVIDIA DRIVE Sim

When it comes to autonomous vehicle simulation testing, every detail must be on point.

With its high-fidelity automotive simulation model (ASM) on NVIDIA DRIVE Sim, global automotive supplier dSPACE is helping developers keep virtual self-driving true to the real world. By combining the modularity and openness of the DRIVE Sim simulation software platform with highly accurate vehicle models like dSPACE’s, every minor aspect of an AV can be thoroughly recreated, tested and validated.

The dSPACE ASM vehicle dynamics model makes it possible to simulate elements of the car — suspension, tires, brakes — all the way to the full vehicle powertrain and its interaction with the electronic control units that power actions such as steering, braking and acceleration.

As the world continues to work from home, simulation has become an even more crucial tool in autonomous vehicle development. However, to be effective, it must be able to translate to real-world driving.

dSPACE’s modeling capabilities are key to understanding vehicle behavior in diverse conditions, enabling the exhaustive and high-fidelity testing required for safe self-driving deployment.

Detailed Validation

High-fidelity simulation is more than just a realistic-looking car driving in a recreated traffic scenario. It means in any given situation, the simulated vehicle will behave just as a real vehicle driving in the real world would.

If an autonomous vehicle suddenly brakes on a wet road, there are a range of forces that affect how and where the vehicle stops. It could slide further than intended or fishtail, depending on the weather and road conditions. These possibilities require the ability to simulate dynamics such as friction and yaw, the way the vehicle rotates around its vertical axis.

The dSPACE ASM vehicle dynamics model includes these factors, which can then be compared with a real vehicle in the same scenario. It also tests how the same model acts in different simulation environments, ensuring consistency with both on-road driving and virtual fleet testing.
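
For a feel of why these factors matter, consider a deliberately simplified longitudinal braking calculation. It has none of the fidelity of the dSPACE ASM model (no load transfer, tire slip or yaw), and the friction coefficients are generic textbook values, but it shows how much surface conditions alone change a stopping distance.

```python
# A deliberately simplified longitudinal braking model -- an illustration of
# why surface friction matters, not the dSPACE ASM vehicle dynamics model.
# The friction coefficients below are generic textbook values.
G = 9.81  # gravitational acceleration, m/s^2

def stopping_distance(speed_kmh: float, mu: float) -> float:
    """Distance to stop from speed_kmh under constant friction mu, assuming
    ideal straight-line braking with no load transfer, slip or yaw."""
    v = speed_kmh / 3.6            # convert km/h to m/s
    return v * v / (2.0 * mu * G)  # d = v^2 / (2 * mu * g)

if __name__ == "__main__":
    # Roughly 49 m on dry asphalt vs. 79 m on wet asphalt from 100 km/h --
    # a gap any credible vehicle dynamics simulation has to reproduce.
    for surface, mu in [("dry asphalt", 0.8), ("wet asphalt", 0.5)]:
        d = stopping_distance(100.0, mu)
        print(f"{surface}: ~{d:.1f} m to stop from 100 km/h (mu = {mu})")
```

A high-fidelity simulator has to capture that gap, and far subtler effects, before its results can stand in for road testing.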

A Comprehensive and Diverse Platform

The NVIDIA DRIVE Sim platform taps into the computing horsepower of NVIDIA RTX GPUs to deliver a revolutionary, scalable, cloud-based computing platform, capable of generating billions of qualified miles for autonomous vehicle testing.

It’s open, meaning both users and partners can incorporate their own models in simulation for comprehensive and diverse driving scenarios.

dSPACE chose to integrate its vehicle dynamics ASM with DRIVE Sim due to its ability to scale for a wide range of testing conditions. When running on the NVIDIA DRIVE Constellation platform, it can perform both software-in-the-loop and hardware-in-the-loop testing, which includes the in-vehicle AV computer controlling the vehicle in the simulation process. dSPACE’s broad expertise and long track-record in hardware-in-the-loop simulation make for a seamless implementation of ASM on DRIVE Constellation.

Learn more about the dSPACE ASM vehicle dynamics model in the DRIVE Sim platform at the company’s upcoming GTC session. Register before Sept. 25 to receive Early Bird pricing.

Inception: Exploring the AI Startup Ecosystem with NVIDIA’s Jeff Herbst

Jeff Herbst is a fixture of the AI startup ecosystem. Which makes sense since he’s the VP of business development at NVIDIA and head of NVIDIA Inception, a virtual accelerator that currently has over 6,000 members in a wide range of industries.

Ahead of the GPU Technology Conference, taking place Oct. 5-9, Herbst joined AI Podcast host Noah Kravitz to talk about what opportunities are available to startups at the conference, and how NVIDIA Inception is accelerating startups in every industry.

Herbst, who now has almost two decades at NVIDIA under his belt, studied computer graphics at Brown University and later became a partner at a premier Silicon Valley technology law firm. He’s served as a board member and observer for dozens of startups over his career.

On the podcast, he provides his perspective on the future of the NVIDIA Inception program. As AI continues to expand into every industry, Herbst predicts that more and more startups will incorporate GPU computing.

Those interested can learn more through NVIDIA Inception programming at GTC, which will bring together the world’s leading AI startups and venture capitalists. They’ll participate in activities such as the NVIDIA Inception Premier Showcase, where some of the most innovative AI startups in North America will present, and a fireside chat with Herbst, NVIDIA founder and CEO Jensen Huang, and several CEOs of AI startups.

Key Points From This Episode:

  • Herbst’s interest in supporting an AI startup ecosystem began in 2008 at the NVISION Conference — the precursor to GTC. The conference held an Emerging Company Summit, which brought together startups, reporters and VCs, and made Herbst realize that there were many young companies using GPU computing that could benefit from NVIDIA’s support.
  • Herbst provides listeners with an insider’s perspective on how NVIDIA expanded from computer graphics to the cutting edge of AI and accelerated computing, describing how it was clear from his first days at the company that NVIDIA envisioned a future where GPUs were essential to all industries.

Tweetables:

“We love startups. Startups are the future, especially when you’re working with a new technology like GPU computing and AI” — Jeff Herbst [14:06]

“NVIDIA is a horizontal platform company — we build this amazing platform on which other companies, particularly software companies, can build their businesses” — Jeff Herbst [27:49]

You Might Also Like

AI Startup Brings Computer Vision to Customer Service

When your appliances break, the last thing you want to do is spend an hour on the phone trying to reach a customer service representative. Using computer vision, Drishyam.AI analyzes the issue and communicates directly with manufacturers, rather than going through retail outlets.

How Vincent AI Uses a Generative Adversarial Network to Let You Sketch Like Picasso

If you’ve only ever been able to draw stick figures, this is the application for you. Vincent AI turns scribbles into a work of art inspired by one of seven artistic masters. Listen in to hear from Monty Barlow, machine learning director for Cambridge Consultants — the technology development house behind the app.

A USB Port for Your Body? Startup Uses AI to Connect Medical Devices to Nervous System

Think of it as a USB port for your body. Emil Hewage is the co-founder and CEO at Cambridge Bio-Augmentation Systems, a neural engineering startup. The UK startup is building interfaces that use AI to help plug medical devices into our nervous systems.

Surfing Gravity’s Waves: HPC+AI Hang a Cosmic Ten

Eliu Huerta is harnessing AI and high performance computing (HPC) to observe the cosmos more clearly.

For several years, the astrophysics researcher has been chipping away at a grand challenge, using data to detect signals produced by collisions of black holes and neutron stars. If his next big design for a neural network is successful, astrophysicists will use it to find more black holes and study them in more detail than ever.

Such insights could help answer fundamental questions about the universe. They may even add a few new pages to the physics textbook.

Huerta studies gravitational waves, the echoes from dense stellar remnants that collided long ago and far away. Since Albert Einstein first predicted them in his general theory of relativity, academics had debated whether these ripples in the fabric of space-time really exist.

Researchers ended the debate in 2015 when they observed gravitational waves for the first time. They used pattern-matching techniques on data from the Laser Interferometer Gravitational-Wave Observatory (LIGO), home to some of the most sensitive instruments in science.

Detecting Black Holes Faster with AI

Confirming the presence of just one collision required a supercomputer to process the data the instruments gathered in a single day. In 2017, Huerta’s team showed how a deep neural network running on an NVIDIA GPU could find gravitational waves with the same accuracy in a fraction of the time.

“We were orders of magnitude faster and we could even see signals the traditional techniques missed and we did not train our neural net for,” said Huerta, who leads AI and gravity groups at the National Center for Supercomputing Applications at the University of Illinois, Urbana-Champaign.

The AI model Huerta used was based on data from tens of thousands of waveforms. He trained it on a single NVIDIA GPU in less than three hours.

Seeing in Detail How Black Holes Spin

This year, Huerta and two of his students created a more sophisticated neural network that can detect how two colliding black holes spin. Their AI model even accurately measured the faint signals of a small black hole when it was merging with a larger one.

It required data on 1.5 million waveforms. An IBM POWER9-based system with 64 NVIDIA V100 Tensor Core GPUs took 12 hours to train the resulting neural network.

To accelerate their work, Huerta’s team got access to 1,536 V100 GPUs on 256 nodes of the IBM AC922 Summit supercomputer at Oak Ridge National Laboratory.

Taking advantage of NVIDIA NVLink, a connection between Summit’s GPUs and its IBM POWER9 CPUs, they trained the AI model in just 1.2 hours.

The results, described in a paper in Physics Letters B, “show how the combination of AI and HPC can solve grand challenges in astrophysics,” he said.

Interestingly, the team’s work is based on WaveNet, a popular AI model for converting text to speech. It’s one of many examples of how AI technology that’s rapidly evolving in consumer and enterprise use cases is crossing over to serve the needs of cutting-edge science.
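
As an illustration of the general approach, the sketch below defines a small dilated 1D convolutional classifier over strain time series in PyTorch, in the spirit of WaveNet-style models. It is not the architecture published by Huerta’s group; the layer sizes, the 4,096-sample input and the signal-versus-noise framing are assumptions made for brevity.

```python
# Minimal sketch of a dilated 1-D convolutional classifier for strain time
# series, in the spirit of WaveNet-style models -- not the architecture
# published by Huerta's group. Shapes and layer sizes are illustrative.
import torch
import torch.nn as nn

class WaveClassifier(nn.Module):
    def __init__(self, channels: int = 16, num_layers: int = 6):
        super().__init__()
        layers, in_ch = [], 1
        for i in range(num_layers):
            # Exponentially growing dilation widens the receptive field so the
            # network sees long stretches of the waveform at once.
            layers += [nn.Conv1d(in_ch, channels, kernel_size=3,
                                 dilation=2 ** i, padding=2 ** i),
                       nn.ReLU()]
            in_ch = channels
        self.features = nn.Sequential(*layers)
        self.head = nn.Linear(channels, 2)     # signal vs. noise

    def forward(self, x):                      # x: (batch, 1, samples)
        h = self.features(x)
        h = h.mean(dim=-1)                     # global average pooling over time
        return self.head(h)

if __name__ == "__main__":
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = WaveClassifier().to(device)
    strain = torch.randn(8, 1, 4096, device=device)  # batch of mock strain windows
    print(model(strain).shape)                       # torch.Size([8, 2])
```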

The Next Big Leap into Black Holes

So far, Huerta has used data from supercomputer simulations to detect and describe the primary characteristics of gravitational waves. Over the next year, he aims to use actual LIGO data to capture the more nuanced secondary characteristics of gravitational waves.

“It’s time to go beyond low-hanging fruit and show the combination of HPC and AI can address production-scale problems in astrophysics that neither approach can accomplish separately,” he said.

The new details could help scientists determine more accurately where black holes collided. Such information could help them more accurately calculate the Hubble constant, a measure of how fast the universe is expanding.

The work may require tracking as many as 200 million waveforms, generating training datasets 100x larger than Huerta’s team used so far. The good news is, as part of their July paper, they’ve already determined their algorithms can scale to at least 1,024 nodes on Summit.

Tallying Up the Promise of HPC+AI

Huerta believes he’s just scratching the surface of the promise of HPC+AI. “The datasets will continue to grow, so to run production algorithms you need to go big, there’s no way around that,” he said.

Meanwhile, use of AI is expanding to adjacent areas. The team used neural nets to classify the many, many galaxies found in electromagnetic surveys of the sky, work NVIDIA CEO Jensen Huang highlighted in his GTC keynote in May.

Separately, one of Huerta’s grad students used AI to describe, more efficiently than previous techniques, the turbulence produced when neutron stars merge. “It’s another place where we can go into the traditional software stack scientists use and replace an existing model with an accelerated neural network,” Huerta said.

To accelerate the adoption of its work, the team has released its AI models for cosmology and gravitational wave astrophysics as open source code.

“When people read these papers they may think it’s too good to be true, so we let them convince themselves that we are getting the results we reported,” he said.

The Road to Space Started at Home

As is often the case with landmark achievements, there’s a parent to thank.

“My dad was an avid reader. We spent lots of time together doing math and reading books on a wide range of topics,” Huerta recalled.

“When I was 13, he brought home The Meaning of Relativity by Einstein. It was way over my head, but a really interesting read.

“A year or so later he bought A Brief History of Time by Stephen Hawking. I read it and thought it would be great to go to Cambridge and learn about gravity. Years later that actually happened,” he said.

The rest is a history that Huerta is still writing.

For more on Huerta’s work, check out an article from Oak Ridge National Laboratory.

At top: An artist’s impression of gravitational waves generated by binary neutron stars. Credit: R. Hurt, Caltech/NASA Jet Propulsion Lab

AI Scorekeeper: Scotiabank Sharpens the Pencil in Credit Risk

Paul Edwards is helping carry the age-old business of giving loans into the modern era of AI.

Edwards started his career modeling animal behavior while earning a Ph.D. in numerical ecology. He left his lab coat behind to lead a group of data scientists at Scotiabank, based in Toronto, exploring how machine learning can improve predictions of credit risk.

The team believes machine learning can both make the bank more profitable and help more people who deserve loans get them. They aim to share later this year some of their techniques in hopes of nudging the broader industry forward.

Scorecards Evolve from Pencils to AI

The new tools are being applied to scorecards that date back to the 1950s, when calculations were made with paper and pencil. Loan officers would rank applicants’ answers to standard questions, and if the result crossed a set threshold on the scorecard, the bank could grant the loan.

With the rise of computers, banks replaced physical scorecards with digital ones. Decades ago, they settled on a form of statistical modeling called a “weight of evidence logistic regression” that’s widely used today.

One of the great benefits of scorecards is they’re clear. Banks can easily explain their lending criteria to customers and regulators. That’s why in the field of credit risk, the scorecard is the gold standard for explainable models.

“We could make machine-learning models that are bigger, more complex and more accurate than a scorecard, but somewhere they would cross a line and be too big for me to explain to my boss or a regulator,” said Edwards.
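
For readers unfamiliar with the technique, the sketch below shows the weight-of-evidence scorecard recipe described above in miniature: bin an applicant attribute, replace each bin with its weight of evidence, and fit a logistic regression on the encoded features. The data is synthetic and the binning choices arbitrary; it illustrates the method, not Scotiabank’s code.

```python
# Sketch of a traditional weight-of-evidence scorecard: quantile-bin each
# attribute, encode bins by their WoE, then fit a logistic regression.
# Synthetic data and arbitrary binning -- not Scotiabank's implementation.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "income": rng.normal(60, 20, n).clip(10, 150),
    "utilization": rng.uniform(0, 1, n),
})
# Synthetic default flag loosely tied to the two attributes.
p_default = 1 / (1 + np.exp(0.03 * df["income"] - 2.5 * df["utilization"]))
df["default"] = rng.uniform(0, 1, n) < p_default

def woe_encode(series, target, bins=5):
    """Replace each value by the weight of evidence of its quantile bin:
    WoE = ln(share of non-defaulters in bin / share of defaulters in bin)."""
    binned = pd.qcut(series, q=bins, duplicates="drop")
    good = (~target).groupby(binned).sum() / (~target).sum()
    bad = target.groupby(binned).sum() / target.sum()
    woe = np.log((good + 1e-6) / (bad + 1e-6))
    return binned.astype(object).map(woe).astype(float)

X = pd.DataFrame({col: woe_encode(df[col], df["default"])
                  for col in ["income", "utilization"]})
model = LogisticRegression().fit(X, df["default"])
print("weight-of-evidence coefficients:", dict(zip(X.columns, model.coef_[0])))
```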

Machine Learning Models Save Millions

So, the team looked for fresh ways to build scorecards with machine learning and found a technique called boosting.

They started with a single question on a tiny scorecard, then added one question at a time. They stopped when adding another question would make the scorecard too complex to explain or wouldn’t improve its performance.

The results were no harder to explain than traditional weight-of-evidence models, but often were more accurate.

“We’ve used boosting to build a couple decision models and found a few percent improvement over weight of evidence. A few percent at the scale of all the bank’s applicants means millions of dollars,” he said.
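
One way to picture that one-question-at-a-time construction is as a greedy forward search: at each step, add the attribute that most improves validation performance, and stop when the gain no longer justifies the added complexity. The sketch below does this with scikit-learn’s gradient boosting on synthetic data; the stopping thresholds and model settings are illustrative assumptions, not the team’s actual procedure.

```python
# Sketch of a forward, one-question-at-a-time scorecard build: add the
# attribute that most improves validation AUC and stop when the gain is
# negligible or the scorecard gets too big. Synthetic data and thresholds;
# not the Scotiabank implementation.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=4000, n_features=12, n_informative=5,
                           random_state=0)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3, random_state=0)

MAX_QUESTIONS, MIN_GAIN = 6, 0.002
chosen, best_auc = [], 0.5

while len(chosen) < MAX_QUESTIONS:
    best_candidate, candidate_auc = None, best_auc
    for j in range(X.shape[1]):
        if j in chosen:
            continue
        cols = chosen + [j]
        model = GradientBoostingClassifier(max_depth=1, n_estimators=50,
                                           random_state=0)
        model.fit(X_tr[:, cols], y_tr)
        auc = roc_auc_score(y_va, model.predict_proba(X_va[:, cols])[:, 1])
        if auc > candidate_auc:
            best_candidate, candidate_auc = j, auc
    # Stop when another question would not pay for its added complexity.
    if best_candidate is None or candidate_auc - best_auc < MIN_GAIN:
        break
    chosen.append(best_candidate)
    best_auc = candidate_auc
    print(f"added feature {best_candidate}, validation AUC = {best_auc:.3f}")

print("final scorecard uses features:", chosen)
```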

XGBoost Upgraded to Accelerate Scorecards

Edwards’ team understood the potential to accelerate boosting models because they had been using a popular library called XGBoost on an NVIDIA DGX system. The GPU-accelerated code was very fast, but lacked a feature required to generate scorecards, a key tool they needed to keep their models simple.

Griffin Lacey, a senior data scientist at NVIDIA, worked with his colleagues to identify and add the feature. It’s now part of XGBoost in RAPIDS, a suite of open-source software libraries for running data science on GPUs.

As a result, the bank can now generate scorecards 6x faster using a single GPU compared to what used to require 24 CPUs, setting a new benchmark for the bank. “It ended up being a fairly easy fix, but we could have never done it ourselves,” said Edwards.
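
For context, moving a boosting workload onto a GPU with stock XGBoost takes only a few lines. The sketch below uses the gpu_hist tree method on synthetic data (newer XGBoost releases express the same thing with device="cuda"); the parameters are illustrative, and this is not the bank’s scorecard pipeline, which relies on the scorecard feature added to XGBoost in RAPIDS.

```python
# Minimal sketch of GPU-accelerated XGBoost training with the gpu_hist tree
# method -- illustrative data and parameters, not the bank's scorecard
# pipeline. Newer XGBoost releases use tree_method="hist" with device="cuda".
import xgboost as xgb
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=100_000, n_features=20, random_state=0)
dtrain = xgb.DMatrix(X, label=y)

params = {
    "objective": "binary:logistic",
    "eval_metric": "auc",
    "max_depth": 3,             # shallow trees keep the model explainable
    "tree_method": "gpu_hist",  # build histograms on the GPU
}
booster = xgb.train(params, dtrain, num_boost_round=200)
print(booster.eval(dtrain))     # training-set AUC
```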

GPUs speed up calculating digital scorecards and help the bank lift their accuracy while maintaining the models’ explainability. “When our models are more accurate people who are deserving of credit get the credit they need,” said Edwards.

Riding RAPIDS to the AI Age

Looking ahead, Edwards wants to leverage advances from the last few decades of machine learning to refresh the world of scorecards. For example, his team is working with NVIDIA to build a suite of Python tools for scorecards with features that will be familiar to today’s data scientists.

“The NVIDIA team is helping us pull RAPIDS tools into our workflow for developing scorecards, adding modern amenities like Python support, hyperparameter tuning and GPU acceleration,” Edwards said. “We think in six months we could have example code and recipes to share,” he added.

With such tools, banks could modernize and accelerate the workflow for building scorecards, eliminating the current practice of manually tweaking and testing their parameters. For example, with GPU-accelerated hyperparameter tuning, a developer can let a computer test 100,000 combinations of model parameters while she is having her lunch.
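
As a sketch of what that kind of hands-off search could look like, the example below samples a couple hundred hyperparameter combinations with scikit-learn’s RandomizedSearchCV and lets cross-validation pick the best candidate. The search space, model and data are illustrative assumptions, not the tooling the team is building with NVIDIA.

```python
# Sketch of a hands-off hyperparameter search: sample many candidate settings
# and let cross-validation pick the winner. The search space, model and data
# are illustrative, not the RAPIDS-based tooling described above.
from scipy.stats import randint, uniform
from sklearn.datasets import make_classification
from sklearn.model_selection import RandomizedSearchCV
from xgboost import XGBClassifier

X, y = make_classification(n_samples=20_000, n_features=20, random_state=0)

search = RandomizedSearchCV(
    XGBClassifier(tree_method="gpu_hist", eval_metric="auc"),
    param_distributions={
        "max_depth": randint(2, 6),
        "n_estimators": randint(50, 400),
        "learning_rate": uniform(0.01, 0.3),
        "min_child_weight": randint(1, 20),
    },
    n_iter=200,          # scale this number up with more GPU time
    scoring="roc_auc",
    cv=3,
    random_state=0,
)
search.fit(X, y)
print("best AUC:", search.best_score_)
print("best parameters:", search.best_params_)
```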

With a much bigger pool to choose from, banks could select scorecards for their accuracy, simplicity, stability or a balance of all these factors. This helps banks ensure their lending decisions are clear and reliable and that good customers get the loans they need.

Digging into Deep Learning

Data scientists at Scotiabank use their DGX system to handle multiple experiments simultaneously. They tune scorecards, run XGBoost and refine deep-learning models. “That’s really improved our workflow,” said Edwards.

“In a way, the best thing we got from buying that system was all the support we got afterwards,” he added, noting new and upcoming RAPIDS features.

Longer term, the team is exploring use of deep learning to more quickly identify customer needs. An experimental model for calculating credit risk already showed a 20 percent performance improvement over the best scorecard, thanks to deep learning.

In addition, an emerging class of generative models can create synthetic datasets that mimic real bank data but contain no information specific to customers. That may open a door to collaborations that speed the pace of innovation.

The work of Edwards’ team reflects the growing interest and adoption of AI in banking.

“Last year, an annual survey of credit risk departments showed every participating bank was at least exploring machine learning and many were using it day-to-day,” Edwards said.

NVIDIA and Oracle Advance AI in Cloud for Enterprises Globally

AI is reshaping markets in extraordinary ways. Soon, every company will be in AI, and will need both speed and scale to power increasingly complex machine learning models.

Accelerating innovation for enterprises around the world, Oracle today announced general availability of bare-metal Oracle Cloud Infrastructure instances featuring the NVIDIA A100 Tensor Core GPU.

NVIDIA founder and CEO Jensen Huang, speaking during the Oracle Live digital launch of the new instance, said: “Oracle is where companies store their enterprise data. We’re going to be able to take this data with no friction at all, run it on Oracle Cloud Infrastructure, conduct data analytics and create data frames that are used for machine learning to learn how to create a predictive model. That model will recommend actions to help companies go faster and make smarter decisions at an unparalleled scale.”

Watch Jensen Huang and Oracle Cloud Infrastructure Executive Vice President Clay Magouyrk discuss AI in the enterprise at Oracle Live.

Hundreds of thousands of enterprises across a broad range of industries store their data in Oracle databases. All of that raw data is ripe for AI analysis with A100 instances running on Oracle Cloud Infrastructure to help companies uncover new business opportunities, understand customer sentiment and create products.

The new Oracle Cloud Infrastructure bare-metal BM.GPU4.8 instance offers eight 40GB NVIDIA A100 GPUs linked via high-speed NVIDIA NVLink direct GPU-to-GPU interconnects. With A100, the world’s most powerful GPU, the Oracle Cloud Infrastructure instance delivers performance gains of up to 6x for customers running diverse AI workloads across training, inference and data science. To power the most demanding applications, the new instance can also scale up with NVIDIA Mellanox networking to provide more than 500 A100 GPUs in a single instance.

NVIDIA Software Accelerates AI and HPC for Oracle Enterprises

Accelerated computing starts with a powerful processor, but software, libraries and algorithms are all essential to an AI ecosystem. Whether it’s computer graphics, simulations like fluid dynamics, genomics processing, or deep learning and data analytics, every field requires its own domain-specific software stack. Oracle is providing NVIDIA’s extensive domain-specific software through the NVIDIA NGC hub of cloud-native, GPU-optimized containers, models and industry-specific software development kits.

“The costs of machine learning are not just on the hardware side,” said Clay Magouyrk, executive vice president of Oracle Cloud Infrastructure. “It’s also about how quickly someone can get spun up with the right tools, how quickly they can get access to the right software. Everything is pre-tuned on these instances so that anybody can show up, rent these GPUs by the hour and get quickly started running machine learning on Oracle Cloud.”

Oracle will also be adding A100 to the Oracle Cloud Infrastructure Data Science platform and providing NVIDIA Deep Neural Network libraries through Oracle Cloud Marketplace to help data scientists run common machine learning and deep learning frameworks, Jupyter Notebooks and Python/R integrated development environments in minutes.

On-Demand Access to the World’s Leading AI Performance

The new Oracle instances make it possible for every enterprise to have access to the world’s most powerful computing in the cloud. A100 delivers up to 20x more peak AI performance than its predecessors with TF32 operations and sparsity technology running on third-generation Tensor Cores. The world’s largest 7nm processor, A100 is incredibly elastic and cost-effective.

The flexible performance of A100 and Mellanox RDMA over Converged Ethernet networking makes the new Oracle Cloud Infrastructure instance ideal for critical drug discovery research, improving customer service through conversational AI, and enabling designers to model and build safer products, to highlight a few examples.

AI Acceleration for Workloads of All Sizes, Companies in All Stages

New businesses can access the power of A100 performance through the NVIDIA Inception and Oracle for Startups accelerator programs, which provide free Oracle Cloud credits for NVIDIA A100 and V100 GPU instances, special pricing, invaluable networking and expertise, marketing opportunities and more.

Oracle will soon introduce virtual machine instances providing one, two or four A100 GPUs per VM, and provide heterogeneous cluster networks of up to 512 A100 GPUs featuring bare-metal A100 GPU instances blended with Intel CPUs. Enterprises interested in accelerating their workloads with Oracle’s new A100 instance can get started with Oracle Cloud Infrastructure on Sept. 30.

To learn more about accelerating AI on Oracle Cloud Infrastructure, join Oracle at GTC, Oct. 5-9.

AI in the Hand of the Artist

Humans are wielding AI to create art, and a virtual exhibit that’s part of NVIDIA’s GPU Technology Conference showcases the stunning results.

The AI Art Gallery at NVIDIA GTC features pieces by a broad collection of artists, developers and researchers from around the world who are using AI to push the limits of artistic expression.

When AI is introduced into the artistic process, the artist feeds the machine data and code, explains Heather Schoell, senior art director at NVIDIA, who curated the online exhibit.

Once the output reveals itself, it’s up to the artist to determine whether it lives up to their artistic style and desired message, or whether the input needs to be adjusted, according to Schoell.

“The output reflects both the artist’s hand and the medium, in this case data, used for creation,” Schoell says.

The exhibit complements what has become the world’s premier AI conference.

GTC, running Oct. 5-9, will bring together researchers from industry and academia, startups and Fortune 500 companies.

So it’s only natural that artists would be among those putting modern AI to work.

“Through this collection we aim to share how the artist can partner with AI as both an artistic medium and creative collaborator,” Schoell explains.

The artists featured in the AI Art Gallery include:

  • Daniel Ambrosi – Dreamscapes fuses computational photography and AI to create a deeply textural environment.
  • Refik Anadol – Machine Hallucinations, by the Turkish-born, Los Angeles-based conceptual artist known for his immersive architectural digital installations, such as a project at New York’s Chelsea Market that used projectors to splash AI-generated images of New York cityscapes to create what Anadol called a “machine hallucination.”
  • Sofia Crespo and Dark Fractures – Work from the Argentina-born artist and Berlin-based studio led by Feileacan McCormick uses GANs and NLP models to generate 3D insects in a virtual, digital space.
  • Scott Eaton – An artist, educator and creative technologist residing in London, who combines a deep understanding of human anatomy, traditional art techniques and modern digital tools in his uncanny, figurative artworks.
  • Oxia Palus – The NVIDIA Inception startup will uncover a new masterpiece by Leonardo da Vinci that resurrects a hidden sketch and reconstructs the painting style from one of the most famous artists of all time.
  • Anna Ridler – Three displays showing images of tulips that change based on Bitcoin’s price, created by the U.K. artist and researcher known for her work exploring the intersection of machine learning, nature and history.
  • Helena Sarin – Using her own drawings, sketches and photographs as datasets, Sarin trains her models to generate new visuals that serve as the basis of her compositions — in this case with a type of neural network known as a generative adversarial network, or GAN. The Moscow-born artist has embedded 12 of these creations in a book of puns on the acronym GAN.
  • Pindar Van Arman – Driven by a collection of algorithms programmed to work with — and against — one another, the U.S.-based artist and roboticist’s creation uses a paintbrush, paint and canvas to create portraits that fuse the look and feel of a photo and a handmade sketch.

For a closer look, registered GTC attendees can go on a live, personal tour of two of our featured artists’ studios.

On Thursday, Oct. 8, you can virtually tour Van Arman’s Fort Worth, Texas, studio from 11 a.m. to 12 p.m. Pacific time. And at 2 p.m. Pacific, you can tour Refik Anadol’s Los Angeles studio.

In addition, a pair of panel discussions, Thursday, Oct. 8, with AI Gallery artists will explore what led them to connect AI and fine art.

And starting Oct. 5, you can tune in to an on-demand GTC session featuring Oxia Palus co-founder George Cann, a Ph.D. candidate in space and climate physics at University College London.

Join us at the AI Art Gallery.

Register for GTC

Li Auto Aims to Extend Lead in Chinese EV Market with NVIDIA DRIVE

One of the leading EV startups in China is charging up its compute capabilities.

Li Auto announced today it would develop its next generation of electric vehicles using the high-performance, energy-efficient NVIDIA DRIVE AGX Orin. These new vehicles will be developed in collaboration with tier 1 supplier Desay SV and feature advanced autonomous driving features, as well as extended battery range for truly intelligent mobility.

The startup has become a standout brand in China over the past year. Its electric model lineup has led domestic sales of medium and large SUVs for eight consecutive months. With this latest announcement, the automaker can extend that leadership into autonomous driving.

NVIDIA Orin, the SoC at the heart of the future fleet, achieves 200 TOPS — nearly 7x the performance and 3x the energy efficiency of our previous-generation SoC — and is designed to handle the large number of applications and deep neural networks that run simultaneously for automated and autonomous driving. Orin is also designed to meet systematic safety standards such as ISO 26262 ASIL-D.

This centralized, high-performance system will enable software-defined, intelligent features in Li Auto’s upcoming electric vehicles, making them a smart choice for eco-friendly, safe and convenient driving.

“By cooperating with NVIDIA, Li Auto can benefit from stronger performance and the energy-efficient compute power needed to deliver both advanced driving and fully autonomous driving solutions to market,” said Kai Wang, CTO of Li Auto.

A Software-Defined Architecture

Today, a vehicle’s software functions are powered by dozens of electronic control units, known as ECUs, that are distributed throughout the car. Each is specialized — one unit controls windows and one the door locks, for example, and others control power steering and braking.

This fixed-function architecture is not compatible with intelligent and autonomous features. These AI-powered capabilities are software-defined, meaning they are constantly improving, and require a hardware architecture that supports frequent upgrades.

Vehicles equipped with NVIDIA Orin have the powerful, centralized compute necessary for this software-defined architecture. The SoC was born out of the data center, built with approximately 17 billion transistors to handle the large number of applications and deep neural networks for autonomous systems and AI-powered cockpits.

The NVIDIA Orin SoC

This high-performance platform will enable Li Auto to become one of the first automakers in China to deploy an independent, advanced autonomous driving system with its next-generation fleet.

The Road Ahead

This announcement is just the first step of a long-term collaboration between NVIDIA and Li Auto.

“The next-generation NVIDIA Orin SoC offers a significant leap in compute performance and energy efficiency,” said Rishi Dhall, vice president of autonomous vehicles at NVIDIA. “NVIDIA works closely with companies like Li Auto to help bring new AI-based autonomous driving capabilities to cutting-edge EVs in China and around the globe.”

By combining NVIDIA’s leadership in AI software and computing with Li Auto’s momentum in the electric vehicle space, the two companies will develop vehicles that are better for the environment and safer for everyone.
