Get Trained, Go Deep: How Organizations Can Transform Their Workforce into an AI Powerhouse

Despite the pandemic putting in-person training on hold, organizations can still offer instructor-led courses to their staff to develop key skills in AI, data science and accelerated computing.

NVIDIA’s Deep Learning Institute offers many online courses that deliver hands-on training. One of its most popular — recently updated and retitled as The Fundamentals of Deep Learning — will be taken by hundreds of attendees at next week’s GPU Technology Conference, running Oct. 5-9.

Organizations interested in boosting the deep learning skills of their personnel can arrange to get their teams trained by requesting a workshop from the DLI Course Catalog.

“Technology professionals who take our revamped deep learning course will emerge with the basics they need to start applying deep learning to their most challenging AI and machine learning applications,” said Craig Clawson, director of Training Services at NVIDIA. “This course is a key building block for developing a cutting-edge AI skillset.”

Huge Demand for Deep Learning

Deep learning is at the heart of the fast-growing fields of machine learning and AI. This makes it a skill that’s in huge demand and has put companies across industries in a race to recruit talent. LinkedIn recently reported that the fastest-growing job category in the U.S. is AI specialist, with annual job growth of 74 percent and an average annual salary of $136,000.

For many organizations, especially those in the software, internet, IT, higher education and consumer electronics sectors, investing in upskilling current employees can be critical to their success while offering a path to career advancement and increasing worker retention.

Deep Learning Application Development

With interest in the field heating up, a recent article in Forbes highlighted that AI and machine learning, data science and IoT are among the most in-demand skills tech professionals should focus on. In other words, tech workers who lack these skills could soon find themselves at a professional disadvantage.

By developing needed skills, employees can make themselves more valuable to their organizations. And their employers benefit by embedding machine learning and AI functionality into their products, services and business processes.

“Organizations are looking closely at how AI and machine learning can improve their business,” Clawson said. “As they identify opportunities to leverage these technologies, they’re hustling to either develop or import the required skills.”

Get a glimpse of the DLI experience in this short video:

DLI Courses: An Invaluable Resource

The DLI has trained more than 250,000 developers globally. It has continued to deliver a wide range of training remotely via virtual classrooms during the COVID-19 pandemic.

Classes are taught by DLI-certified instructors who are experts in their fields, and breakout rooms support collaboration among students and interaction with the instructors.

And by completing select courses, students can earn an NVIDIA Deep Learning Institute certificate to demonstrate subject matter competency and support career growth.

It would be hard to exaggerate the potential that this new technology and the NVIDIA developer community hold for improving the world — and the community is growing faster than ever. It took 13 years for the number of registered NVIDIA developers to reach 1 million. Just two years later, it has grown to over 2 million.

Whether enabling new medical procedures, inventing new robots or joining the effort to combat COVID-19, the NVIDIA developer community is breaking new ground every day.

Courses like the re-imagined Fundamentals of Deep Learning are helping developers and data scientists deliver breakthrough innovations across a wide range of industries and application domains.

“Our courses are structured to give developers the skills they need to thrive as AI and machine learning leaders,” said Clawson. “What they take away from the courses, both for themselves and their organizations, is immeasurable.”

To get started on the journey of transforming your organization into an AI powerhouse, request a DLI workshop today.

What is deep learning? Read more about this core technology.

How AI Startup BroadBridge Networks Helps Security Teams Make Sense of Data Chaos

Cybersecurity has grown into a morass.

With increasingly hybrid computing environments, dispersed users accessing networks around the clock, and the Internet of Things creating more data than security teams have ever seen, organizations are throwing more security tools than ever at the problem.

In fact, Jonathan Flack, principal systems architect at BroadBridge Networks, said it’s not unusual for a company with a big network and a large volume of intellectual property to have 50 to 75 vendor solutions deployed within their networks.

“That’s insanity to me,” said Flack. “How can you converge all the information in a single space in order to effectively act upon it in context?”

That’s precisely the problem BroadBridge, based in Fremont, Calif., is looking to solve. The three-year-old company is a member of NVIDIA Inception, a program that provides AI startups with go-to-market support, expertise and technology.

The company applies AI, powered by NVIDIA GPUs, to security data so that disparate data sources can be aligned temporally, essentially connecting all the dots for any moment in time.

A company might have active directory logs, Windows event logs and firewall logs, with events occurring within microseconds of each other. Overworked security staff don’t have time to fish through all those logs trying to align events.

Instead, BroadBridge does it for them, automatically collecting the data, correlating it and presenting it as a single slice of time, with precision down to the millisecond.
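To make the idea concrete, here is a minimal Python sketch of that kind of temporal alignment, assuming each log source has already been parsed into a timestamped table. The column names and the matching window are hypothetical, not BroadBridge’s implementation:

# Illustrative sketch only -- not BroadBridge's code.
# Assumes each log source is already parsed into a timestamped DataFrame;
# the column names and the 5 ms matching window are hypothetical.
import pandas as pd

ad_logs = pd.DataFrame({
    "ts": pd.to_datetime(["2020-09-29 10:00:00.120", "2020-09-29 10:00:00.350"]),
    "event": ["login_success", "group_change"],
})
fw_logs = pd.DataFrame({
    "ts": pd.to_datetime(["2020-09-29 10:00:00.118", "2020-09-29 10:00:00.905"]),
    "event": ["allow_outbound", "deny_inbound"],
})

# Align firewall events to the nearest Active Directory event within a
# small window, producing one time-ordered slice of what happened when.
aligned = pd.merge_asof(
    fw_logs.sort_values("ts"),
    ad_logs.sort_values("ts"),
    on="ts",
    direction="nearest",
    tolerance=pd.Timedelta("5ms"),
    suffixes=("_firewall", "_ad"),
)
print(aligned)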

The company’s software effectively pinpoints the causes of events and suggests potential actions to be taken. And given that most security teams are understaffed amid a global shortage of qualified cybersecurity employees, they can use all the help they can get.

“Our objective is to lighten the workload so those people can go home after an eight-hour shift, spend time with their families and have some down time,” said Flack. “If you find an intrusion from six months ago, you shouldn’t have to go mine through logs from all the affected systems to reassemble a picture of what happened. With all that data properly aggregated, aligned, and archived, you simply run a BlazingSQL query against all of your network data for that specific timeframe.”
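As a rough sketch of the kind of query Flack describes, the snippet below uses BlazingSQL to pull a specific time window out of aggregated network data. The table name, columns and file path are hypothetical:

# Sketch of a time-window query over aggregated network data.
# The Parquet path, table name and columns are hypothetical; BlazingSQL
# executes the SQL on the GPU and returns a cuDF DataFrame.
from blazingsql import BlazingContext

bc = BlazingContext()
bc.create_table("network_events", "/data/aggregated_events.parquet")

incident_window = bc.sql("""
    SELECT ts, src_ip, dst_ip, event_type, severity
    FROM network_events
    WHERE ts BETWEEN TIMESTAMP '2020-03-01 00:00:00'
                 AND TIMESTAMP '2020-03-02 00:00:00'
    ORDER BY ts
""")
print(incident_window.head())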

Organic Approach to Data

While BroadBridge’s original models were trained on open-source data from the security community, its AI approach differs from that of other companies: a highly mature model out of the gate isn’t necessary. Instead, BroadBridge’s system is designed to be trained by each customer’s network.

“GM is going to have a different threat environment than some DoD office inside the Pentagon,” said Flack. “We provide a good initial starting point, and then we retrain the model using the customer’s own network data over time. The system is 100 percent self-reinforcing.”

The initial AI model provides security analysts with the ability to work through events that need to be investigated. They can triage and tag events as nominal or deserving of more investigation.

That metadata then gets stored, providing a record of what the inference server identified, what the analyst looked at, and what other events are worthy of analysis. All of that is then funneled into a deep learning pipeline that improves the model.

BroadBridge uses Kubernetes and Docker to provide dynamic scaling. Flack said the software can run real-time analytics on a 100GB network. The customer’s deep learning process is uploaded to an NVIDIA GPU instance on AWS, Azure, Google or Oracle clouds, where the AI is trained on the specifics of the customer’s network.

The company’s internal development has unfolded on NVIDIA DGX systems, which are purpose-built for the unique demands of AI. The first wave of development was conducted on DGX-1, and more recently on DGX A100, which Flack said has improved performance significantly.

“Four or five years ago, none of what we’re doing was at all possible,” he said. “Now we have a way to run multiple concurrent GPU-based workloads on systems that are as affordable as some 1U appliances.”

More to Come

Down the line, Flack said he envisions exposing an API to third-party vendors so they can use BroadBridge’s data to dynamically reconfigure device security postures. He also foresees the arrival of 5G as boosting the need for a tool that can parse through the increased data flows.

More immediately, Flack said the company has been looking to address the limitations of virtual private networks in the wake of the huge increase in working from home due to the COVID-19 pandemic.

Flack was careful to note that BroadBridge has no interest in replacing any of the sensors, logs or assessment tools companies are deploying in their security operations centers, or SOCs. Rather, it’s simply trying to create a platform to help security analysts make sense of all the data coming from all of these sources.

“Most of what you’re paying your SOC analysts for is herding cats,” he said. “Our objective is to stop them from herding cats so they can perform actual analysis.”

See BroadBridge Networks present in the NVIDIA Inception Premier Showcase at the GPU Technology Conference on Tuesday, October 6. Register for GTC here.

AI for Every Enterprise: NVIDIA, VMware CEOs Discuss Broad New Partnership

Promising to bring AI to every enterprise, VMware CEO Pat Gelsinger and NVIDIA CEO Jensen Huang kicked off VMworld 2020 Tuesday with a conversation detailing the companies’ broad new partnership.

VMware and NVIDIA announced that, together, they will deliver an end-to-end enterprise platform for AI as well as a new architecture for data center, cloud and edge that uses NVIDIA DPUs to support existing and next-generation applications.

“We’re going to bring the power of AI to every enterprise. We’re going to bring the NVIDIA AI computing platform and our AI application frameworks onto VMware,” Huang said.

View today’s VMworld 2020 CEO discussion featuring Pat Gelsinger and Jensen Huang, and join us at GTC 2020 on October 5 to learn more.

Through this collaboration, the rich set of AI software available on the NVIDIA NGC hub will be integrated into VMware vSphere, VMware Cloud Foundation and VMware Tanzu.

“For every virtual infrastructure admin, we have millions of people that know how to run the vSphere stack,” Gelsinger said. “They’re running it every day, all day long. Now the same tools, the same processes, the same networks, the same security are fully being made available on the GPU infrastructure.”

VMware CEO Pat Gelsinger

This will help accelerate AI adoption, enabling enterprises to extend existing infrastructure for AI, manage all applications with a single set of operations, and deploy AI-ready infrastructure where the data resides, across the data center, cloud and edge.

Additionally, as part of VMware’s Project Monterey, also announced Tuesday, the companies will partner to deliver an architecture for the hybrid cloud based on SmartNIC technology, including NVIDIA’s programmable BlueField-2 DPU.

“The characteristics, the pillars of Project Monterey of offloading the operating system, the data center operating system, onto the SmartNIC, isolating the applications from the control plane and the data plane, and accelerating the data processing and the security processing to line speed is going to make the data center so much more powerful, so much more performant,” Huang said.

Among the organizations integrating their VMware and NVIDIA ecosystems is the UCSF Center for Intelligent Imaging.

“I can’t imagine a more impactful use of AI than healthcare,” Huang said. “The intersection of people, disease and treatments is one of the greatest challenges of humanity, and one where AI will be needed to move the needle.”

A leader in the development of AI and analysis tools in medical imaging, the center uses the NVIDIA Clara healthcare application framework for AI-powered imaging and VMware Cloud Foundation to support a broad range of mission-critical workloads.

“This way of doing computing is going to be the way that the future data centers are built. It’s going to allow us to essentially turn every enterprise into an AI,” Huang said. “Every company will become AI-driven.”

“Our audience is so excited to see how we’re coming together, to see how everything they’ve done for the past two decades with VMware is now going to be even further expanded,” Gelsinger said.

The Power of Two: VMware, NVIDIA Bring AI to the Virtual Data Center

Two key components of enterprise AI just snapped in place thanks to longtime partners who pioneered virtual desktops, virtual graphics workstations and more.

Taking their partnership to a new level, VMware and NVIDIA are uniting accelerated computing and virtualization to bring the power of AI to every company.

It’s a collaboration that will enable users to run data analytics and machine learning workloads in containers or virtual machines, secured and managed with familiar VMware tools. It will create a new sweet spot in hybrid cloud computing with greater control, lowered costs and expanded performance.

The partnership puts the power of AI that public clouds deliver from the world’s largest AI data centers behind the firewalls of private companies.

The two companies will demonstrate these capabilities this week at VMworld.

Welcome to the Modern, Accelerated Data Center

Thanks to this collaboration, users will be able to run AI and data science software from NGC Catalog, NVIDIA’s hub for GPU-optimized AI software, using containers or virtual machines in a hybrid cloud based on VMware Cloud Foundation. It’s the kind of accelerated computing that’s a hallmark of the modern data center.

NVIDIA and VMware also launched a related effort enabling users to build a more secure and powerful hybrid cloud accelerated by NVIDIA BlueField-2 DPUs. These data processing units are built to offload and accelerate software-defined storage, security and networking tasks, freeing up CPU resources for enterprise applications.

Enterprises Gear Up for AI

Machine learning lets computers write software humans never could. It’s a capability born in research labs that’s rapidly spreading to data centers across every industry from automotive and banking to healthcare, retail and more.

The partnership will let VMware users train and run neural networks across multiple GPUs in public and private clouds. It also will enable them to share a single GPU across multiple jobs or users thanks to the multi-instance capabilities in the latest NVIDIA A100 GPUs.

To achieve these goals, the two companies will bring GPU acceleration to VMware vSphere to run AI and data-science jobs at near bare-metal performance next to existing enterprise apps on standard enterprise servers. In addition, software and models in NGC will support VMware Tanzu.

With these links, AI workloads can be virtualized and virtual environments become AI-ready without sacrificing system performance. And users can create hybrid clouds that give them the choice to run jobs in private or public data centers.

Companies will no longer need standalone AI systems for machine learning or big data analytics that are separate from their IT resources. Now a single enterprise infrastructure can run AI and traditional workloads managed by VMware tools and administrators.

“We’re providing the best of both worlds by bringing mature management capabilities to bare-metal systems and great performance to virtualized AI workloads,” said Kit Colbert, vice president and CTO of VMware’s cloud platform group.

Demos Show the Power of Two

Demos at VMworld will show a platform that delivers AI results as fast as the public cloud and is robust enough to tackle critical jobs like fighting COVID-19. They will run containers from NVIDIA NGC, managed by Tanzu, on VMware Cloud Foundation.

We’ll show those same VMware environments also tapping into the power of BlueField-2 DPUs to secure and accelerate hybrid clouds that let remote designers collaborate in an immersive, real-time environment.

That’s just the beginning. NVIDIA is committed to giving VMware the support to be a first-class platform for everything we build. In the background, VMware and NVIDIA engineers are driving a multi-year effort to deliver game-changing capabilities.

Colbert of VMware agreed. “We view the two initiatives we’re announcing today as initial steps, and there is so much more we can do. We invite customers to tell us what they need most to help prioritize our work,” he said.

To learn more, register for the early-access program and tune in to VMware sessions at GTC 2020 next week.

Networks on Steroids: VMware, NVIDIA Power the Data Center with DPUs

The data center’s grid is about to plug in to a new source of power.

It rides a kind of network interface card called a SmartNIC. Its smarts and speed spring from an ASIC called a data processing unit.

In short, the DPU packs the power of data center infrastructure on a chip.

DPU-enabled SmartNICs will be available for millions of virtualized servers thanks to a collaboration between VMware and NVIDIA. They bring advances in security and storage as well as networking that will stretch from the core to the edge of the corporate network.

What’s more, the companies announced a related initiative that will put the power of the public AI cloud behind the corporate firewall. It enables enterprise AI managed with familiar VMware tools.

Lighting Up the Modern Data Center

Together, these efforts will give users the choice to run machine learning workloads in containers or virtual machines, secured and managed with familiar VMware tools. And they will create a new sweet spot in hybrid cloud computing with greater control, lowered costs and the highest performance.

Laying the foundation for these capabilities, the partnership will help users build more secure and powerful distributed networks inside VMware Cloud Foundation, powered by the NVIDIA BlueField-2 DPU. It’s the Swiss Army knife of data center infrastructure that can accelerate security, storage, networking, and management tasks, freeing up CPUs to focus on enterprise applications.

The DPU’s jobs include:

  • Blocking malware
  • Advanced encryption
  • Network virtualization
  • Load balancing
  • Intrusion detection and prevention
  • Data compression
  • Packet switching
  • Packet inspection
  • Managing pools of solid-state and hard-disk storage

Our DPUs can run these tasks today across two ports, each carrying traffic at 100 Gbit/second. That’s an order of magnitude faster than CPUs geared for enterprise apps. The DPU is taking on these jobs so CPU cores can run more apps, boosting vSphere and data center efficiency.

As a result, data centers can handle more apps and their networks will run faster, too.

“The BlueField-2 SmartNIC is a fundamental building block for us because we can take advantage of its DPU hardware for better network performance and dramatically reduced cost to operate data center infrastructure,” said Kit Colbert, vice president and CTO of VMware’s cloud platform group.

Running VMware Cloud Foundation on the NVIDIA BlueField-2 DPU provides security isolation and lets CPUs support more apps per server.

Securing the Data Center with DPUs

DPUs also will usher in a new era of advanced security.

Today, most companies run their security policies on the same CPUs that run their applications. That kind of multitasking leaves IT departments vulnerable to malware or attacks in the guise of a new app.

With the BlueField DPU, all apps and requests can be vetted on a processor isolated from the application domain, enforcing security and other policies. Many cloud computing services already use this approach to create so-called zero-trust environments where software authenticates everything.

VMware is embracing SmartNICs in its products as part of an initiative called Project Monterey. With SmartNICs, corporate data centers can take advantage of the same advances Web giants enjoy.

“These days the traditional security perimeter is gone. So, we believe you need to root security in the hardware of the SmartNIC to monitor servers and network traffic very fast and without performance impacts,” said Colbert.

A demo shows an NVIDIA BlueField-2 DPU preventing a DDOS attack that swamps a CPU.

See DPUs in Action at VMworld

The companies are demonstrating these capabilities this week at VMworld. For example, the demo below shows how virtual servers running VMware ESXi clients can use BlueField-2 DPUs to stop a distributed denial-of-service attack in a server cluster.

Leading OEMs are already preparing to bring the capabilities of DPUs to market. NVIDIA also plans to support BlueField-2 SmartNICs across its portfolio of platforms including its EGX systems for enterprise and edge computing.

You wouldn’t hammer a nail with a monkey wrench or pound in a screw with a hammer — you need to use the right tool for the job. To build the modern data center network, that means using an NVIDIA DPU enabled by VMware.

Drug Discovery in the Age of COVID-19

Drug discovery is like searching for the right jigsaw tile — in a puzzle box with 10⁶⁰ molecule-sized pieces. AI and HPC tools help researchers more quickly narrow down the options, like picking out a subset of correctly shaped and colored puzzle pieces to experiment with.

An effective small-molecule drug will bind to a target enzyme, receptor or other critical protein along the disease pathway. Like the perfect puzzle piece, a successful drug will be the ideal fit, possessing the right shape, flexibility and interaction energy to attach to its target.

But it’s not enough just to interact strongly with the target. An effective therapeutic must modify the function of the protein in just the right way, and also possess favorable absorption, distribution, metabolism, excretion and toxicity properties — creating a complex optimization problem for scientists.

Researchers worldwide are racing to find effective vaccine and drug candidates to inhibit infection with and replication of SARS-CoV-2, the virus that causes COVID-19. Using NVIDIA GPUs, they’re accelerating this lengthy discovery process — whether for structure-based drug design, molecular docking, generative AI models, virtual screening or high-throughput screening.

Identifying Protein Targets with Genomics

To develop an effective drug, researchers have to know where to start. A disease pathway — a chain of signals between molecules that trigger different cell functions — may involve thousands of interacting proteins. Genomic analyses can provide invaluable insights for researchers, helping them identify promising proteins to target with a specific drug.

With the NVIDIA Clara Parabricks genome analysis toolkit, researchers can sequence and analyze genomes up to 50x faster. Given the unprecedented spread of the COVID pandemic, getting results in hours versus days can have an extraordinary impact on understanding the virus and developing treatments.

To date, hundreds of institutions, including hospitals, universities and supercomputing centers, in 88 countries have downloaded the software to accelerate their work — to sequence the viral genome itself, as well as to sequence the DNA of COVID patients and investigate why some are more severely affected by the virus than others.

Another method, cryo-EM, uses electron microscopes to directly observe flash-frozen proteins — and can harness GPUs to shorten processing time for the complex, massive datasets involved.

Using CryoSPARC, a GPU-accelerated software built by Toronto startup Structura Biotechnology, researchers at the National Institutes of Health and the University of Texas at Austin created the first 3D, atomic-scale map of the coronavirus, providing a detailed view into the virus’ spike proteins, a key target for vaccines, therapeutic antibodies and diagnostics.

GPU-Accelerated Compound Screening

Once a target protein has been identified, researchers search for candidate compounds that have the right properties to bind with it. To evaluate how effective drug candidates will be, researchers can screen drug candidates virtually, as well as in real-world labs.

New York-based Schrödinger creates drug discovery software that can model the properties of potential drug molecules. Used by the world’s biggest biopharma companies, the Schrödinger platform allows its users to determine the binding affinity of a candidate molecule on NVIDIA Tensor Core GPUs in under an hour and with just a few dollars of compute cost — instead of many days and thousands of dollars using traditional methods.

Generative AI Models for Drug Discovery

Rather than evaluating a dataset of known drug candidates, a generative AI model starts from scratch. Tokyo-based startup Elix, Inc., a member of the NVIDIA Inception virtual accelerator program, uses generative models trained on NVIDIA DGX Station systems to come up with promising molecular structures. Some of the AI’s proposed molecules may be unstable or difficult to synthesize, so additional neural networks are used to determine the feasibility for these candidates to be tested in the lab.

With DGX Station, Elix achieves up to a 6x speedup on training the generative models, which would otherwise take a week or more to converge, or to reach the lowest possible error rate.

Molecular Docking for COVID-19 Research

With the inconceivable size of the chemical space, researchers couldn’t possibly test every possible molecule to figure out which will be effective to combat a specific disease. But based on what’s known about the target protein, GPU-accelerated molecular dynamics applications can be used to approximate molecular behavior and simulate target proteins at the atomic level.

Software like AutoDock-GPU, developed by the Center for Computational Structural Biology at the Scripps Research Institute, enables researchers to calculate the interaction energy between a candidate molecule and the protein target. Known as molecular docking, this computationally complex process simulates millions of different configurations to find the most favorable arrangement of each molecule for binding. Using the more than 27,000 NVIDIA GPUs on Oak Ridge National Laboratory’s Summit supercomputer, scientists were able to screen 1 billion drug candidates for COVID-19 in just 12 hours. Even using a single NVIDIA GPU provides more than 230x speedup over using a single CPU.
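The core idea can be illustrated with a toy example: score many candidate poses of a ligand against a target and keep the lowest-energy one. The Python sketch below uses a stand-in Lennard-Jones-style score and random rigid translations; it is a conceptual illustration, not AutoDock-GPU’s force field or search algorithm:

# Toy illustration of molecular docking: try many candidate poses and keep
# the one with the lowest interaction energy. The scoring function and the
# random search below are stand-ins, not AutoDock-GPU's implementation.
import numpy as np

rng = np.random.default_rng(0)
protein_atoms = rng.uniform(-10, 10, size=(200, 3))   # hypothetical binding-site atoms
ligand_atoms = rng.uniform(-1, 1, size=(20, 3))       # hypothetical ligand conformation

def interaction_energy(ligand_xyz):
    # Pairwise ligand-protein distances, clipped to avoid singularities
    d = np.linalg.norm(ligand_xyz[:, None, :] - protein_atoms[None, :, :], axis=-1)
    d = np.clip(d, 0.8, None)
    return np.sum(4.0 * ((1.0 / d) ** 12 - (1.0 / d) ** 6))

best_energy, best_pose = np.inf, None
for _ in range(10_000):                               # real docking runs explore millions
    pose = ligand_atoms + rng.uniform(-8, 8, size=3)  # random rigid translation
    energy = interaction_energy(pose)
    if energy < best_energy:
        best_energy, best_pose = energy, pose

print(f"lowest interaction energy found: {best_energy:.2f}")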

Argonne deployed one of the first NVIDIA DGX A100 systems. Courtesy of Argonne National Laboratory.

In Illinois, Argonne National Laboratory is accelerating COVID-19 research using an NVIDIA A100 GPU-powered system based on the DGX SuperPOD reference architecture. Argonne researchers are combining AI and advanced molecular modelling methods to perform accelerated simulations of the viral proteins, and to screen billions of potential drug candidates, determining the most promising molecules to pursue for clinical trials.

Accelerating Biological Image Analysis

The drug discovery process involves significant high-throughput lab experiments as well. Phenotypic screening is one method of testing, in which a diseased cell is exposed to a candidate drug. With microscopes, researchers can observe and record subtle changes in the cell to determine if it starts to more closely resemble a healthy cell. Using AI to automate the process, thousands of possible drugs can be screened.
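One common way to automate that comparison is to embed cell images with a pretrained network and measure whether a treated cell sits closer to healthy or diseased references. The sketch below shows that approach with an off-the-shelf ResNet; the image files are hypothetical, and this is an illustrative approach rather than any particular company’s pipeline:

# Sketch of automated phenotypic screening: embed cell images with a
# pretrained CNN and compare a drug-treated cell against healthy and
# diseased references. Illustrative only; image paths are hypothetical.
import torch
from PIL import Image
from torchvision import models, transforms

model = models.resnet18(pretrained=True)
model.fc = torch.nn.Identity()      # use the network as a feature extractor
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256), transforms.CenterCrop(224), transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def embed(path):
    with torch.no_grad():
        return model(preprocess(Image.open(path).convert("RGB")).unsqueeze(0))[0]

healthy = embed("healthy_cell.png")
diseased = embed("diseased_cell.png")
treated = embed("treated_cell.png")

# A smaller distance to the healthy reference suggests the candidate drug
# is pushing the cell's phenotype back toward normal.
print("distance to healthy: ", torch.dist(treated, healthy).item())
print("distance to diseased:", torch.dist(treated, diseased).item())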

Digital biology company Recursion, based in Salt Lake City, uses AI and NVIDIA GPUs to observe these subtle changes in cell images, analyzing terabytes of data each week. The company has released an open-source COVID dataset, sharing human cellular morphological data with researchers working to create therapies for the virus.

Future Directions in AI for Drug Discovery

As AI and accelerated computing continue to accelerate genomics and drug discovery pipelines, precision medicine — personalizing individual patients’ treatment plans based on insights about their genome and their phenotype — will become more attainable.

Increasingly powerful NLP models will be applied to organize and understand massive datasets of scientific literature, helping connect the dots between independent investigations. Generative models will learn the fundamental equations of quantum mechanics and be able to suggest the optimal molecular therapy for a given target.

To learn more about how NVIDIA GPUs are being used to accelerate drug discovery, check out talks by Schrödinger, Oak Ridge National Laboratory and Atomwise at the GPU Technology Conference next week.

For more on how AI and GPUs are advancing COVID research, read our blog stories and visit the COVID-19 research hub.

Subscribe to NVIDIA healthcare news here.

AI in Schools: Sony Reimagines Remote Learning with Artificial Intelligence

Back to school was destined to look different this year.

With the world adapting to COVID-19, safety measures are preventing a return to in-person teaching in many places. Also, students learning through conventional video conferencing systems often find the content difficult to read, or find that teachers block the words written on presentation boards.

Faced with these challenges, educators at Prefectural University of Hiroshima in Japan envisioned a high-quality remote learning system with additional features not possible with traditional video conferencing.

They chose a distance-learning solution from Sony that links lecturers and students across their three campuses. It uses AI to make it easy for presenters anywhere to engage their audiences and impart information using captivating video. Thanks to these innovations, lecturers at Prefectural University can now teach students simultaneously on three campuses linked by a secure virtual private network.

Sony’s remote learning solution in action, with Edge Analytics Appliance, remote cameras and projectors.

AI Helps Lecturers Get Smarter About Remote Learning

At the heart of Prefectural’s distance learning system is Sony’s REA-C1000 Edge Analytics Appliance, which was developed using the NVIDIA Jetson Edge AI platform. The appliance lets teachers and speakers quickly create dynamic video presentations without using expensive video production gear or learning sophisticated software applications.

Sony’s exclusive AI algorithms run inside the appliance. These deep learning models employ techniques such as automatic tracking, zooming and cropping to allow non-specialists to produce engaging, professional-quality video in real time.

Users simply connect the Edge Analytics Appliance to a camera that can pan, tilt and zoom; a PC; and a display or recording device. In Prefectural’s case, multiple cameras capture what a lecturer writes on the board, questions and contributions from students, and up to full HD images depending on the size of the lecture hall.

Managing all of this technology is made simple for the lecturers. A touchscreen panel facilitates intuitive operation of the system without the need for complex adjustment of camera settings.


Teachers Achieve New Levels of Transparency

One of the landmark applications in the Edge Analytics Appliance is handwriting extraction, which lets students experience lectures more fully, rather than having to jot down notes.

The application uses a camera to record text and figures as an instructor writes them by hand on a whiteboard or blackboard, and then immediately draws them as if they are floating in front of the instructor.

Students viewing the lecture live from a remote location or from a recording afterward can see and recognize the text and diagrams, even if the original handwriting is unclear or hidden by the instructor’s body. The combined processing power of the compact, energy-efficient Jetson TX2 and Sony’s moving/unmoving object detection technology makes the transformation from the board to the screen seamless.

Handwriting extraction is also customizable: the transparency of the floating text and figures can be adjusted, so that characters that are faint or hard to read can be highlighted in color, making them more legible — and even more so than the original content written on the board.
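A rough sense of how such a feature can work: separate what moves (the instructor) from what stays put (the strokes on the board), accumulate the static strokes, and blend them back over the live frame with adjustable transparency. The OpenCV sketch below is a conceptual approximation, not Sony’s algorithm, and the camera index is hypothetical:

# Conceptual sketch of handwriting extraction: keep a running estimate of
# the static board content, ignore the moving instructor, and overlay the
# extracted strokes on the live frame. Not Sony's implementation.
import cv2
import numpy as np

cap = cv2.VideoCapture(0)                       # hypothetical board-facing camera
motion = cv2.createBackgroundSubtractorMOG2(history=200, detectShadows=False)
board = None                                    # running estimate of board content
alpha = 0.7                                     # overlay transparency (adjustable)

while True:
    ok, frame = cap.read()
    if not ok:
        break

    moving = motion.apply(frame)                # nonzero where something moves
    still_mask = cv2.bitwise_not(moving)
    static = cv2.bitwise_and(frame, frame, mask=still_mask)

    # Accumulate static regions so strokes persist even when occluded.
    if board is None:
        board = static.astype(np.float32)
    cv2.accumulateWeighted(static.astype(np.float32), board, 0.05, mask=still_mask)

    # Pull dark pen strokes out of the accumulated board image.
    gray = cv2.cvtColor(board.astype(np.uint8), cv2.COLOR_BGR2GRAY)
    strokes = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                    cv2.THRESH_BINARY_INV, 31, 15)

    overlay = frame.copy()
    overlay[strokes > 0] = (0, 255, 255)        # recolor strokes for legibility
    out = cv2.addWeighted(overlay, alpha, frame, 1 - alpha, 0)

    cv2.imshow("handwriting extraction (sketch)", out)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()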

Create Engaging Content Without Specialist Resources

Another innovative application is chroma key-less CG overlay, which uses state-of-the-art algorithms from Sony, like moving-object detection, to produce class content without the need for large-scale video editing equipment.

Like a personal greenscreen for presenters, the application seamlessly places the speaker in front of any animations, diagrams or graphs being presented.

Previously, moving-object detection algorithms required for this kind of compositing could only be run on professional workstations. With Jetson TX2, Sony was able to include this powerful deep learning-based feature within the compact, simple design of the Edge Analytics Appliance.

A Virtual Camera Operator

Numerous additional algorithms within the appliance include those for color-pattern matching, shape recognition, pose recognition and more. These enable features such as:

  • PTZ Auto Tracking — automatically tracks an instructor’s movements and ensures they stay in focus.
  • Focus Area Cropping — crops a specified portion from a video recorded on a single camera and creates effects as if the cropped portion were recorded on another camera. This can be used to generate, for example, a picture-in-picture effect, where an audience can simultaneously see a close-up of the presenter speaking against a wide shot of the rest of the stage.
  • Close Up by Gesture — automatically zooms in on and records students or audience members who stand up in preparation to ask a question.

With the high-performance Jetson platform, the Edge Analytics Appliance can easily handle a wide range of applications like these. The result is like a virtual camera operator that allows people to create engaging, professional-looking video presentations without the expertise or expense previously required to do so.

Officials at Prefectural University of Hiroshima say the new distance learning initiative has already led to greater student and teacher satisfaction with remote learning. Linking the university’s three campuses through the system is also fostering a sense of unity among the campuses.

“We chose Sony’s Edge Analytics Appliance for our new distance learning design because it helps us realize a realistic and comfortable learning environment for students by clearly showing the contents on the board and encouraging discussion. It was also appealing as a cost-effective solution as teachers can simply operate without additional staff,” said Kyousou Kurisu, director of public university corporation, Prefectural University of Hiroshima.

Sony plans to continually update applications available on the Edge Analytics Appliance. So, like any student, the system will only get better over time.

Whether It’s Rembrandt or Toilets, ‘Curiosity About How Things Work’ Is Key to Innovation, CGI Legend Pat Hanrahan Says

You may have never heard of Pat Hanrahan, but you have almost certainly seen his work.

His list of credits includes three Academy Awards, and his work on Pixar’s RenderMan rendering technology enabled Hollywood megahits Toy Story, Finding Nemo, Cars and Jurassic Park.

Hanrahan also founded Tableau Software — snatched up by Salesforce last year for nearly $16 billion — and has mentored countless technology companies as a Stanford professor.

Hanrahan is the most recent winner of the Turing Award, along with his longtime friend and collaborator Ed Catmull, a former president at Pixar and Disney Animation Studios. The award — a Nobel Prize, of sorts, in computer science — was for their work in 3D computer graphics and computer-generated imagery.

He spoke Thursday at NTECH, NVIDIA’s annual internal engineering conference. The digital event was followed by a virtual chat between NVIDIA CEO Jensen Huang and Hanrahan, who taught a computer graphics course at NVIDIA’s Silicon Valley campus during its early days.

While the theme of his address was “You Can Be an Innovator,” the main takeaway is that a “curiosity about how things work” is a prerequisite.

Hanrahan said his own curiosity for art and studying how Rembrandt painted flesh tones led to a discovery. Artists of that Baroque period, he said, applied a technique in oil painting with layers, called impasto, for depth of skin tone. This led to his own deeper study of light’s interaction with translucent surfaces.

“Artists, they sort of instinctively figured it out,” he said. “They don’t know about the physics of light transport. Inspired by this whole idea of Rembrandt’s, I came up with a mathematical model.”

Hanrahan said innovative people need to be instinctively curious. He tested that out himself when interviewing job candidates in the early days of Pixar. “I asked everybody that I wanted to hire into the engineering team, ‘How does a toilet work?’ To be honest, most people did not know how their toilet worked,” he said, “and these were engineers.”

At the age of seven, he’d already lifted the back cover of the toilet to find out what made it work.

Hanrahan worked with Steve Jobs at Pixar. Jobs’s curiosity and excitement about touch-capacitive sensors — technology that dated back to the 1970s — would eventually lead to the touch interface of the iPhone, he said.

After the talk, Huang joined the video feed from his increasingly familiar kitchen at home and interviewed Hanrahan. The wide-ranging conversation was like a time machine, with questions and reminiscences looking back 20 years and discussions peering forward to the next 20.

New Earth Simulator to Take on Planet’s Biggest Challenges

A new supercomputer under construction is designed to tackle some of the planet’s toughest life sciences challenges by speedily crunching vast quantities of environmental data.

The Japan Agency for Marine-Earth Science and Technology, or JAMSTEC, has commissioned tech giant NEC to build the fourth generation of its Earth Simulator. The new system, scheduled to become operational in March, will be based around SX-Aurora TSUBASA vector processors from NEC and NVIDIA A100 Tensor Core GPUs, all connected with NVIDIA Mellanox HDR 200Gb/s InfiniBand networking.

This will give it a maximum theoretical performance of 19.5 petaflops, putting it in the highest echelons of the TOP500 supercomputer ratings.

The new system will benefit from a multi-architecture design, making it suited to various research and development projects in the earth sciences field. In particular, it will act as an execution platform for efficient numerical analysis and information creation, coordinating data relating to the global environment.

Its work will span marine resources, earthquakes and volcanic activity. Scientists will gain deeper insights into cause-and-effect relationships in areas such as crustal movement and earthquakes.

The Earth Simulator will be deployed to predict and mitigate natural disasters, potentially minimizing loss of life and damage in the event of another natural disaster like the earthquake and tsunami that hit Japan in 2011.

Earth Simulator will achieve this by running large-scale simulations at high speed in ways previous generations of Earth Simulator couldn’t. The intent is also to have the system play a role in helping governments develop a sustainable socio-economic system.

The new Earth Simulator promises to deliver a multitude of vital environmental information. It also represents a quantum leap in terms of its own environmental footprint.

Earth Simulator 3, launched in 2015, offered a performance of 1.3 petaflops. It was a world-beater at the time, outstripping Earth Simulators 1 and 2, launched in 2002 and 2009, respectively.

The fourth-generation model will deliver more than 15x the performance of its predecessor, while keeping the same level of power consumption and requiring around half the footprint. It’s able to achieve these feats thanks to major research and development efforts from NVIDIA and NEC.

The latest processing developments are also integral to the Earth Simulator’s ability to keep up with rising data levels.

Scientific applications used for earth and climate modelling are generating increasing amounts of data that require the most advanced computing and network acceleration to give researchers the power they need to simulate and predict our world.

NVIDIA Mellanox HDR 200Gb/s InfiniBand networking with in-network compute acceleration engines, combined with NVIDIA A100 Tensor Core GPUs and NEC SX-Aurora TSUBASA processors, gives JAMSTEC a world-leading marine research platform that is critical for expanding earth and climate science and accelerating discoveries.

Modeled Behavior: dSPACE Introduces High-Fidelity Vehicle Dynamics Simulation on NVIDIA DRIVE Sim

When it comes to autonomous vehicle simulation testing, every detail must be on point.

With its high-fidelity automotive simulation model (ASM) on NVIDIA DRIVE Sim, global automotive supplier dSPACE is helping developers keep virtual self-driving true to the real world. By combining the modularity and openness of the DRIVE Sim simulation software platform with highly accurate vehicle models like dSPACE’s, every minor aspect of an AV can be thoroughly recreated, tested and validated.

The dSPACE ASM vehicle dynamics model makes it possible to simulate elements of the car — suspension, tires, brakes — all the way to the full vehicle powertrain and its interaction with the electronic control units that power actions such as steering, braking and acceleration.

As the world continues to work from home, simulation has become an even more crucial tool in autonomous vehicle development. However, to be effective, it must be able to translate to real-world driving.

dSPACE’s modeling capabilities are key to understanding vehicle behavior in diverse conditions, enabling the exhaustive and high-fidelity testing required for safe self-driving deployment.

Detailed Validation

High-fidelity simulation is more than just a realistic-looking car driving in a recreated traffic scenario. It means in any given situation, the simulated vehicle will behave just as a real vehicle driving in the real world would.

If an autonomous vehicle suddenly brakes on a wet road, a range of forces affects how and where the vehicle stops. It could slide further than intended or fishtail, depending on the weather and road conditions. These possibilities require the ability to simulate dynamics such as friction and yaw, the way the vehicle rotates about its vertical axis.
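As a back-of-the-envelope illustration of why surface conditions matter in that scenario, even a straight-line friction model shows how much stopping distance changes with the road surface; a full vehicle dynamics model such as dSPACE’s ASM additionally captures suspension, tire, yaw and powertrain effects. The friction coefficients below are typical textbook values, not dSPACE parameters:

# Straight-line stopping distance d = v^2 / (2 * mu * g).
# Friction coefficients are typical textbook values, not dSPACE parameters.
G = 9.81                       # gravitational acceleration, m/s^2

def stopping_distance(speed_kmh, mu):
    v = speed_kmh / 3.6        # km/h to m/s
    return v ** 2 / (2 * mu * G)

for surface, mu in [("dry asphalt", 0.8), ("wet asphalt", 0.5), ("ice", 0.1)]:
    print(f"{surface:12s}: {stopping_distance(100, mu):6.1f} m from 100 km/h")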

The dSPACE ASM vehicle dynamics model includes these factors, which can then be compared with a real vehicle in the same scenario. It also tests how the same model acts in different simulation environments, ensuring consistency with both on-road driving and virtual fleet testing.

A Comprehensive and Diverse Platform

The NVIDIA DRIVE Sim platform taps into the computing horsepower of NVIDIA RTX GPUs to deliver a revolutionary, scalable, cloud-based computing platform, capable of generating billions of qualified miles for autonomous vehicle testing.

It’s open, meaning both users and partners can incorporate their own models in simulation for comprehensive and diverse driving scenarios.

dSPACE chose to integrate its vehicle dynamics ASM with DRIVE Sim due to its ability to scale for a wide range of testing conditions. When running on the NVIDIA DRIVE Constellation platform, it can perform both software-in-the-loop and hardware-in-the-loop testing, which includes the in-vehicle AV computer controlling the vehicle in the simulation process. dSPACE’s broad expertise and long track-record in hardware-in-the-loop simulation make for a seamless implementation of ASM on DRIVE Constellation.

Learn more about the dSPACE ASM vehicle dynamics model in the DRIVE Sim platform at the company’s upcoming GTC session. Register before Sept. 25 to receive Early Bird pricing.
