Europe Launches New Era in HPC with World’s Fastest AI Supercomputer

Four new supercomputers backed by a pan-European initiative will use NVIDIA’s data center accelerators, networks and software to advance AI and high performance computing.

They include one system dubbed Leonardo, unveiled today at Italy’s CINECA research center, which will use NVIDIA technologies to deliver the world’s most powerful AI supercomputer. The four are the first of eight systems to be announced this year targeting spots among the world’s 50 most powerful computers.

Together, they’ll form a regional network, “an engine to power Europe’s data economy,” said EuroHPC, the group driving the effort, in a white paper outlining its goals.

The systems will apply AI and data analytics across scientific and commercial applications that range from fighting COVID-19 and climate change to the design of advanced airplanes, cars, drugs and materials.

Joining Leonardo is a wave of new AI supercomputers planned for the Czech Republic, Luxembourg and Slovenia that will act as national centers of competence, expanding skills and creating jobs.

NVIDIA GPUs, InfiniBand Power Latest Systems

All four newly announced supercomputers use NVIDIA Ampere architecture GPUs and NVIDIA Mellanox HDR InfiniBand networks to tap an ecosystem of hundreds of HPC and AI applications. Atos, an NVIDIA systems partner headquartered in France, will build three of the four systems; Hewlett Packard Enterprise will construct the fourth.

The new systems join the 333 TOP500 supercomputers powered by NVIDIA GPUs, networking or both.

NVIDIA GPUs accelerate 1,800 HPC applications, nearly 800 of them available today in the GPU application catalog and NGC, NVIDIA’s hub for GPU-optimized software.

The new systems all use HDR 200Gb/s InfiniBand for low latency, high throughput and in-network computing. It’s the latest version of InfiniBand, already powering supercomputers across Europe.

A Brief Tour of Europe’s Latest Supercomputers

Leonardo will be the world’s fastest AI supercomputer. Atos is harnessing nearly 14,000 NVIDIA Ampere architecture GPUs and HDR 200Gb/s InfiniBand networking to deliver a system with 10 exaflops of AI performance. It will use the InfiniBand Dragonfly+ network topology to deliver both flexibility and scalable performance.

Researchers at CINECA will apply that power to advance science, simulating planetary forces behind climate change and molecular movements inside a coronavirus. The center is perhaps best known for its work on Quantum ESPRESSO, a suite of open-source codes for modeling nanoscale materials for jobs such as engineering better batteries.

A new supercomputer in Luxembourg called MeluXina, also part of the EuroHPC network, will connect 800 NVIDIA A100 GPUs on HDR 200Gb/s InfiniBand links. The system, to be built by Atos and powered by green energy from wood waste, will pack nearly 500 petaflops of AI performance.

MeluXina will address commercial applications and scientific research. It plans to offer access to users leveraging HPC and AI to advance work in financial services as well as manufacturing and healthcare.

Eastern Europe Powers Up

The new Vega supercomputer at the Institute of Information Science (IZUM) in Maribor, Slovenia, will be based on the Atos BullSequana XH2000 system. The supercomputer, named after Slovenian mathematician Jurij Vega, includes 240 A100 GPUs and 1,800 HDR 200Gb/s InfiniBand endpoints.

Vega will help “ensure a new generation of experts and developers, as well as the wider Slovenian community, can meet new challenges within our national consortium and contribute to regional and European HPC initiatives,” said Aleš Bošnjak, IZUM’s director in a statement issued by EuroHPC.

A total of 32 countries are participating in the EuroHPC effort.

The IT4Innovations National Supercomputing Center will host what’s expected to become the most powerful supercomputer in the Czech Republic. It will use 560 NVIDIA A100 GPUs to deliver nearly 350 petaflops of AI performance — 7x the capabilities of the center’s existing system.

The supercomputer will be based on the HPE Apollo 6500 systems from Hewlett Packard Enterprise (HPE). It will serve researchers at the VSB – Technical University of Ostrava, where it’s based, as well as an expanding set of external academic and industrial users employing a mix of simulations, data analytics and AI.

The story of Europe’s ambitions in HPC and AI is still being written.

EuroHPC has yet to announce its plans for systems in Bulgaria, Finland, Portugal and Spain. And beyond that work, the group has already sketched out plans that stretch to 2027.

The post Europe Launches New Era in HPC with World’s Fastest AI Supercomputer appeared first on The Official NVIDIA Blog.

AI Draws World’s Smallest Wanted Posters to Apprehend COVID

Using AI and a supercomputer simulation, Ken Dill’s team drew the equivalent of wanted posters for the gang of proteins that make up SARS-CoV-2, the coronavirus that causes COVID-19. With a little luck, one of their portraits could identify a way to arrest the coronavirus with a drug.

When the pandemic hit, “it was terrible for the world, and a big research challenge for us,” said Dill, who leads the Laufer Center for Physical & Quantitative Biology at Stony Brook University on Long Island, New York.

For a decade, he helped the center assemble the researchers and tools needed to study the inner workings of proteins — complex molecules that are fundamental to cellular life. The center has a history of applying its knowledge to viral proteins, helping others identify drugs to disable them.

“So, when the pandemic came, our folks wanted to spring into action,” he said.

AI, Simulations Meet at the Summit

The team aimed to use a combination of physics and AI tools to predict the 3D structure of more than a dozen coronavirus proteins based on lists of the amino acid strings that define them. It won a grant for time on the IBM-built Summit supercomputer at Oak Ridge National Laboratory to crunch its complex calculations.

“We ran 30 very extensive simulations in parallel, one on each of 30 GPUs, and we ran them continuously for at least four days,” explained Emiliano Brini, a junior fellow at the Laufer Center. “Summit is a great machine because it has so many GPUs, so we can run many simulations in parallel,” he said.
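The fan-out pattern Brini describes, one independent simulation per GPU, is commonly implemented by pinning each job to a single device with `CUDA_VISIBLE_DEVICES`. The sketch below is an illustration of that general pattern, not the Laufer team's actual scripts, and the simulation command is a placeholder:

```python
import os
import subprocess

def launch_per_gpu(sim_cmd, num_gpus):
    """Launch one copy of sim_cmd per GPU, each pinned to its own device.

    Returns the list of Popen handles so the caller can wait on them all.
    """
    procs = []
    for gpu in range(num_gpus):
        # Each process sees exactly one GPU, so the simulation code can
        # simply address "device 0" without any multi-GPU logic.
        env = dict(os.environ, CUDA_VISIBLE_DEVICES=str(gpu))
        procs.append(subprocess.Popen(sim_cmd + [f"--run-id={gpu}"], env=env))
    return procs

if __name__ == "__main__":
    # Hypothetical placeholder command; a real run would invoke the MD binary.
    handles = launch_per_gpu(["echo", "simulate"], num_gpus=4)
    for p in handles:
        p.wait()
```

Because the replicas are fully independent, this embarrassingly parallel layout scales linearly with the number of GPUs available.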

“Our physics-based modeling eats a lot of compute cycles. We use GPUs almost exclusively for their speed,” said Dill.

Sharing Results to Help Accelerate Research

Thanks to the acceleration, the predictions are already in. The Laufer team quickly shared them with about a hundred researchers working on a dozen separate projects that conduct painstakingly slow experiments to determine the actual structure of the proteins.

“They indicated some experiments could be done faster if they had hunches from our work of what those 3D structures might be,” said Dill.

Now it’s a waiting game. If one of the predictions gives researchers a leg up in finding a weakness that drug makers can exploit, it would be a huge win. It could take science one step closer to putting a general antiviral drug on the shelf of your local pharmacy.

Melding Machine Learning and Physics

Dill’s team uses a molecular dynamics program called MELD. It blends physical simulations with insights from machine learning based on statistical models.

AI provides MELD with key information to predict a protein’s 3D structure from its sequence of amino acids. It quickly finds patterns across a database of atomic-level information on 200,000 proteins gathered over the last 50 years.

MELD uses this information in compute-intensive physics simulations to determine the protein’s detailed structure. Further simulations then can predict, for example, what drug molecules will bind tightly to a specific viral protein.
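That division of labor, a learned prior narrowing the search space and a physics calculation picking the best structure within it, can be illustrated with a deliberately tiny toy. Nothing below is MELD's actual algorithm or API; `ml_prior` and `energy` are stand-in placeholders for the machine learning model and the molecular force field:

```python
import random

def ml_prior(sequence):
    """Stand-in for the learned model: a coarse guess of the optimal
    coordinate plus an uncertainty half-width. (Toy placeholder.)"""
    guess = sum(ord(c) for c in sequence) % 10  # deterministic fake prediction
    return float(guess), 2.0

def energy(x):
    """Toy physics: a one-dimensional potential standing in for a force field."""
    return (x - 3.0) ** 2

def restrained_search(sequence, steps=10_000, seed=0):
    """Sample only inside the ML-restrained window, keeping the
    lowest-energy candidate: AI narrows the search, physics decides."""
    center, width = ml_prior(sequence)
    rng = random.Random(seed)
    best_x, best_e = center, energy(center)
    for _ in range(steps):
        x = rng.uniform(center - width, center + width)
        e = energy(x)
        if e < best_e:
            best_x, best_e = x, e
    return best_x, best_e
```

The payoff is the same as in the real pipeline: the expensive physics search runs over a small restrained region instead of the full conformational space.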

“So, both these worlds — AI inference and physics simulations — are playing big roles in helping drug discovery,” said Dill. “We get the benefits of both methods, and that combination is where I think the future is.”

MELD runs on CUDA, NVIDIA’s accelerated computing platform for GPUs. “It would take prohibitively long to run its simulations on CPUs, so the majority of biological simulations are done on GPUs,” said Brini.

Playing a Waiting Game

The COVID-19 challenge gave Laufer researchers with a passion for chemistry a driving focus. Now they await feedback on their work on Summit.

“Once we get the results, we’ll publish what we learn from the mistakes. Many times, researchers have to go back to the drawing board,” he said.

And every once in a while, they celebrate, too.

Dill hosted a small, socially distanced gathering for a half-dozen colleagues in his backyard after the Summit work was complete. If those results turn up a win, there will be a much bigger celebration extending far beyond the Stony Brook campus.

How GPUs Are Helping Paris’ Public Hospital System Combat the Spread of COVID-19

In the battle against COVID-19, Greater Paris University Hospitals – Public Assistance Hospital of Paris (AP-HP is the French acronym) isn’t just on the medical front lines — it’s on the data front lines as well.

With a network of 39 hospitals treating 8.3 million patients each year, AP-HP plays a major role in the fight against COVID-19.

Along with its COVID-19 cases comes an enormous amount of data, now including geodata that can potentially help lessen the impact of the pandemic. AP-HP, which partners with seven universities, already had the ability to analyze large amounts of medical data; it had previously created dashboards that combined cancer cases with geodata. Extending that role during the pandemic was therefore a logical step.

The expected volume of COVID-19 data and geodata would likely have exceeded AP-HP’s data-crunching capacity. To meet this challenge, the hospital’s information systems administrators turned to Kinetica, a provider of streaming data warehouses and real-time analytics and a member of the NVIDIA Inception program for AI startups.

Kinetica’s offering harnesses the power of NVIDIA GPUs to quickly convert case location data into usable intelligence. And in the fight against COVID-19, speed is everything.

The project team also used NVIDIA RAPIDS to speed up the machine learning algorithms integrated into the platform. RAPIDS accelerates analytics and data science pipelines on NVIDIA GPUs by taking advantage of GPU parallelism and high memory bandwidth.

“Having the ability to perform this type of analysis in real time is really important during a pandemic,” said Hector Countouris, the project lead at AP-HP. “And more data is coming.”

Analyzing COVID Contact Data

What Countouris and his colleagues are most focused on is using COVID-related geodata to understand where virus “hot spots” are and the dynamic of the outbreak. Looking for cluster locations can help decision-making at the district or region level.
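At its simplest, hot-spot detection of this kind reduces to bucketing case coordinates into grid cells and flagging cells whose counts cross a threshold. The pure-Python sketch below illustrates that idea only; the AP-HP pipeline itself runs on Kinetica and RAPIDS, not this code:

```python
from collections import Counter

def hot_spots(cases, cell_deg=0.01, threshold=5):
    """Bucket (lat, lon) case locations into a square grid and return the
    cells whose case count meets the threshold, busiest first.

    cell_deg is the grid cell size in degrees; 0.01 is roughly 1 km of
    latitude, a plausible resolution for district-level decisions.
    """
    grid = Counter(
        (round(lat // cell_deg), round(lon // cell_deg)) for lat, lon in cases
    )
    return sorted(
        ((cell, n) for cell, n in grid.items() if n >= threshold),
        key=lambda item: -item[1],
    )
```

A GPU data warehouse performs essentially this group-by-and-filter over millions of rows, which is why the interactive, real-time version of the question needs the acceleration described above.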

In addition, they’re looking at new signals to improve early detection of COVID patients. This includes working with data from other regional agencies.

If patients are diagnosed with COVID, they’ll be asked by the relevant agencies via a phone call about their recent whereabouts and contacts to help with contact tracing. This is the first time that a wide range of data from different partners in the Paris area will be integrated to allow for contact tracing and timely alerts about a potential exposure. The result will be a newfound ability to see how clusters of COVID-19 cases evolve.

“We hope that in the near future we will be able to follow how a cluster evolves in real time,” said Countouris.

The goal is to enable public health decision-makers to implement prevention and control measures and assess their effectiveness. The data can also be integrated with other demographic data to study the viral spread and its possible dependency on socio-economics and other factors.

Attacking Bottlenecks with GPUs

Prior to engaging with Kinetica, loading the data for such data-intensive projects took so long that it couldn’t be analyzed quickly enough to deliver real-time benefits.

“Now, I don’t feel like I have a bottleneck,” said Countouris. “We are continuously integrating data and delivering dashboards to decision makers within hours. And with robust real-time pipelines allowing for continuous data ingestion, we can now focus on building better dashboards.”

In the past, to get data in a specific and usable format, they would need to do a lot of pre-processing. With Kinetica’s Streaming Data Warehouse powered by NVIDIA V100 Tensor Core GPUs, that’s no longer the case. Users can access the much richer datasets they demand.

Kinetica’s platform is available on NVIDIA NGC, a catalog of GPU-optimized AI containers that let enterprises quickly operationalize extreme analytics, machine learning and data visualization. This eliminates complexity and lets organizations deploy cloud, on-premises or hybrid models for optimal business operations.

“I don’t think we could meet user expectations for geodata without GPU power,” he said. “There is just too much data and geodata to provide for too many users at the same time.”

AP-HP’s COVID-related work has already built a foundation upon which to do follow-up work related to emergency responses in general. The hospital information system’s interest in that kind of data is far from over.

“The fact that we helped the decision-making process and that officials are using our data is the measure of success,” said Countouris. “We have a lot to do. This is only the beginning.”

Countouris presented the team’s work last week at the GPU Technology Conference. Registered GTC attendees can view the talk on demand. It will be available for replay to the general public early next month.

Kinetica will also be part of the NVIDIA Startup Village Booth at the HLTH conference, presenting on Oct. 16 at 2 p.m. Pacific time.

At GTC, Educators and Leaders Focus on Equity in AI, Developer Diversity

Not everyone needs to be a developer, but everyone will need to be an AI decision maker.

That was the message behind a panel discussion on Advancing Equitable AI, which took place at our GPU Technology Conference last week. It was one of several GTC events advancing the conversation on diversity, equity and ethics in AI.

This year, we strengthened our support for women and underrepresented developers and scientists at GTC by providing conference passes to members of professional organizations supporting women, Black and Latino developers. Professors at historically Black colleges and universities — including Prairie View A&M University, Hampton University and Jackson State University — as well as groups like Black in AI and LatinX in AI received complimentary access to training from the NVIDIA Deep Learning Institute.

A Forbes report last year named GTC as one of the U.S.’s top conferences for women to attend to further their careers in AI. At this month’s event, women accounted for more than one in five registered attendees, double last year’s count and an almost 4x increase since 2017, and for more than 100 of the speakers.

And in a collaboration with the National Society of Black Engineers that will extend beyond GTC, we created opportunities for the society’s collegiate and professional developers to engage with NVIDIA’s recruiting team, which provided guidance on navigating the new world of virtual interviewing and networking.

“We’re excited to be embarking on a partnership with NVIDIA,” said Johnnie Tangle, national finance chairman of NSBE Professionals. “Together, we are both on the mission of increasing the visibility of Blacks in development and showing why diversity in the space enhances the community as a whole.”

Panel Discussions: Paving Pathways for Equitable AI

Two power-packed, all-female panels at GTC focused on a roadmap for responsible and equitable AI.

In a live session that drew over 250 attendees, speakers from the University of Florida, the Boys and Girls Club of Western Pennsylvania and AI4All — a nonprofit working to increase diversity and inclusion in AI — discussed the importance of AI exposure and education for children and young adults from underrepresented groups.

When a broader group of young people has access to AI education, “we naturally see a way more diverse and interesting set of problems being addressed,” said Tess Posner, CEO of AI4All, “because young people and emerging leaders in the field are going to connect the technology to a problem they’ve seen in their own lives, in their own experience or in their communities.”

The conversation also covered the role parents and schools play in fostering awareness and exposure to STEM subjects in their children’s schools, as well as the need for everyone — developers or not — to have a foundational understanding of how AI works.

“We want students to be conscious consumers, and hopefully producers,” said Christina Gardner-McCune, associate professor and director of the Engaging Learning Lab at the University of Florida, and co-chair of the AI4K12 initiative. “Everybody is going to be making decisions about what AI technologies are used in their homes, what AI technologies their children interact with.”

Later in the week, a panel titled “Aligning Around Common Values to Advance AI Policy” explored ideas to pave the way for responsible AI on a global scale.

The webinar featured representatives from the U.S. National Institute of Standards and Technology, Scotland-based innovation center The Data Lab, and C Minds, a think tank focused on AI initiatives in Latin America. Speakers shared their priorities for developing trustworthy AI and defined what success would look like to them five years in the future.

Dinner with Strangers: Developer Diversity in AI

In a virtual edition of the popular Dinner with Strangers networking events at GTC, experts from NVIDIA and NSBE partnered to moderate two conversations with GTC attendees. NVIDIA employees shared their experiences and tips with early-career attendees, offering advice on how to build a personal brand in a virtual world, craft a resume and prepare for interviews.

For more about GTC, watch NVIDIA founder and CEO Jensen Huang’s keynote below.

Lilt CEO Spence Green Talks Removing Language Barriers in Business

When large organizations require translation services, there’s no room for the amusing errors often produced by automated apps. That’s where Lilt, an AI-powered enterprise language translation company, comes in.

Lilt CEO Spence Green spoke with AI Podcast host Noah Kravitz about how the company is using a human-in-the-loop process to achieve fast, accurate and affordable translation.

Lilt does so with predictive typing software, in which professional translators receive AI-based suggestions for how to translate content. By relying on machine assistance, Lilt’s translations are efficient while retaining accuracy.

However, including people in the company’s workflow also makes localization possible. Professional translators use cultural context to take direct translations and adjust phrases or words to reflect the local language and customs.

Lilt currently supports translations of 45 languages, and aims to continue improving its AI and make translation services more affordable.

Key Points From This Episode:

  • Green’s experience living in Abu Dhabi was part of the inspiration behind Lilt. While there, he met a man, an accountant, who had immigrated from Egypt. When asked why he no longer worked in accounting, the man explained that he didn’t speak English, and accountants who only spoke Arabic were paid less. Green didn’t want the difficulty of adult language learning to be a source of inequality in a business environment.
  • Lilt was founded in 2015, and evolved from a solely software company into a software and services business. Green explains the steps it took for the company to manage translators and act as a complete solution for enterprises.

Tweetables:

“We’re trying to provide technology that’s going to drive down the cost and increase the quality of this service, so that every organization can make all of its information available to anyone.” — Spence Green [2:53]

“One could argue that [machine translation systems] are getting better at a faster rate than at any point in the 70-year history of working on these systems.” — Spence Green [14:01]

You Might Also Like:

Hugging Face’s Sam Shleifer Talks Natural Language Processing

Hugging Face is more than just an adorable emoji — it’s a company that’s demystifying AI by transforming the latest developments in deep learning into usable code for businesses and researchers, explains research engineer Sam Shleifer.

Credit Check: Capital One’s Kyle Nicholson on Modern Machine Learning in Finance

Capital One Senior Software Engineer Kyle Nicholson explains how modern machine learning techniques have become a key tool for financial and credit analysis.

A Conversation with the Entrepreneur Behind the World’s Most Realistic Artificial Voices

Voice recognition is one thing; creating natural-sounding artificial voices is quite another. Lyrebird co-founder Jose Sotelo speaks about how the startup is using deep learning to create a system that listens to human voices and generates speech mimicking the original speaker.

Tune in to the AI Podcast

Get the AI Podcast through iTunes, Google Podcasts, Google Play, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn. If your favorite isn’t listed here, drop us a note.

Make the AI Podcast Better

Have a few minutes to spare? Fill out this listener survey. Your answers will help us make a better podcast.

On Cloud Mine: Lenovo, Microsoft and NVIDIA Bring Cloud Computing on Premises with Azure Stack Hub

The popularity of public cloud offerings is evident — just look at how top cloud service providers report double-digit growth year over year.

However, application performance requirements and regulatory compliance issues, to name two examples, often require data to be stored locally to reduce distance and latency and to place data entirely within a company’s control. In these cases, standard private clouds may also offer less flexibility, agility or on-demand capacity.

To help resolve these issues, Lenovo, Microsoft and NVIDIA have engineered a hyperconverged hybrid cloud that enables Azure cloud services within an organization’s data center.

By integrating Lenovo ThinkAgile SX, Microsoft Azure Stack Hub and NVIDIA Mellanox networking, organizations can deploy a turnkey, rack-scale cloud that’s optimized with a resilient, highly performant and secure software-defined infrastructure.

Fully Integrated Azure Stack Hub Solution

Lenovo ThinkAgile SX for Microsoft Azure Stack Hub helps satisfy regulatory compliance requirements and removes performance concerns. Because all data is kept on secure servers in a customer’s data center, it’s much simpler to comply with a country’s governance laws and to implement an organization’s own policies and practices.

Similarly, by reducing the distance that data must travel, latency is reduced and application performance goals can be more easily achieved. At the same time, customers can cloud-burst some workloads to the Microsoft Azure public cloud, if desired.

Lenovo, Microsoft and NVIDIA worked together to make sure everything performs right out of the box. There’s no need to worry about configuring and adjusting settings for virtual or physical infrastructure.

The power and automation of Azure Stack Hub software, the convenience and reliability of Lenovo’s advanced servers, and the high performance of NVIDIA networking combine to enable an optimized hybrid cloud. Offering the automation and flexibility of Microsoft Azure Cloud with the security and performance of on-premises infrastructure, it’s an ideal platform to:

  • deliver Azure cloud services from the security of your own data center,
  • enable rapid development and iteration of applications with on-premises deployment tools,
  • unify application development across entire hybrid cloud environments, and
  • easily move applications and data across private and public clouds.

Agility of a Hybrid Cloud 

Azure Stack Hub also seamlessly operates with Azure, delivering an orchestration layer that enables the movement of data and applications to the public cloud. This hybrid cloud protects the data and applications that need protection and offers lower latencies for accessing data. And it still provides the public cloud benefits organizations may need, such as reduced costs, increased infrastructure scalability and flexibility, and protection from data loss.

A hybrid approach to cloud computing keeps all sensitive information onsite and often includes centrally used applications that may have some of this data tied to them. With a hybrid cloud infrastructure in place, IT personnel can focus more on building proficiencies in deploying and operating cloud services — such as IaaS, PaaS and SaaS — and less on managing infrastructure.

Network Performance 

A hybrid cloud requires a network that can handle all data communication between clients, servers and storage. The Ethernet fabric used for networking in the Lenovo ThinkAgile SX for Microsoft Azure Stack Hub leverages NVIDIA Mellanox Spectrum Ethernet switches — powered by the industry’s highest-performing ASICs — along with NVIDIA Cumulus Linux, the most advanced open network operating system.

At 25Gb/s data rates, these switches provide cloud-optimized delivery of data at line-rate. Using a fully shared buffer, they support fair bandwidth allocation and provide predictably low latency, as well as traffic flow prioritization and optimization technology to deliver data without delays, while the hot-swappable redundant power supplies and fans help provide resiliency for business-sensitive traffic.

Modern networks require advanced offload capabilities, including remote direct memory access (RDMA), TCP, overlay networks (for example, VXLAN and Geneve) and software-defined storage acceleration. Implementing these at the network layer frees expensive CPU cycles for user applications while improving the user experience.

To handle the high-speed communications demands of Azure Stack Hub, Lenovo configured compute nodes with dual-port 10/25/100GbE NVIDIA Mellanox ConnectX-4 Lx, ConnectX-5 or ConnectX-6 Dx NICs. The ConnectX NICs are designed to address cloud, virtualized infrastructure, security and network storage challenges. They use native hardware support for RoCE, offer stateless TCP offloads, accelerate overlay networks and support NVIDIA GPUDirect technology to maximize performance of AI and machine learning workloads. All of this results in much needed higher infrastructure efficiency.

RoCE for Improved Efficiency 

Microsoft Azure Stack Hub leverages Storage Spaces Direct (S2D) and Microsoft’s Server Message Block (SMB) Direct 3.0. SMB Direct uses high-speed RoCE to transfer large amounts of data with little CPU intervention. SMB Multichannel allows servers to use multiple network connections simultaneously and provides fault tolerance through the automatic discovery of network paths.

The addition of these two features allows NVIDIA RoCE-enabled ConnectX Ethernet NICs to deliver line-rate performance and optimize data transfer between server and storage over standard Ethernet. Customers with Lenovo ThinkAgile SX servers or the Lenovo ThinkAgile SX Azure Hub can deploy storage on secure file servers while delivering the highest performance. As a result, S2D is extremely fast, with disaggregated file server performance almost equaling that of locally attached storage.

Testing performed by Microsoft shows NVIDIA Networking RoCE offloads improve S2D performance and CPU efficiency.

Run More Workloads

By using intelligent hardware accelerators and offloads, the NVIDIA RoCE-enabled NICs offload I/O tasks from the CPU, freeing up resources to accelerate application performance instead of making data wait for the attention of a busy CPU.

The result is lower latencies and an improvement in CPU efficiencies. This maximizes the performance in Microsoft Azure Stack deployments by leaving the CPU available to run other application processes. Efficiency gets a boost since users can host more VMs per physical server, support more VDI instances and complete SQL Server queries more quickly.

Using the offloads in NVIDIA ConnectX NICs frees up CPU cores to support more users and more applications, improving server efficiency.

A Transformative Experience with a ThinkAgile Advantage

Lenovo ThinkAgile solutions include a comprehensive portfolio of software and services that supports the full lifecycle of infrastructure. At every stage — planning, deploying, supporting, optimizing and end-of-life — Lenovo provides the expertise and services needed to get the most from technology investments.

This includes single-point-of-contact support for all the hardware and software used in the solution, including Microsoft’s Azure Stack Hub and the ConnectX NICs. Customers never have to worry about who to call — Lenovo takes calls and drives them to resolution.

Learn more about Lenovo ThinkAgile SX for Microsoft Azure Stack Hub with NVIDIA Mellanox networking.

Turn Your Radio On: NVIDIA Engineer Creates COVID-Safe Choirs in Cars

Music and engineering were completely separate parts of Bryce Denney’s life until the pandemic hit.

By day, the Massachusetts-based NVIDIA engineer helped test processors. On nights and weekends, he played piano chamber music and accompanied theater troupes that his wife, Kathryn, sang in or led.

It was a good balance for someone who graduated with a dual major in physics and piano performance.

Once COVID-19 arrived, “we had to take the calendar off the wall — it was too depressing to look at everything that was canceled,” Bryce said.

“I had this aimless sense of not feeling sure who I was anymore,” said Kathryn, a former public school teacher who plays French horn and conducts high school and community theater groups. “This time last year I was working in five shows and a choir that was preparing for a tour of Spain,” she said.

That’s when Bryce got an idea for some musical engineering.

Getting Wired for Sound

He wanted to help musicians in separate spaces hear each other without nagging delays. As a proof of concept, he ran cables for mics and headphones from a downstairs piano to an upstairs bedroom where Kathryn played her horn.

The duet’s success led to convening a quartet from Kathryn’s choir in the driveway, singing safely distanced in separate cars with wired headsets linked to a small mixer. The Driveway Choir was born.

Driveway choir singers harmonize over an FM radio connection.

“We could hear each other breathe and we joked back and forth,” said Kathryn.

“It was like an actual rehearsal again and so much more rewarding” than Zoom events or virtual choirs where members recorded one part at a time and mixed them together, said Bryce.
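The advantage over video chat comes down to latency. A back-of-envelope sketch helps show why a radio link feels immediate: radio waves cross a driveway in microseconds, while even sound through open air takes tens of milliseconds, and networked audio adds far more buffering on top. The 20-meter distance below is an assumption for illustration, not a figure from the story.

```python
# Back-of-envelope propagation-delay sketch; the 20 m driveway distance
# is an assumed example, not a figure from the story.
SPEED_OF_SOUND = 343.0   # m/s in air at room temperature
SPEED_OF_LIGHT = 3.0e8   # m/s; FM radio waves propagate at light speed

def delay_ms(distance_m: float, speed_m_s: float) -> float:
    """One-way propagation delay in milliseconds."""
    return distance_m / speed_m_s * 1000.0

# A singer 20 m away across a driveway:
print(f"through air: {delay_ms(20, SPEED_OF_SOUND):.1f} ms")   # ~58 ms
print(f"via FM radio: {delay_ms(20, SPEED_OF_LIGHT):.6f} ms")  # negligible
```

Musicians generally start to struggle once delays exceed roughly 25 to 30 milliseconds, which is why headsets fed by a near-instant FM link can keep an ensemble together when a typical videoconference, with latency an order of magnitude higher, cannot.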

But it would take a rat’s nest of wires to link a full choir of 20 singers, so Bryce boned up on wireless audio engineering.

Physics to the Rescue

He reached out to people like David Newman, a voice teacher at James Madison University, who was also experimenting with choirs in cars. And he got tips about wireless mics that are inexpensive and easy to clean.

Newman and others coached him on broadcasting over FM frequencies, and how to choose bands to avoid interference from local TV stations.
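Choosing a clear band can be sketched as a simple channel search. The code below is a hypothetical illustration, not Bryce's actual method: the station list and guard spacing are assumptions. US FM channels run from 88.1 to 107.9 MHz in 0.2 MHz steps, and the sketch keeps only channels at least one step clear of anything occupied locally.

```python
# Hypothetical sketch of picking open FM frequencies for a short-range
# broadcast. Works in integer tenths of a MHz to avoid float-comparison bugs.

def open_fm_frequencies(occupied_mhz, guard_tenths=2):
    """Return FM channels (MHz) at least guard_tenths/10 MHz clear
    of every locally occupied frequency."""
    channels = range(881, 1080, 2)                 # 88.1 .. 107.9 MHz
    occupied = [round(f * 10) for f in occupied_mhz]
    return [c / 10 for c in channels
            if all(abs(c - o) > guard_tenths for o in occupied)]

# Example with three made-up local stations in use:
local = [88.1, 93.7, 107.9]
print(open_fm_frequencies(local)[:5])  # [88.5, 88.7, 88.9, 89.1, 89.3]
```

In practice the coordination Newman described also accounts for adjacent TV channels and transmitter power, which this toy search ignores.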

“It was an excuse to get into physics again,” said Bryce.

Within a few weeks he assembled a system and created a site for his driveway choir, where he posted videos of events, a spreadsheet of options for configuring a system, and packing lists for how to take it on the road. A basic setup for 16 singers costs $1,500 and can scale up to accommodate 24 voices.

“Our goal is to make this accessible to other groups, so we choose less-expensive equipment and write out a step-by-step process,” said Kathryn, who has helped organize 15 events using the gear.

Jan Helbers, a neighbor with wireless expertise, chipped in by designing an antenna distribution system that can sit on top of a car on rainy days.

Bryce posted instructions on how to build it for about $300 complete with a bill of materials and pictures of the best RF connectors to use. A commercial antenna distribution system of this size would cost thousands.

“I was excited about that because here in Marlborough it will be snowy soon and we want to keep singing,” said Bryce.

From Alaska to the Times

The Denneys helped the choir at St. Anne’s Episcopal church in nearby Lincoln, Mass., have its first live rehearsal in four months and record a track used in a Sunday service. Now the choir is putting together its own system.

The church is one of at least 10 groups that have contacted the Denneys about creating driveway choirs of their own, including one in Alaska. They expect more calls after a New York Times reporter joined one of their recent events and wrote a story about his experience.

There’s no shortage of ideas for what’s next. Driveway choirs for nursing homes, singalongs in big mall parking lots or drive-in theaters, Christmas caroling for neighbors.

“I wouldn’t be surprised if we did a Messiah sing,” said Kathryn, who has been using some of her shelter-in-place time to start writing a musical called Connected.

“I think about that song, ‘How Can I Keep from Singing,’ that’s the story of our lives,” she said.

Basic gear for a driveway choir.

At top: Kathryn conducts and Bryce plays piano at a Driveway Choir event with 24 singers in Concord, Mass.



AI for America: US Lawmakers Encourage ‘Massive Investment’ in Artificial Intelligence


The upcoming election isn’t the only thing on lawmakers’ minds. Several congressional representatives have been grappling with U.S. AI policy for years, and their work is getting closer to being made into law.

The issue is one of America’s greatest challenges and opportunities: What should the U.S. do to harness AI for the public good, to benefit citizens and companies, and to extend the nation’s prosperity?

At the GPU Technology Conference this week, a bipartisan panel of key members of Congress on AI joined Axios reporter Erica Pandey for our AI for America panel to explore their strategies. Representatives Robin Kelly of Illinois, Will Hurd of Texas and Jerry McNerney of California discussed the immense opportunities of AI, as well as challenges they see as policymakers.

The representatives’ varied backgrounds gave each a unique perspective. McNerney, who holds a Ph.D. in mathematics, considers AI from the standpoint of science and technology. Hurd was a CIA agent and views it through the lens of national security. Kelly is concerned about the impact of AI on the community, jobs and income.

All agreed that the federal government, private sector and academia must work together to ensure that the United States continues to lead in AI. They also agreed that AI offers enormous benefits for American companies and citizens.

McNerney summed it up by saying: “AI will affect every part of American life.”

Educate the Public, Educate Ourselves

Each legislator recognized how AI will be a boon for everything from sustainable agriculture to improving the delivery of citizen services. But these will only become reality with support from the public and elected officials.

Kelly emphasized the importance of education — to overcome fear and give workers new skills. Noting that she didn’t have a technical background, she said she considers the challenge from a different perspective than developers.

“We have to educate people and we have to educate ourselves,” she said. “Each community will be affected differently by AI. Education will allay a lot of fears.”

All three agreed that the U.S. federal government, academia and the private sector must collaborate to create this cultural shift. “We need a massive investment in AI education,” said McNerney, who detailed some of the work done at the University of the Pacific to create AI curricula.

Hurd urged Congress to reevaluate and update existing educational programs, making them flexible enough to develop programming and data science skills rather than focusing on a full degree. He said we need to “open up federal data for everyone to utilize and take advantage.”

The panel raised other important needs, such as bringing computer science classes into high schools across the country and training federal workers to build AI into their project planning.

Roadmap to a More Efficient Future

Many Americans may not be aware that AI is already a part of their daily lives. The representatives offered some examples, including how AI is being used to maximize crop yields by crunching data on soil characteristics, weather and water consumption.

Hurd and Kelly have been focused on AI policy for several years. Working with the Bipartisan Policy Center, they developed the National Strategy on AI, a policy framework that lays out a strategy for the U.S. to accelerate AI R&D and adoption.

They introduced a resolution, backed by a year of work and four white papers, that calls for investments to make GPUs and other computing resources available, strengthening international cooperation, increasing funding for R&D, building out workforce training programs, and developing AI in an ethical way that reduces bias and protects privacy.

Ned Finkle, vice president of government affairs at NVIDIA, voiced support for the resolution, noting that the requirements for AI are steep.

“AI requires staggering amounts of data, specialized training and massive computational resources,” he said. “With this resolution, Representatives Hurd and Kelly are presenting a solid framework for urgently needed investments in computing power, workforce training, AI curriculum development and data resources.”

McNerney is also working to spur AI development and adoption. His AI in Government Act, which would direct federal agencies to develop plans to adopt AI and evaluate resources available to academia, has passed the House of Representatives and is pending in the Senate.

As their legislation moves forward, the representatives encourage industry leaders to provide input and support their efforts. They urged those interested to visit their websites and reach out.



Now’s the Time: NVIDIA CEO Speaks Out on Startups, Holodecks


In a conversation that ranged from the automation of software to holodeck-style working environments, NVIDIA CEO and founder Jensen Huang explained why now is the moment to back a new generation of startups as part of this week’s GPU Technology Conference.

Jeff Herbst, vice president of business development and head of the NVIDIA Inception startup accelerator program, moderated the panel, which included CrowdAI CEO Devaki Raj and Babble Labs CEO Chris Rowen.

“AI is going to create a new set of opportunities, because all of a sudden software that wasn’t writable in the past, or we didn’t know how to write in the past, we now have the ability to write,” Huang said.

The conversation comes after Huang revealed on Monday that NVIDIA Inception, which nurtures startups revolutionizing industries with AI and data science, had grown to include more than 6,500 companies.

In another shift, Huang envisioned workplaces transformed by automation, thanks to AI and robots of all kinds. When Rowen asked about the future of NVIDIA’s own campus, Huang said the company is building a real-life holodeck.

One day, these will allow employees from all over the world to work together. “People at home will be in VR, while people at the office will be on the holodeck,” Huang said.

Huang said he sees NVIDIA first building one. “Then I would like to imagine our facilities having 10 to 20 of these new holodecks,” he said.

More broadly, AI, Huang explained, will allow organizations of all kinds to turn their data, and their knowledge base, into powerful AI. NVIDIA will play a role as an enabler, giving companies the tools to transition to a new kind of computing.

He described AI as the “automation of automation” and “software writing software.” This gives the vast majority of the world’s population who aren’t coders new capabilities. “In a lot of ways, AI is the best way to democratize computer science,” Huang said.

For more from Huang, Herbst, Raj and Rowen, register for GTC and watch a replay of the conversation. The talk will be available for viewing by the general public in 30 days.



Why Retailers Are Investing in Edge Computing and AI


AI is a retailer’s automated helper, acting as a smart assistant to suggest the perfect position for products in stores, accurately predict consumer demand, automate order fulfillment in warehouses, and much more.

The technology can help retailers grow their top line, potentially improving net profit margins from 2 percent to 6 percent — and adding $1 trillion in profits to the industry globally — according to McKinsey Global Institute analysis.

It can also help them hold on to more of what they already have by reducing shrinkage — the loss of inventory due to theft, shoplifting, ticket switching at self-checkout lanes, etc. — which costs retailers $62 billion annually, according to the National Retail Federation.

For retailers, the ability to deploy, manage and scale AI across their entire distributed edge infrastructure using a single, unified platform is critical. Managing so many devices is no small feat for IT teams, as the process can be time-consuming, expensive and complex.

NVIDIA is working with retailers, software providers and startups to create an ecosystem of AI applications for retail, such as intelligent stores, forecasting, conversational AI and recommendation systems, that help retailers pull real-time insights from their data to provide a better shopping experience for their customers.

Smart Retail Managed Remotely at the Edge

The NVIDIA EGX edge AI platform makes it easy to deploy and continuously update AI applications in distributed stores and warehouses. It combines GPU-accelerated edge servers, a cloud-native software stack and containerized applications in NVIDIA NGC, a software catalog that offers a range of industry-specific AI toolkits and pre-trained models.

To provide customers a unified control plane for managing their AI infrastructure, NVIDIA this week announced a new hybrid-cloud platform, NVIDIA Fleet Command, at the GPU Technology Conference.

Fleet Command centralizes the management of servers spread across vast areas. It offers one-touch provisioning, over-the-air software updates, remote management and detailed monitoring dashboards, reducing the burden on IT and helping operational teams get the most out of their AI applications. Early access to Fleet Command is open now.
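To illustrate the kind of logic such a control plane automates, here is a toy staged-rollout sketch in Python. Every name in it is hypothetical; this is not Fleet Command’s API, just a minimal picture of batched over-the-air updates with a health gate.

```python
# Toy sketch of a staged over-the-air rollout across edge servers.
# All names and structures here are hypothetical, not Fleet Command's API.

def health_check(server):
    # Stand-in for a real post-update check (service up, model loaded, etc.)
    return bool(server["version"])

def staged_rollout(fleet, new_version, batch_size=10):
    """Update servers in batches, halting if any server in a batch
    fails its post-update health check."""
    updated = []
    for i in range(0, len(fleet), batch_size):
        batch = fleet[i:i + batch_size]
        for server in batch:
            server["version"] = new_version        # push update over the air
            server["healthy"] = health_check(server)
        if not all(s["healthy"] for s in batch):
            return updated                         # stop the rollout early
        updated.extend(s["name"] for s in batch)
    return updated

fleet = [{"name": f"store-{i:03d}", "version": "1.0", "healthy": True}
         for i in range(25)]
print(len(staged_rollout(fleet, "1.1")))  # 25
```

Batching plus an early-exit health gate is a common design for distributed updates: a bad build reaches only one batch of stores instead of the whole fleet.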

KION Group Pursues One-Touch Intelligent Warehouse Deployment

KION Group, a global supply chain solutions provider, is looking to use Fleet Command to securely deploy and update its applications through a unified control plane, from anywhere, at any time. The company is using the NVIDIA EGX AI platform to develop AI applications for its intelligent warehouse systems, increasing throughput and efficiency in its more than 6,000 retail distribution centers.

The following demo shows how Fleet Command helps KION Group simplify the deployment and management of AI at the edge — from material handling to autonomous forklifts to pick-and-place robotics.

Everseen Scales Asset Protection & Perpetual Inventory Accuracy with Edge AI

Everseen’s AI platform, deployed in many retail stores and distribution centers, uses advanced machine learning, computer vision and deep learning to bring real-time insights to retailers for asset protection and to streamline distribution system processes.

The platform is optimized on NVIDIA T4 Tensor Core GPUs using NVIDIA TensorRT software, resulting in 10x higher inference compute at the edge. This enables Everseen’s customers to reduce errors and shrinkage in real time for faster customer checkout and to optimize operations in distribution centers.

Everseen is using the EGX platform and Fleet Command to simplify and scale deployment of its AI applications on servers across hundreds of retail stores and distribution centers. As AI algorithms retrain and improve in accuracy with new metadata, applications can be securely updated and deployed over the air on hundreds of servers.

Deep North Delivers Transformative Insights with In-Store Analytics

Retailers use Deep North’s AI platform to digitize their shopping locations, analyze anonymous shopper behavior inside stores and conduct visual analytics. The platform gives retailers the ability to predict and adapt to consumer behavior in their commercial spaces and optimize store layout and staffing in high-traffic aisles.

The company uses NVIDIA EGX to simplify AI deployment, server management and device orchestration. With EGX, AI computations are performed at the edge entirely in stores, delivering real-time notifications to store associates for better inventory management and optimized staffing.

By optimizing its intelligent video analytics applications on NVIDIA T4 GPUs with the NVIDIA Metropolis application framework, Deep North has seen orders-of-magnitude improvement in edge compute performance while delivering real-time insights to customers.

Growing AI Opportunities for Retailers

The NVIDIA EGX platform and Fleet Command deliver accelerated, secure AI computing to the edge for retailers today. And a growing number of them are applying GPU computing, AI, robotics and simulation technologies to reinvent their operations for maximum agility and profitability.

To learn more, check out my session on “Driving Agility in Retail with AI” at GTC. Explore how NVIDIA is leveraging AI in retail through GPU-accelerated containers, deep learning frameworks, software libraries and SDKs. And watch how NVIDIA AI is transforming everyday retail experiences.

Also watch NVIDIA CEO Jensen Huang recap all the news at GTC.

