On Cloud Mine: Lenovo, Microsoft and NVIDIA Bring Cloud Computing on Premises with Azure Stack Hub

The popularity of public cloud offerings is evident — just look at how top cloud service providers report double-digit growth year over year.

However, application performance requirements and regulatory compliance issues, to name two examples, often require data to be stored locally, both to reduce distance and latency and to keep it entirely within a company’s control. In these cases, standard private clouds may also offer less flexibility, agility or on-demand capacity.

To help resolve these issues, Lenovo, Microsoft and NVIDIA have engineered a hyperconverged hybrid cloud that enables Azure cloud services within an organization’s data center.

By integrating Lenovo ThinkAgile SX, Microsoft Azure Stack Hub and NVIDIA Mellanox networking, organizations can deploy a turnkey, rack-scale cloud that’s optimized with a resilient, highly performant and secure software-defined infrastructure.

Fully Integrated Azure Stack Hub Solution

Lenovo ThinkAgile SX for Microsoft Azure Stack Hub satisfies regulatory compliance and removes performance concerns. Because all data is kept on secure servers in a customer’s data center, it’s much simpler to comply with a country’s data governance laws and to implement an organization’s own policies and practices.

Similarly, by reducing the distance that data must travel, latency is reduced and application performance goals can be more easily achieved. At the same time, customers can cloud-burst some workloads to the Microsoft Azure public cloud, if desired.
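The latency argument can be made concrete with a rough back-of-the-envelope estimate. The sketch below assumes signals propagate through fiber at roughly two-thirds the speed of light and ignores switching, queuing and protocol overheads, so real-world latency is higher:

```python
# Toy estimate of round-trip propagation latency vs. distance.
# Assumes fiber propagation at ~2/3 the speed of light; ignores
# switching, queuing and protocol overheads (real latency is higher).

SPEED_OF_LIGHT_KM_S = 300_000                      # ~3e5 km/s in vacuum
FIBER_SPEED_KM_S = SPEED_OF_LIGHT_KM_S * 2 / 3     # ~200,000 km/s in fiber

def round_trip_ms(distance_km: float) -> float:
    """Round-trip propagation delay in milliseconds for a one-way distance."""
    one_way_s = distance_km / FIBER_SPEED_KM_S
    return one_way_s * 2 * 1000

# An on-premises hop (~1 km) vs. a distant cloud region (~2,000 km):
print(round_trip_ms(1))      # ~0.01 ms of pure propagation
print(round_trip_ms(2000))   # ~20 ms before any processing even happens
```

Even before queuing and processing delays, a distant region adds tens of milliseconds per round trip, which is why latency-sensitive applications favor on-premises deployment.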

Lenovo, Microsoft and NVIDIA worked together to make sure everything performs right out of the box. There’s no need to worry about configuring and adjusting settings for virtual or physical infrastructure.

The power and automation of Azure Stack Hub software, the convenience and reliability of Lenovo’s advanced servers, and the high performance of NVIDIA networking combine to enable an optimized hybrid cloud. Offering the automation and flexibility of Microsoft Azure Cloud with the security and performance of on-premises infrastructure, it’s an ideal platform to:

  • deliver Azure cloud services from the security of your own data center,
  • enable rapid development and iteration of applications with on-premises deployment tools,
  • unify application development across entire hybrid cloud environments, and
  • easily move applications and data across private and public clouds.

Agility of a Hybrid Cloud 

Azure Stack Hub also seamlessly operates with Azure, delivering an orchestration layer that enables the movement of data and applications to the public cloud. This hybrid cloud protects the data and applications that need protection and offers lower latencies for accessing data. And it still provides the public cloud benefits organizations may need, such as reduced costs, increased infrastructure scalability and flexibility, and protection from data loss.

A hybrid approach to cloud computing keeps all sensitive information onsite and often includes centrally used applications that may have some of this data tied to them. With a hybrid cloud infrastructure in place, IT personnel can focus on building proficiencies in deploying and operating cloud services — such as IaaS, PaaS and SaaS — and less on managing infrastructure.

Network Performance 

A hybrid cloud requires a network that can handle all data communication between clients, servers and storage. The Ethernet fabric used for networking in the Lenovo ThinkAgile SX for Microsoft Azure Stack Hub leverages NVIDIA Mellanox Spectrum Ethernet switches — powered by the industry’s highest-performing ASICs — along with NVIDIA Cumulus Linux, the most advanced open network operating system.

At 25Gb/s data rates, these switches provide cloud-optimized delivery of data at line rate. Using a fully shared buffer, they support fair bandwidth allocation and provide predictably low latency, along with traffic-flow prioritization and optimization technology that delivers data without delays. Hot-swappable redundant power supplies and fans add resiliency for business-sensitive traffic.

Modern networks require advanced offload capabilities, including remote direct memory access (RDMA), TCP, overlay networks (for example, VXLAN and Geneve) and software-defined storage acceleration. Implementing these at the network layer frees expensive CPU cycles for user applications while improving the user experience.

To handle the high-speed communications demands of Azure Stack Hub, Lenovo configured compute nodes with dual-port 10/25/100GbE NVIDIA Mellanox ConnectX-4 Lx, ConnectX-5 or ConnectX-6 Dx NICs. The ConnectX NICs are designed to address cloud, virtualized infrastructure, security and network storage challenges. They use native hardware support for RoCE, offer stateless TCP offloads, accelerate overlay networks and support NVIDIA GPUDirect technology to maximize performance of AI and machine learning workloads. All of this results in much-needed higher infrastructure efficiency.

RoCE for Improved Efficiency 

Microsoft Azure Stack Hub leverages Storage Spaces Direct (S2D) and SMB Direct, part of Microsoft’s Server Message Block (SMB) 3.0 protocol. SMB Direct uses high-speed RoCE to transfer large amounts of data with little CPU intervention. SMB Multichannel allows servers to use multiple network connections simultaneously and provides fault tolerance through the automatic discovery of network paths.
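On Windows Server, administrators can verify that RDMA and SMB Multichannel are actually in play. A minimal diagnostic fragment in PowerShell (run on a node of the deployment; output and property availability depend on the Windows Server version):

```shell
# Windows PowerShell diagnostic sketch (not a complete validation procedure).
# List which network adapters report RDMA capability and whether it's enabled:
Get-NetAdapterRdma | Format-Table Name, Enabled

# Show active SMB connections and whether the client side is RSS/RDMA capable:
Get-SmbMultichannelConnection | Format-Table ServerName, ClientRSSCapable, ClientRdmaCapable
```

If `ClientRdmaCapable` shows as true on the relevant interfaces, SMB Direct can bypass the TCP stack and transfer storage traffic with minimal CPU involvement.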

The addition of these two features allows NVIDIA RoCE-enabled ConnectX Ethernet NICs to deliver line-rate performance and optimize data transfer between server and storage over standard Ethernet. Customers with Lenovo ThinkAgile SX servers or the Lenovo ThinkAgile SX Azure Hub can deploy storage on secure file servers while delivering the highest performance. As a result, S2D is extremely fast with disaggregated file server performance, almost equaling that of locally attached storage.

Testing performed by Microsoft shows NVIDIA Networking RoCE offloads improve S2D performance and CPU efficiency.

Run More Workloads

By using intelligent hardware accelerators and offloads, the NVIDIA RoCE-enabled NICs offload I/O tasks from the CPU, freeing up resources to accelerate application performance instead of making data wait for the attention of a busy CPU.

The result is lower latencies and an improvement in CPU efficiencies. This maximizes the performance in Microsoft Azure Stack deployments by leaving the CPU available to run other application processes. Efficiency gets a boost since users can host more VMs per physical server, support more VDI instances and complete SQL Server queries more quickly.
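As an illustration of the mechanism only, here is a toy model of how freeing CPU cycles translates into more VMs per server. All numbers are hypothetical and chosen for readability, not measured vendor figures:

```python
# Toy model: offloading network I/O to the NIC frees CPU capacity for VMs.
# All figures are hypothetical illustrations, not measured results.

def vms_per_server(total_cores: int, net_io_cores: float, cores_per_vm: float) -> int:
    """Cores left over after network processing, divided by per-VM demand."""
    return int((total_cores - net_io_cores) / cores_per_vm)

TOTAL_CORES = 32
CORES_PER_VM = 2.0

# Without offload, the CPU burns several cores shuffling packets:
without_offload = vms_per_server(TOTAL_CORES, net_io_cores=8, cores_per_vm=CORES_PER_VM)
# With RoCE/TCP offloads, the NIC absorbs most of that work:
with_offload = vms_per_server(TOTAL_CORES, net_io_cores=1, cores_per_vm=CORES_PER_VM)

print(without_offload)  # 12 VMs per server
print(with_offload)     # 15 VMs per server
```

The same arithmetic applies to VDI instances or database queries: any cycles the NIC reclaims from packet processing become capacity for application work.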

Using the offloads in NVIDIA ConnectX NICs frees up CPU cores to support more users and more applications, improving server efficiency.

A Transformative Experience with a ThinkAgile Advantage

Lenovo ThinkAgile solutions include a comprehensive portfolio of software and services that supports the full lifecycle of infrastructure. At every stage — planning, deploying, supporting, optimizing and end-of-life — Lenovo provides the expertise and services needed to get the most from technology investments.

This includes single-point-of-contact support for all the hardware and software used in the solution, including Microsoft’s Azure Stack Hub and the ConnectX NICs. Customers never have to worry about who to call — Lenovo takes calls and drives them to resolution.

Learn more about Lenovo ThinkAgile SX for Microsoft Azure Stack Hub with NVIDIA Mellanox networking.

Turn Your Radio On: NVIDIA Engineer Creates COVID-Safe Choirs in Cars

Music and engineering were completely separate parts of Bryce Denney’s life until the pandemic hit.

By day, the Massachusetts-based NVIDIA engineer helped test processors. On nights and weekends, he played piano chamber music and accompanied theater troupes that his wife, Kathryn, sang in or led.

It was a good balance for someone who graduated with a dual major in physics and piano performance.

Once COVID-19 arrived, “we had to take the calendar off the wall — it was too depressing to look at everything that was canceled,” Bryce said.

“I had this aimless sense of not feeling sure who I was anymore,” said Kathryn, a former public school teacher who plays French horn and conducts high school and community theater groups. “This time last year I was working in five shows and a choir that was preparing for a tour of Spain,” she said.

That’s when Bryce got an idea for some musical engineering.

Getting Wired for Sound

He wanted to help musicians in separate spaces hear each other without nagging delays. As a proof of concept, he ran cables for mics and headphones from a downstairs piano to an upstairs bedroom where Kathryn played her horn.

The duet’s success led to convening a quartet from Kathryn’s choir in the driveway, singing safely distanced in separate cars with wired headsets linked to a small mixer. The Driveway Choir was born.

Driveway choir singers harmonize over an FM radio connection.

“We could hear each other breathe and we joked back and forth,” said Kathryn.

“It was like an actual rehearsal again and so much more rewarding” than Zoom events or virtual choirs where members recorded one part at a time and mixed them together, said Bryce.

But it would take a rat’s nest of wires to link a full choir of 20 singers, so Bryce boned up on wireless audio engineering.

Physics to the Rescue

He reached out to people like David Newman, a voice teacher at James Madison University, who was also experimenting with choirs in cars. And he got tips about wireless mics that are inexpensive and easy to clean.

Newman and others coached him on broadcasting over FM frequencies, and how to choose bands to avoid interference from local TV stations.
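That frequency-planning step can be sketched programmatically. The helper below is a hypothetical illustration, not Bryce's actual tooling: it picks FM broadcast channels that keep a guard band away from stations already heard during a scan of the area:

```python
# Pick clear FM broadcast channels for a low-power transmitter.
# Hypothetical sketch, not the tooling described in the article.
# US FM band: 87.9-107.9 MHz in 0.2 MHz steps (odd tenths).

def clear_frequencies(occupied_mhz, guard_mhz=0.4):
    """Return FM channels at least guard_mhz away from every occupied station."""
    channels = [round(87.9 + 0.2 * i, 1) for i in range(101)]
    return [f for f in channels
            if all(abs(f - busy) >= guard_mhz for busy in occupied_mhz)]

# Stations heard during a scan of the parking lot (made-up example):
busy = [88.1, 89.7, 90.3, 104.9]
options = clear_frequencies(busy)
print(options[0])   # lowest clear channel: 88.5
```

A real setup would also need to account for adjacent-channel TV interference and local regulations on low-power FM transmission, which is where the coaching Bryce received came in.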

“It was an excuse to get into physics again,” said Bryce.

Within a few weeks he assembled a system and created a site for his driveway choir, where he posted videos of events, a spreadsheet of options for configuring a system, and packing lists for how to take it on the road. A basic setup for 16 singers costs $1,500 and can scale up to accommodate 24 voices.

“Our goal is to make this accessible to other groups, so we choose less-expensive equipment and write out a step-by-step process,” said Kathryn, who has helped organize 15 events using the gear.

Jan Helbers, a neighbor with wireless expertise, chipped in by designing an antenna distribution system that can sit on top of a car on rainy days.

Bryce posted instructions on how to build it for about $300 complete with a bill of materials and pictures of the best RF connectors to use. A commercial antenna distribution system of this size would cost thousands.

“I was excited about that because here in Marlborough it will be snowy soon and we want to keep singing,” said Bryce.

From Alaska to the Times

The Denneys helped the choir at St. Anne’s Episcopal church in nearby Lincoln, Mass., have its first live rehearsal in four months and record a track used in a Sunday service. Now the choir is putting together its own system.

The church is one of at least 10 groups that have contacted the Denneys about creating driveway choirs of their own, including one in Alaska. They expect more calls after a New York Times reporter joined one of their recent events and wrote a story about his experience.

There’s no shortage of ideas for what’s next. Driveway choirs for nursing homes, singalongs in big mall parking lots or drive-in theaters, Christmas caroling for neighbors.

“I wouldn’t be surprised if we did a Messiah sing,” said Kathryn, who has been using some of her shelter-in-place time to start writing a musical called Connected.

“I think about that song, ‘How Can I Keep from Singing,’ that’s the story of our lives,” she said.

Basic gear for a driveway choir.

At top: Kathryn conducts and Bryce plays piano at a Driveway Choir event with 24 singers in Concord, MA.

AI for America: US Lawmakers Encourage ‘Massive Investment’ in Artificial Intelligence

The upcoming election isn’t the only thing on lawmakers’ minds. Several congressional representatives have been grappling with U.S. AI policy for years, and their work is getting closer to being made into law.

The issue is one of America’s greatest challenges and opportunities: What should the U.S. do to harness AI for the public good, to benefit citizens and companies, and to extend the nation’s prosperity?

At the GPU Technology Conference this week, a bipartisan panel of key members of Congress on AI joined Axios reporter Erica Pandey for our AI for America panel to explore their strategies. Representatives Robin Kelly of Illinois, Will Hurd of Texas and Jerry McNerney of California discussed the immense opportunities of AI, as well as challenges they see as policymakers.

The representatives’ varied backgrounds gave each a unique perspective. McNerney, who holds a Ph.D. in mathematics, considers AI from the standpoint of science and technology. Hurd was a CIA agent and views it through the lens of national security. Kelly is concerned about the impact of AI on the community, jobs and income.

All agreed that the federal government, private sector and academia must work together to ensure that the United States continues to lead in AI. They also agree that AI offers enormous benefits for American companies and citizens.

McNerney summed it up by saying: “AI will affect every part of American life.”

Educate the Public, Educate Ourselves

Each legislator recognized how AI will be a boon for everything from sustainable agriculture to improving the delivery of citizen services. But these will only become reality with support from the public and elected officials.

Kelly emphasized the importance of education — to overcome fear and give workers new skills. Noting that she didn’t have a technical background, she said she considers the challenge from a different perspective than developers.

“We have to educate people and we have to educate ourselves,” she said. “Each community will be affected differently by AI. Education will allay a lot of fears.”

All three agreed that the U.S. federal government, academia and the private sector must collaborate to create this cultural shift. “We need a massive investment in AI education,” said McNerney, who detailed some of the work done at the University of the Pacific to create AI curricula.

Hurd urged Congress to reevaluate and update existing educational programs, making them flexible enough for students to develop programming and data science skills without pursuing a full degree. He said we need to “open up federal data for everyone to utilize and take advantage.”

The panel raised other important needs, such as bringing computer science classes into high schools across the country and training federal workers to build AI into their project planning.

Roadmap to a More Efficient Future

Many Americans may not be aware that AI is already a part of their daily lives. The representatives offered some examples, including how AI is being used to maximize crop yields by crunching data on soil characteristics, weather and water consumption.

Hurd and Kelly have been focused on AI policy for several years. Working with the Bipartisan Policy Center, they developed the National Strategy on AI, a policy framework that lays out a strategy for the U.S. to accelerate AI R&D and adoption.

They introduced a resolution, backed by a year of work and four white papers, that calls for investments to make GPUs and other computing resources available, strengthening international cooperation, increasing funding for R&D, building out workforce training programs, and developing AI in an ethical way that reduces bias and protects privacy.

Ned Finkle, vice president of government affairs at NVIDIA, voiced support for the resolution, noting that the requirements for AI are steep.

“AI requires staggering amounts of data, specialized training and massive computational resources,” he said. “With this resolution, Representatives Hurd and Kelly are presenting a solid framework for urgently needed investments in computing power, workforce training, AI curriculum development and data resources.”

McNerney is also working to spur AI development and adoption. His AI in Government Act, which would direct federal agencies to develop plans to adopt AI and evaluate resources available to academia, has passed the House of Representatives and is pending with the Senate.

As their legislation moves forward, the representatives encourage industry leaders to provide input and support their efforts. They urged those interested to visit their websites and reach out.

Now’s the Time: NVIDIA CEO Speaks Out on Startups, Holodecks

In a conversation that ranged from the automation of software to holodeck-style working environments, NVIDIA CEO and founder Jensen Huang explained why now is the moment to back a new generation of startups as part of this week’s GPU Technology Conference.

Jeff Herbst, vice president of business development and head of the NVIDIA Inception startup accelerator program, moderated the panel, which included CrowdAI CEO Devaki Raj and Babble Labs CEO Chris Rowen.

“AI is going to create a new set of opportunities, because all of a sudden software that wasn’t writable in the past, or we didn’t know how to write in the past, we now have the ability to write,” Huang said.

The conversation comes after Huang revealed on Monday that NVIDIA Inception, which nurtures startups revolutionizing industries with AI and data science, had grown to include more than 6,500 companies.

Huang also envisioned workplaces transformed by automation, thanks to AI and robots of all kinds. When asked by Rowen about the future of NVIDIA’s own campus, Huang said NVIDIA is building a real-life holodeck.

One day, these will allow employees from all over the world to work together. “People at home will be in VR, while people at the office will be on the holodeck,” Huang said.

Huang said he sees NVIDIA first building one. “Then I would like to imagine our facilities having 10 to 20 of these new holodecks,” he said.

More broadly, AI, Huang explained, will allow organizations of all kinds to turn their data, and their knowledge base, into powerful AI. NVIDIA will play a role as an enabler, giving companies the tools to transition to a new kind of computing.

He described AI as the “automation of automation” and “software writing software.” This gives the vast majority of the world’s population who aren’t coders new capabilities. “In a lot of ways, AI is the best way to democratize computer science,” Huang said.

For more from Huang, Herbst, Raj and Rowen, register for GTC and watch a replay of the conversation. The talk will be available for viewing by the general public in 30 days.

Why Retailers Are Investing in Edge Computing and AI

AI is a retailer’s automated helper, acting as a smart assistant to suggest the perfect position for products in stores, accurately predict consumer demand, automate order fulfillment in warehouses, and much more.

The technology can help retailers grow their top line, potentially improving net profit margins from 2 percent to 6 percent — and adding $1 trillion in profits to the industry globally — according to McKinsey Global Institute analysis.

It can also help them hold on to more of what they already have by reducing shrinkage — the loss of inventory due to theft, shoplifting, ticket switching at self-checkout lanes, etc. — which costs retailers $62 billion annually, according to the National Retail Federation.
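To make those figures concrete, here is a quick back-of-the-envelope calculation. The 2 percent to 6 percent net-margin range follows the McKinsey estimate cited above; the $50 billion annual revenue is an invented example, not a real company's figure:

```python
# Back-of-the-envelope impact of AI on a hypothetical retailer's bottom line.
# Margin range (2% -> 6%) follows the McKinsey estimate cited in the text;
# the revenue figure is an invented example, not a real company's numbers.

revenue = 50_000_000_000                 # hypothetical annual revenue, USD

profit_before = revenue * 2 // 100       # 2 percent net margin
profit_after = revenue * 6 // 100        # 6 percent net margin with AI

print(profit_before)                     # 1000000000  (about $1B)
print(profit_after)                      # 3000000000  (about $3B)
print(profit_after - profit_before)      # 2000000000 in additional profit
```

On top of that margin uplift, any reduction in the $62 billion of industry-wide shrinkage flows almost directly to the bottom line, since the goods have already been paid for.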

For retailers, the ability to deploy, manage and scale AI across their entire distributed edge infrastructure using a single, unified platform is critical. Managing so many distributed devices is no small feat for IT teams, as the process can be time-consuming, expensive and complex.

NVIDIA is working with retailers, software providers and startups to create an ecosystem of AI applications for retail, such as intelligent stores, forecasting, conversational AI and recommendation systems, that help retailers pull real-time insights from their data to provide a better shopping experience for their customers.

Smart Retail Managed Remotely at the Edge

The NVIDIA EGX edge AI platform makes it easy to deploy and continuously update AI applications in distributed stores and warehouses. It combines GPU-accelerated edge servers, a cloud-native software stack and containerized applications in NVIDIA NGC, a software catalog that offers a range of industry-specific AI toolkits and pre-trained models.

To provide customers a unified control plane through which to manage their AI infrastructure, NVIDIA announced this week during the GPU Technology Conference a new hybrid-cloud platform called NVIDIA Fleet Command.

Fleet Command centralizes the management of servers spread across vast areas. It offers one-touch provisioning, over-the-air software updates, remote management and detailed monitoring dashboards, reducing the burden on IT and making it easier for operational teams to get the most out of their AI applications. Early access to Fleet Command is open now.

KION Group Pursues One-Touch Intelligent Warehouse Deployment

KION Group, a global supply chain solutions provider, is looking to use Fleet Command to securely deploy and update its applications through a unified control plane, from anywhere, at any time. It is using the NVIDIA EGX AI platform to develop AI applications for its intelligent warehouse systems, increasing throughput and efficiency in its more than 6,000 retail distribution centers.

The following demo shows how Fleet Command helps KION Group simplify the deployment and management of AI at the edge — from material handling to autonomous forklifts to pick-and-place robotics.

Everseen Scales Asset Protection & Perpetual Inventory Accuracy with Edge AI

Everseen’s AI platform, deployed in many retail stores and distribution centers, uses advanced machine learning, computer vision and deep learning to bring real-time insights to retailers for asset protection and to streamline distribution system processes.

The platform is optimized on NVIDIA T4 Tensor Core GPUs using NVIDIA TensorRT software, resulting in 10x higher inference compute at the edge. This enables Everseen’s customers to reduce errors and shrinkage in real time for faster customer checkout and to optimize operations in distribution centers.

Everseen is using the EGX platform and Fleet Command to simplify and scale deployment and to update its AI applications on servers across hundreds of retail stores and distribution centers. As AI algorithms retrain and improve in accuracy with new metadata, applications can be securely updated and deployed over the air to hundreds of servers.

Deep North Delivers Transformative Insights with In-Store Analytics

Retailers use Deep North’s AI platform to digitize their shopping locations, analyze anonymous shopper behavior inside stores and conduct visual analytics. The platform gives retailers the ability to predict and adapt to consumer behavior in their commercial spaces and optimize store layout and staffing in high-traffic aisles.

The company uses NVIDIA EGX to simplify AI deployment, server management and device orchestration. With EGX, AI computations are performed at the edge entirely in stores, delivering real-time notifications to store associates for better inventory management and optimized staffing.

By optimizing its intelligent video analytics applications on NVIDIA T4 GPUs with the NVIDIA Metropolis application framework, Deep North has seen orders-of-magnitude improvement in edge compute performance while delivering real-time insights to customers.

Growing AI Opportunities for Retailers

The NVIDIA EGX platform and Fleet Command deliver accelerated, secure AI computing to the edge for retailers today. And a growing number of them are applying GPU computing, AI, robotics and simulation technologies to reinvent their operations for maximum agility and profitability.

To learn more, check out my session on “Driving Agility in Retail with AI” at GTC. Explore how NVIDIA is leveraging AI in retail through GPU-accelerated containers, deep learning frameworks, software libraries and SDKs. And watch how NVIDIA AI is transforming everyday retail experiences:

Also watch NVIDIA CEO Jensen Huang recap all the news at GTC: 

From Content Creation to Collaboration, NVIDIA Omniverse Transforms Entertainment Industry

There are major shifts happening in the media and entertainment industry.

With the rise of streaming services, there’s a growing demand for high-quality programming and an increasing need for fresh content to satisfy hundreds of millions of subscribers.

At the same time, teams are often collaborating on complex assets using multiple applications while working from different geographic locations. New pipelines are emerging and post-production workflows are being integrated earlier into processes, boosting the need for real-time collaboration.

By extending our Omniverse 3D simulation and collaboration platform to run on the NVIDIA EGX AI platform, NVIDIA is making it even easier for artists, designers, technologists and other creative professionals to accelerate workflows for productions — from asset creation to live on-set collaboration.

The EGX platform leverages the power of NVIDIA RTX GPUs, NVIDIA Virtual Data Center Workstation software, and NVIDIA Omniverse to fundamentally transform the collaborative process during digital content creation and virtual production.

Professionals and studios around the world can use this combination to lower costs, boost creativity across applications and teams, and accelerate production workflows.

Driving Real-Time Collaboration, Increased Interactivity

The NVIDIA EGX platform delivers the power of the NVIDIA Ampere architecture on a range of validated servers and devices. A vast ecosystem of partners offers EGX through their products and services. Professional creatives can use these to take advantage of the most significant advancements in computer graphics and accelerate their film and television content creation pipelines.

To support third-party digital content creation applications, Omniverse Connect libraries are distributed as plugins that enable client applications to connect to Omniverse Nucleus and to publish and subscribe to individual assets and full worlds. Supported applications for common film and TV content creation pipelines include Epic Games Unreal Engine, Autodesk Maya, Autodesk 3ds Max, SideFX Houdini, Adobe Photoshop, Substance Painter by Adobe, and Unity.

NVIDIA Virtual Workstation software provides the most powerful virtual workstations from the data center or cloud to any device, anywhere. IT departments can virtualize any application from the data center with a native workstation user experience, while eliminating constrained workflows and flexibly scaling GPU resources.

Studios can optimize their infrastructure by efficiently centralizing applications and data. This dramatically reduces IT operating expenses and allows companies to focus IT resources on managing strategic projects instead of individual workstations — all while enabling a more flexible, remote real-time environment with stronger data security.

With NVIDIA Omniverse, creative teams have the ability to deliver real-time results by creating, iterating and collaborating on the same assets while using a variety of applications. Omniverse powered by the EGX platform and NVIDIA Virtual Workstation allows artists to focus on creating high-quality content without waiting for long render times.

“Real-time ray tracing massive datasets in a remote workstation environment is finally possible with the new RTX A6000, HP ZCentral and NVIDIA’s Omniverse,” said Chris Eckardt, creative director and CG supervisor at Framestore.

Elevating Content Creation Across the World

During content creation, artists need to design and iterate quickly on assets, while collaborating with remote teams and other studios working on the same productions. With Omniverse running on the NVIDIA EGX platform, users can access the power of a high-end virtual workstation to rapidly create, iterate and present compelling renders using their preferred application.

Creative professionals can quickly combine terrain from one shot with characters from another without removing any data, which drives more efficient collaboration. Teams can communicate their designs more effectively by sharing high-fidelity ray-traced models with one click, so colleagues or clients can view the assets on a phone, tablet or in a browser. Along with the ability to mark up models in Omniverse, this accelerates the decision-making process and reduces design review cycles to help keep projects on track.

Taking Virtual Productions to the Next Level

With more film and TV projects using new virtual production techniques, studios are under immense pressure to iterate as quickly as possible to keep the cameras rolling. With in-camera VFX, the concept of fixing it in post-production has moved to fixing it all on set.

With the NVIDIA EGX platform and NVIDIA Virtual Workstations running Omniverse, users gain access to secure, up-to-date datasets from any device, ensuring they maintain productivity when working live on set.

Artists achieve a smooth experience with Unreal Engine, Maya, Substance Painter and other apps to quickly create and iterate on scene files, while the interoperability of these tools in Omniverse improves collaboration. Teams can instantly view photorealistic renderings of their models with the RTX Renderer so they can rapidly assess options for the most compelling images.

Learn more at https://developer.nvidia.com/nvidia-omniverse-platform.

It’s not too late to get access to hundreds of live and on-demand talks at GTC. Register now through Oct. 9 using promo code CMB4KN to get 20 percent off.

AI, 5G Will Energize U.S. Economy, Says FCC Chair at GTC

AI, 5G Will Energize U.S. Economy, Says FCC Chair at GTC

Ajit Pai recalls a cold February day, standing in a field at the Wind River reservation in central Wyoming with Arapaho Indian leaders, hearing how they used a Connecting America grant to link schools and homes to gigabit fiber Internet.

It was one of many technology transformations the chairman of the U.S. Federal Communications Commission witnessed in visits to 49 states.

“Those trips redouble my motivation to do everything we can to close the digital divide because I want to make sure every American can participate in the digital economy,” said Pai in an online talk at NVIDIA’s GTC event.

Technologies like 5G and AI promise to keep that economy vibrant across factories, hospitals, warehouses and farm fields.

“I visited a corn farmer in Idaho who wants his combine to upload data to the cloud as it goes through the field to determine what water and pesticide to apply … AI will be transformative,” Pai said.

“AI is definitely the next industrial revolution, and America can help lead it,” said Soma Velayutham, NVIDIA’s general manager for AI in telecoms and 5G and host of the online talk with Pai.

AI a Fundamental Part of 5G

Shining a light on machine learning and 5G, the FCC has hosted forums on AI and open radio-access networks that included participants from AT&T, Dell, IBM, Hewlett Packard Enterprise, Nokia, NVIDIA, Oracle, Qualcomm and Verizon.

“It was striking to see how many people think AI will be a fundamental part of 5G, making it a much smarter network with optimizations using powerful AI algorithms to look at spectrum allocations, consumer use cases and how networks can address them,” Pai said.

For example, devices can use machine learning to avoid interference and optimize use of unlicensed spectrum the FCC is opening up for Wi-Fi at 6 GHz. “Someone could hire a million people to work that out, but it’s much more powerful to use AI,” he said.

“AI is really good at resource optimization,” said Velayutham. “AI can efficiently manage 5G network resources, optimizing the way we use and monetize spectrum,” he added.

AI Saves Spectrum, 5G Delivers Cool Services

Telecom researchers in Asia, Europe and the U.S. are using NVIDIA technologies to build software-defined radio access networks that can modulate more services into less spectrum, enabling new graphics and AI services.

In the U.K., telecom provider BT is working with an NVIDIA partner on edge computing applications such as streaming coverage of sporting events over 5G with CloudXR, a mix of virtual and augmented reality.

In closing, Pai addressed developers in the GTC audience, thanking them and “all the innovators for doing this work. You have a friend at the FCC who recognizes your innovation and wants to be a partner with it,” he said.

To hear more about how AI will transform industries at the edge of the network, watch a portion of the GTC keynote below by NVIDIA CEO Jensen Huang.

The post AI, 5G Will Energize U.S. Economy, Says FCC Chair at GTC appeared first on The Official NVIDIA Blog.

AI Artist Pindar Van Arman’s Painting Robots Visit GTC 2020

Pindar Van Arman is a veritable triple threat — he can paint, he can program and he can program robots that paint.

Van Arman first started incorporating robots into his artistic method 15 years ago to save time. He coded a robot to paint the beginning stages of an art piece, like “a printer that can pick up a brush.”

It wasn’t until Van Arman took part in the DARPA Grand Challenge, a prize competition for autonomous vehicles, that he was inspired to bring AI into his art.

Now, his robots are capable of creating artwork all on their own through the use of deep neural networks and feedback loops. Van Arman is never far away, though, sometimes pausing a robot to adjust its code and provide it some artistic guidance.

Van Arman’s work is on display in the AI Art Gallery at GTC 2020, and he’ll be giving conference attendees a virtual tour of his studio on Oct. 8 at 11 a.m. Pacific time.

Key Points From This Episode:

  • One of Van Arman’s most recent projects is artonomous, an artificially intelligent painting robot that is learning the subtleties of fine art. Anyone can submit their photo to be included in artonomous’ training set.
  • Van Arman predicts that AI will become even more creative and independent of its human creators. He anticipates that AI artists will learn to program a variety of coexisting networks that give AI a greater understanding of what defines art.

Tweetables:

“I’m trying to understand myself better by exploring my own creativity — by trying to capture it in code, breaking it down and distilling it” — Pindar Van Arman [4:22]

“I’d say 50% of the paintings are completely autonomous, and 50% of the paintings are directed by me. 100% of them, though, are my art” — Pindar Van Arman [17:20]

You Might Also Like

How Tattoodo Uses AI to Help You Find Your Next Tattoo

Picture this: you find yourself in a tattoo parlor. But none of the dragons, flaming skulls, or gothic font lifestyle mottos you see on the wall seem like something you want on your body. So what do you do? You turn to AI, of course. We spoke to two members of the development team at Tattoodo.com, who created an app that uses deep learning to help you create the tattoo of your dreams.

UC Berkeley’s Pieter Abbeel on How Deep Learning Will Help Robots Learn

Robots can do amazing things. Compare even the most advanced robots to a three-year-old, however, and they can come up short. UC Berkeley Professor Pieter Abbeel has pioneered the idea that deep learning could be the key to bridging that gap: creating robots that can learn how to move through the world more fluidly and naturally.

How AI’s Storming the Fashion Industry

Costa Colbert — who holds degrees ranging from neural science to electrical engineering — is working at MAD Street Den to bring machine learning to fashion. He’ll explain how his team is using generative adversarial networks to create images of models wearing clothes.

Tune in to the AI Podcast

Get the AI Podcast through iTunes, Google Podcasts, Google Play, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn. If your favorite isn’t listed here, drop us a note.

Make the AI Podcast Better

Have a few minutes to spare? Fill out this listener survey. Your answers will help us make a better podcast.

The post AI Artist Pindar Van Arman’s Painting Robots Visit GTC 2020 appeared first on The Official NVIDIA Blog.

Bada Bing Bada Boom: Microsoft Turns to Turing-NLG, NVIDIA GPUs to Instantly Suggest Full-Phrase Queries

Hate hunting and pecking away at your keyboard every time you have a quick question? You’ll love this.

Microsoft’s Bing search engine has turned to Turing-NLG and NVIDIA GPUs to suggest full sentences for you as you type.

Turing-NLG is a cutting-edge, large-scale unsupervised language model that has achieved strong performance on language modeling benchmarks.

It’s just the latest example of an AI technique called unsupervised learning, which makes sense of vast quantities of data by extracting features and patterns without the need for humans to provide any pre-labeled data.

Microsoft calls this Next Phrase Prediction, and it can feel like magic, making full-phrase suggestions in real time for long search queries.

Turing-NLG is among several innovations — from model compression to state caching and hardware acceleration — that Bing has harnessed with Next Phrase Prediction.

Over the summer, Microsoft worked with engineers at NVIDIA to optimize Turing-NLG to their needs, accelerating the model on NVIDIA GPUs to power the feature for users worldwide.

A key part of this optimization was making this massive AI model run extremely fast to power a real-time search experience. With a combination of hardware and model optimizations, Microsoft and NVIDIA achieved an average latency below 10 milliseconds.

By contrast, it takes more than 100 milliseconds to blink your eye.
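Measuring a latency budget like that can be sketched in a few lines. The snippet below is purely illustrative: `model_infer` is a hypothetical placeholder for the real GPU-accelerated model call, not Bing's production code.

```python
import time

def model_infer(prefix: str) -> str:
    # Stand-in for the actual model call; a real deployment would
    # invoke the optimized Turing-NLG model on the GPU here.
    return prefix + " ..."

def average_latency_ms(fn, prompt: str, runs: int = 1000) -> float:
    """Average wall-clock latency of fn(prompt) in milliseconds."""
    start = time.perf_counter()
    for _ in range(runs):
        fn(prompt)
    return (time.perf_counter() - start) / runs * 1000.0

latency = average_latency_ms(model_infer, "how can i replace")
print(f"average latency: {latency:.3f} ms")
```

In practice a production benchmark would also report tail latencies (p95, p99), since an average below 10 milliseconds says nothing about worst-case requests.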

Learn more about the next wave of AI innovations at Bing.

Before the introduction of Next Phrase Prediction, the approach for handling query suggestions for longer queries was limited to completing the current word being typed by the user.

Now type in “The best way to replace,” and you’ll immediately see three suggestions for completing the phrase: wood, plastic and metal. Type in “how can I replace a battery for,” and you’ll see “iphone, samsung, ipad and kindle” all suggested.
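The mechanics of full-phrase suggestion can be illustrated with a toy n-gram model (a deliberate stand-in; Turing-NLG is a vastly larger neural model, and this corpus is invented for the example): learn word transitions from raw, unlabeled text, then greedily extend a typed prefix into a full phrase.

```python
from collections import Counter, defaultdict

# Tiny unlabeled "corpus" of past queries; no human annotation needed.
corpus = (
    "how can i replace a battery for iphone . "
    "how can i replace a battery for kindle . "
    "the best way to replace wood . "
).split()

# Count word-to-word transitions (a bigram model).
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def suggest(prefix: str, max_words: int = 4) -> str:
    """Greedily extend the typed prefix with the most likely next words."""
    words = prefix.lower().split()
    for _ in range(max_words):
        candidates = bigrams.get(words[-1])
        if not candidates:
            break
        nxt = candidates.most_common(1)[0][0]
        if nxt == ".":  # end-of-query marker
            break
        words.append(nxt)
    return " ".join(words)

print(suggest("how can i replace a battery for"))
# → "how can i replace a battery for iphone"
```

The real system differs in every dimension that matters (a deep neural model instead of bigram counts, beam search instead of greedy extension, millisecond-scale GPU inference), but the shape of the task is the same: predict the rest of the phrase from the prefix.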

With Next Phrase Prediction, Bing can now present users with full-phrase suggestions.

The more characters you type, the closer Bing gets to what you probably want to ask.

And because these suggestions are generated instantly, they’re not limited to previously seen data or just the current word being typed.

So, for some queries, Bing won’t just save you a few keystrokes — but multiple words.

As a result of this work, the coverage of autosuggestion completions increases considerably, Microsoft reports, improving the overall user experience “significantly.”

The post Bada Bing Bada Boom: Microsoft Turns to Turing-NLG, NVIDIA GPUs to Instantly Suggest Full-Phrase Queries appeared first on The Official NVIDIA Blog.

Coronavirus Gets a Close-Up: Folding@home Live in NVIDIA Omniverse

For researchers like Max Zimmerman, it was a welcome pile-on to tackle a global pandemic.

A million citizen scientists donated time on their home systems so the Folding@home consortium could calculate the intricate movements of proteins inside the coronavirus. Then a team of NVIDIA simulation experts combined the best tools from multiple industries to let the researchers see their data in a whole new way.

“I’ve been repeatedly amazed with the unprecedented scale of scientific collaborations,” said Zimmerman, a postdoc fellow at the Washington University School of Medicine in St. Louis, which hosts one of eight labs that keep the Folding@home research network humming.

As a result, Zimmerman and colleagues published a paper on bioRxiv, showing images of 17 weak spots in coronavirus proteins that antiviral drug makers can attack. And the high-res simulation of the work continues to educate researchers and the public alike about the bad actor behind the pandemic.

“We are in a position to make serious headway towards understanding the molecular foundations of health and disease,” he added.

An Antiviral Effort Goes Viral

In mid-March, the Folding@home team put many long-running projects on hold to focus on studying key proteins behind COVID. They issued a call for help, and by the end of the month the network swelled to become the world’s first exascale supercomputer, fueled in part by more than 280,000 NVIDIA GPUs.

Researchers harnessed that power to search for vulnerable moments in the rapid and intricate dance of the folding proteins, split-second openings drug makers could exploit. Within three months, computers found many promising motions that traditional experiments could not see.

“We’ve simulated nearly the entire proteome of the virus and discovered more than 50 new and novel targets to aid in the design of antivirals. We have also been simulating drug candidates in known targets, screening over 50,000 compounds to identify 300 drug candidates,” Zimmerman said.

The coronavirus uses cunning techniques to avoid human immune responses, like the Spike protein keeping its head down in a closed position. With the power of an exaflop at their disposal, researchers simulated the proteins folding for a full tenth of a second, orders of magnitude longer than prior work.

Though the time sampled was relatively short, the dataset to enable it was vast.

The SARS-CoV-2 spike protein alone consists of 442,881 atoms in constant motion. In just 1.2 microseconds, it generates about 300 billion timestamps, freeze frames that researchers must track.
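A rough back-of-envelope check of those figures, assuming one trajectory frame is saved roughly every 2 picoseconds (a common molecular dynamics output interval; the actual Folding@home settings may differ):

```python
# Assumption (ours, not Folding@home's): one frame saved every ~2 ps.
atoms = 442_881                  # atoms in the SARS-CoV-2 spike simulation
sim_time_ps = 1.2e-6 / 1e-12     # 1.2 microseconds expressed in picoseconds
frame_interval_ps = 2.0

frames = sim_time_ps / frame_interval_ps   # 600,000 saved frames
position_records = atoms * frames          # atom positions to track

print(f"{frames:,.0f} frames, {position_records:.2e} position records")
# roughly 2.7e11 records, i.e. on the order of 300 billion
```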

Combined with the two dozen other coronavirus proteins they studied, Folding@home amassed the largest collection of molecular simulations in history.

Omniverse Simulates a Coronavirus Close Up

The dataset “ended up on my desk when someone asked what we could do with it using more than the typical scientific tools to really make it shine,” said Peter Messmer, who leads a scientific visualization team at NVIDIA.

Using Visual Molecular Dynamics, a standard tool for scientists, he pulled the data into NVIDIA Omniverse, a platform built for collaborative 3D graphics and simulation that will soon be in open beta. Then the magic happened.

The team connected Autodesk’s Maya animation software to Omniverse to visualize a camera path, creating a fly-through of the proteins’ geometric intricacies. The platform’s core technologies such as NVIDIA Material Definition Language (MDL) let the team give tangible surface properties to molecules, creating translucent or glowing regions to help viewers see critical features more clearly.

With Omniverse, “researchers are not confined to scientific visualization tools, they can use the same tools the best artists and movie makers use to deliver a cinematic rendering — we’re bringing these two worlds together,” Messmer said.

Simulation Experts Share Their Story Live

The result was a visually stunning presentation where each spike on a coronavirus protein is represented with more than 1.8 million triangles, rendered by a bank of NVIDIA RTX GPUs.

Zimmerman and Messmer will co-host a live Q&A technical session Oct. 8 at 11 a.m. Pacific time to discuss how they developed the simulation that packs nearly 150 million triangles to represent a millisecond in a protein’s life.

The work validates the mission behind Omniverse to create a universal virtual environment that spans industries and disciplines. We’re especially proud to see the platform serve science in the fight against the pandemic.

The experience made Zimmerman “incredibly optimistic about the future of science. NVIDIA GPUs have been instrumental in generating our datasets, and now those GPUs running Omniverse are helping us see our work in a new and vivid way,” he said.

Visit NVIDIA’s COVID-19 Research Hub to learn more about how AI and GPU-accelerated technology continues to fight the pandemic. And watch NVIDIA CEO Jensen Huang describe in a portion of his GTC keynote below how Omniverse is playing a role.

The post Coronavirus Gets a Close-Up: Folding@home Live in NVIDIA Omniverse appeared first on The Official NVIDIA Blog.
