Making an Impact: GFN Thursday Transforms Macs Into GeForce Gaming PCs

Thanks to the GeForce cloud, even Mac users can be PC gamers. This GFN Thursday, fire up your MacBook and get your game on.

This week brings eight more games to the GeForce NOW library. Plus, members can play Genshin Impact and claim a reward to kick-start their streaming journeys on GeForce NOW.

Mac User by Day, Gamer by Night

Love using a Mac, but can’t play the PC-only game that everyone’s talking about — like Genshin Impact or this week’s Epic Games Store free game, Car Mechanic Simulator 2018? GeForce NOW transforms nearly any Mac into a high-end gaming rig, rendering games at full quality and streaming them to MacBook Pros, MacBook Airs, iMacs and Mac minis.

On GeForce NOW, you play the real PC versions of games without having to worry whether something has been ported to Mac. Since the native PC versions of games stream straight from the cloud, gamers can upgrade to the newest Apple hardware with confidence.

GeForce NOW RTX 3080 members can play on M1 Mac laptops at up to 1600p, or up to 4K resolution on supported external displays. Stream with even longer session lengths — up to eight hours. Members on RTX 3080 and Priority plans can even play with RTX ON for supported games, experiencing modern classics like Cyberpunk 2077 and Control with real-time ray tracing. No PC required.

Game saves are synced across each digital store for supported games, so members can play on Macs, as well as any other supported device, without losing progress.

Join today to see what it’s like to have the best of both PC and Mac worlds.

Get Started With Genshin Impact

This week brings the release of Genshin Impact, as well as rewards for Travelers playing on GeForce NOW.

Embark on a journey as a Traveler from another world and search for a missing sibling in the fantastic continent of Teyvat. Explore immersive landscapes, dive deep into rich quests alongside iconic characters and complete daily challenges, streaming across supported PCs, Macs and Chromebooks.

RTX 3080 members can even play with ultra-low latency, streaming at 1440p and 120 frames per second or in 4K resolution at 60 FPS on the PC and Mac apps.

Genshin Impact Reward on GeForce NOW
Start the adventure off right with rewards in “Genshin Impact.”

Members who’ve opted in to rewards will receive an email for a starter kit that can be claimed through the NVIDIA Rewards redemption portal. The kit will become available in game once players reach Adventure Rank 10.

The reward includes 10,000 Mora to purchase various items, five Fine Enhancement Ores to enhance weapons, three Squirrel Fish and three Northern Apple Stews for fuel, and 10 Adventurer’s Experience points to level up characters.

Getting membership rewards for streaming games on the cloud is easy. Log in to your NVIDIA account and select “GEFORCE NOW” from the header, then scroll down to “REWARDS” and click the “UPDATE REWARDS SETTINGS” button. Check the box in the dialog that appears to start receiving special offers and in-game goodies.

Jump Into the Newest Games

Planet Zoo on GeForce NOW
Get a little wild this week with new endangered animals to care for and more in the Planet Zoo: Conservation Pack.

There’s something for everyone on GeForce NOW. This week brings new in-game content like the Planet Zoo: Conservation Pack, the newest DLC for Frontier Developments’ ultimate zoo sim.

Members can also stream the following eight new titles this week:

Finally, we’ve got a little challenge for you this week. Let us know your answer on Twitter or in the comments below.


Meet the Omnivore: Director of Photography Revs Up NVIDIA Omniverse to Create Sleek Car Demo

Editor’s note: This post is a part of our Meet the Omnivore series, which features individual creators and developers who use NVIDIA Omniverse to accelerate their 3D workflows and create virtual worlds.

A camera begins in the sky, flies through some trees and smoothly exits the forest, all while precisely tracking a car driving down a dirt path. This would be all but impossible in the real world, according to film and photography director Brett Danton.

But Danton made what he calls this “impossible camera move” possible for an automotive commercial — at home, with cinematic quality and physical accuracy.

He pulled off the feat using NVIDIA Omniverse, a 3D design collaboration and world simulation platform that enhanced his typical creative workflow and connected various apps he uses, including Autodesk Maya, Epic Games Unreal Engine and Omniverse Create.

With 30+ years of experience in the digital imagery industry, U.K.-based Danton creates advertisements for international clients, showcasing products ranging from cosmetics to cars.

His latest projects, like the above using a Volvo car, demonstrate how a physical location can be recreated for a virtual shoot, delivering photorealistic rendered sequences that match cinematic real-world footage.

“This breaks from traditional imagery and shifts the gears of what’s possible in the digital arts, allowing multiple deliverables inside the one asset,” Danton said.

The physically accurate simulation capabilities of Omniverse took Danton’s project the extra mile, animating a photorealistic car that reacts to the dirt road’s uneven surface as it would in real life.

And by working with Universal Scene Description (USD)-based assets from connected digital content creation tools like Autodesk Maya and Unreal Engine in Omniverse, Danton collaborated with other art departments from his home, just outside of London.

“Omniverse gives me an entire studio on my desktop,” Danton said. “It’s impossible to tell the difference between the real location and what’s been created in Omniverse, and I know that because I went and stood in the real location to create the virtual set.”

Real-Time Collaboration for Multi-App Workflows

To create the forest featured in the car commercial, Danton collaborated with award-winning design studio Ars Thanea. The team shot countless 100-megapixel images to use as references, resulting in a point cloud — or set of data points representing 3D shapes in space — that totaled 250 gigabytes.

The team then used Omniverse as the central hub for all of the data exchange, accelerated by NVIDIA RTX GPUs. Autodesk Maya served as the entry point for camera animation and initial lighting before the project’s data was brought into Omniverse with an Omniverse Connector.

And with the Omniverse Create app, the artists placed trees by hand, created tree patches and tweaked them to fit the forest floor. Omniverse-based real-time collaboration was key for enabling high-profile visual effects artists to work together remotely and on site, Danton said.

Omniverse Create uses Pixar’s USD format to accelerate advanced scene composition and assemble, light, simulate and render 3D scenes in real time.
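
For a sense of what USD-based composition looks like under the hood, here’s a minimal sketch using Pixar’s open-source USD Python API (pxr). The file paths and prim names are hypothetical stand-ins, not assets from Danton’s project.

```python
# Minimal sketch: composing a scene with Pixar's USD Python API (pxr).
# File paths and prim names are hypothetical, for illustration only.
from pxr import Usd, UsdGeom, UsdLux, Gf

# Create a new stage that will reference assets exported from other tools.
stage = Usd.Stage.CreateNew("forest_shot.usda")
UsdGeom.SetStageUpAxis(stage, UsdGeom.Tokens.y)

# Reference an asset authored elsewhere (e.g., a tree exported from Maya).
tree = stage.DefinePrim("/World/Tree_01", "Xform")
tree.GetReferences().AddReference("assets/tree.usd")

# Place the referenced tree in the scene.
UsdGeom.XformCommonAPI(tree).SetTranslate(Gf.Vec3d(12.0, 0.0, -4.0))

# Add a simple distant light so the scene can be rendered.
light = UsdLux.DistantLight.Define(stage, "/World/Sun")
light.CreateIntensityAttr(3000.0)

stage.GetRootLayer().Save()
```

Because connected apps exchange the same USD layers, an edit saved this way can flow between tools such as Maya and Omniverse Create without manual file conversion.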

Photorealistic Lighting With Path Tracing

When directing projects in physical production sites and studios, Danton said he was limited in what he could achieve with lighting — depending on resources, time of day and many other factors. Omniverse removes such creative limitations.

“I can now pre-visualize any of the shots I want to take, and on top of that, I can light them in Omniverse in a photorealistic way,” Danton said.

When he moves a light in Omniverse, the scene reacts exactly the way it would in the real world.

This ability, enabled by Omniverse’s RTX-powered real-time ray tracing and path tracing, is Danton’s favorite aspect of the platform. It lets him create photorealistic, cinematic sequences with “true feel of light,” which wasn’t possible before, he said.

In the Volvo car clip above, for example, the Omniverse lighting reacts on the car as it would in the forest, with physically accurate reflections and light bouncing off the windows.

“I’ve tried other software before, and Omniverse is far superior to anything else I have seen because of its real-time rendering and collaborative workflow capabilities,” Danton said.

Join in on the Creation

Creators across the world can experience NVIDIA Omniverse for free, and enterprise teams can use the platform for their projects.

Plus, join the #MadeInMachinima contest, running through June 27, for a chance to win the latest NVIDIA Studio laptop.

Learn more about Omniverse by watching GTC sessions on demand — featuring visionaries from the Omniverse team, Adobe, Autodesk, Epic Games, Pixar, Unity and Walt Disney Studios.

Follow Omniverse on Instagram, Twitter, YouTube and Medium for additional resources and inspiration. Check out the Omniverse forums and join our Discord Server to chat with the community.


Artem Cherkasov and Olexandr Isayev on Democratizing Drug Discovery With NVIDIA GPUs

It may seem intuitive that AI and deep learning can speed up workflows — including novel drug discovery, a typically years-long and several-billion-dollar endeavor.

But professors Artem Cherkasov and Olexandr Isayev were surprised to find that no recent academic papers provided a comprehensive, global research review of how deep learning and GPU-accelerated computing impact drug discovery.

In March, they published a paper in Nature to fill this gap, presenting an up-to-date review of the state of the art for GPU-accelerated drug discovery techniques.

Cherkasov, a professor in the department of urologic sciences at the University of British Columbia, and Isayev, an assistant professor of chemistry at Carnegie Mellon University, join NVIDIA AI Podcast host Noah Kravitz this week to discuss how GPUs can help democratize drug discovery.

In addition, the guests cover their inspiration and process for writing the paper, talk about NVIDIA technologies that are transforming the role of AI in drug discovery, and give tips for adopting new approaches to research.

You Might Also Like

Lending a Helping Hand: Jules Anh Tuan Nguyen on Building a Neuroprosthetic

Is it possible to manipulate things with your mind? Possibly. University of Minnesota postdoctoral researcher Jules Anh Tuan Nguyen discusses allowing amputees to control their prosthetic limbs with their thoughts, using neural decoders and deep learning.

AI of the Tiger: Conservation Biologist Jeremy Dertien on Real-Time Poaching Prevention

Fewer than 4,000 tigers remain in the wild due to a combination of poaching, habitat loss and environmental pressures. Clemson University’s Jeremy Dertien discusses using AI-equipped cameras to monitor poaching to protect a majority of the world’s remaining tiger populations.

Wild Things: 3D Reconstructions of Endangered Species with NVIDIA’s Sifei Liu

Studying endangered species can be difficult, as they’re elusive, and the act of observing them can disrupt their lives. Sifei Liu, a senior research scientist at NVIDIA, discusses how scientists can avoid these pitfalls by studying AI-generated 3D representations of these endangered species.

Subscribe to the AI Podcast: Now Available on Amazon Music

You can now listen to the AI Podcast through Amazon Music.

Also get the AI Podcast through iTunes, Google Podcasts, Google Play, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn.

Make the AI Podcast better: Have a few minutes to spare? Fill out our listener survey.



AI in the Big Easy: NVIDIA Research Lets Content Creators Improvise With 3D Objects

Jazz is all about improvisation — and NVIDIA is paying tribute to the genre with AI research that could one day enable graphics creators to improvise with 3D objects created in the time it takes to hold a jam session.

The method, NVIDIA 3D MoMa, could empower architects, designers, concept artists and game developers to quickly import an object into a graphics engine to start working with it, modifying scale, changing the material or experimenting with different lighting effects.

NVIDIA Research showcased this technology in a video celebrating jazz and its birthplace, New Orleans, where the paper behind 3D MoMa will be presented this week at the Conference on Computer Vision and Pattern Recognition.

Extracting 3D Objects From 2D Images

Inverse rendering, a technique to reconstruct a series of still photos into a 3D model of an object or scene, “has long been a holy grail unifying computer vision and computer graphics,” said David Luebke, vice president of graphics research at NVIDIA.

“By formulating every piece of the inverse rendering problem as a GPU-accelerated differentiable component, the NVIDIA 3D MoMa rendering pipeline uses the machinery of modern AI and the raw computational horsepower of NVIDIA GPUs to quickly produce 3D objects that creators can import, edit and extend without limitation in existing tools,” he said.

To be most useful for an artist or engineer, a 3D object should be in a form that can be dropped into widely used tools such as game engines, 3D modelers and film renderers. That form is a triangle mesh with textured materials, the common language used by such 3D tools.
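
As a rough illustration of why that format travels so well (this is not 3D MoMa code, just the bare data layout), a textured triangle mesh boils down to a few arrays:

```python
# Illustrative only: a textured triangle mesh as plain arrays (not 3D MoMa code).
import numpy as np

vertices = np.array([[0.0, 0.0, 0.0],   # 3D positions of each vertex
                     [1.0, 0.0, 0.0],
                     [0.0, 1.0, 0.0],
                     [1.0, 1.0, 0.0]], dtype=np.float32)

faces = np.array([[0, 1, 2],            # each row indexes three vertices
                  [2, 1, 3]], dtype=np.int32)

uvs = np.array([[0.0, 0.0],             # texture coordinates per vertex
                [1.0, 0.0],
                [0.0, 1.0],
                [1.0, 1.0]], dtype=np.float32)

# Any engine or DCC tool that speaks triangle meshes can consume these arrays,
# e.g. by writing them to an OBJ file or a USD mesh prim.
```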

trumpet mesh
Triangle meshes are the underlying frames used to define shapes in 3D graphics and modeling.

Game studios and other creators would traditionally create 3D objects like these with complex photogrammetry techniques that require significant time and manual effort. Recent work in neural radiance fields can rapidly generate a 3D representation of an object or scene, but not in a triangle mesh format that can be easily edited.

NVIDIA 3D MoMa generates triangle mesh models within an hour on a single NVIDIA Tensor Core GPU. The pipeline’s output is directly compatible with the 3D graphics engines and modeling tools that creators already use.

The pipeline’s reconstruction includes three features: a 3D mesh model, materials and lighting. The mesh is like a papier-mâché model of a 3D shape built from triangles. With it, developers can modify an object to fit their creative vision. Materials are 2D textures overlaid on the 3D meshes like a skin. And NVIDIA 3D MoMa’s estimate of how the scene is lit allows creators to later modify the lighting on the objects.
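
To give a feel for how “differentiable components” fit together, here’s a deliberately tiny, hedged sketch of gradient-based inverse rendering in PyTorch. A simple Lambertian shader recovers a material color and a light direction from target pixel values; the real pipeline also recovers geometry through a differentiable rasterizer, which this toy omits.

```python
# Toy sketch of the idea behind differentiable inverse rendering (not the 3D MoMa
# pipeline): scene parameters are optimized by gradient descent so that a
# differentiable shading model reproduces reference pixel values.
import torch

torch.manual_seed(0)

# "Ground truth" scene we pretend was photographed: per-pixel surface normals,
# a diffuse albedo and a light direction.
normals = torch.nn.functional.normalize(torch.randn(1024, 3), dim=-1)
true_albedo = torch.tensor([0.8, 0.3, 0.2])
true_light = torch.nn.functional.normalize(torch.tensor([0.3, 0.8, 0.5]), dim=-1)
target = (normals @ true_light).clamp(min=0.0)[:, None] * true_albedo

def render(albedo, light_dir):
    # Simple Lambertian shading: every step is differentiable w.r.t. the inputs.
    l = torch.nn.functional.normalize(light_dir, dim=-1)
    lambert = (normals @ l).clamp(min=0.0)          # [N]
    return lambert[:, None] * albedo                # [N, 3]

albedo = torch.rand(3, requires_grad=True)
light = torch.randn(3, requires_grad=True)
opt = torch.optim.Adam([albedo, light], lr=0.05)

for step in range(500):
    opt.zero_grad()
    loss = torch.mean((render(albedo, light) - target) ** 2)
    loss.backward()
    opt.step()

print("recovered albedo:", albedo.detach())
print("recovered light :", torch.nn.functional.normalize(light.detach(), dim=-1))
```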

Tuning Instruments for Virtual Jazz Band

To showcase the capabilities of NVIDIA 3D MoMa, NVIDIA’s research and creative teams started by collecting around 100 images each of five jazz band instruments — a trumpet, trombone, saxophone, drum set and clarinet — from different angles.

NVIDIA 3D MoMa reconstructed these 2D images into 3D representations of each instrument, represented as meshes. The NVIDIA team then took the instruments out of their original scenes and imported them into the NVIDIA Omniverse 3D simulation platform to edit.

editing the 3D trumpet in NVIDIA Omniverse

In any traditional graphics engine, creators can easily swap out the material of a shape generated by NVIDIA 3D MoMa, as if dressing the mesh in different outfits. The team did this with the trumpet model, for example, instantly converting its original plastic to gold, marble, wood or cork.
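
Outside any particular engine, a material swap like that can be expressed directly in USD. Here’s a minimal, hedged sketch using Pixar’s UsdShade Python API; the stage file and prim paths are hypothetical stand-ins.

```python
# Hedged sketch: rebinding a different material to a mesh in USD, which is how a
# "material swap" can be expressed outside any particular engine.
# Paths are hypothetical; assumes the stage already contains a /World/Looks/Gold material.
from pxr import Usd, UsdShade

stage = Usd.Stage.Open("trumpet_scene.usda")            # hypothetical file
trumpet = stage.GetPrimAtPath("/World/Trumpet")          # hypothetical prim path
gold = UsdShade.Material.Get(stage, "/World/Looks/Gold")

# Unbind whatever material the mesh currently uses, then bind the new one.
binding = UsdShade.MaterialBindingAPI.Apply(trumpet)
binding.UnbindAllBindings()
binding.Bind(gold)

stage.GetRootLayer().Save()
```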

Creators can then place the newly edited objects into any virtual scene. The NVIDIA team dropped the instruments into a Cornell box, a classic graphics test for rendering quality. They demonstrated that the virtual instruments react to light just as they would in the physical world, with the shiny brass instruments reflecting brightly, and the matte drum skins absorbing light.

These new objects, generated through inverse rendering, can be used as building blocks for a complex animated scene — showcased in the video’s finale as a virtual jazz band.

The paper behind NVIDIA 3D MoMa will be presented in a session at CVPR on June 22 at 1:30 p.m. Central time. It’s one of 38 papers with NVIDIA authors at the conference. Learn more about NVIDIA Research at CVPR.


NVIDIA Joins Forum to Help Lay the Foundation of the Metaverse

The metaverse is the next big step in the evolution of the internet — the 3D web — which presents a major opportunity for every industry from entertainment to automotive to manufacturing, robotics and beyond.

That’s why NVIDIA is joining our partners in the Metaverse Standards Forum, an open venue for all interested parties to discuss and debate how best to build the foundations of the metaverse.

From a 2D to a 3D Internet 

The early internet of the ’70s and ’80s was accessed purely through text-based interfaces, UNIX shells and consoles. The ’90s introduced the World Wide Web, which made the internet accessible to millions by providing a more natural and intuitive interface with images and text combined into 2D worlds in the form of web pages.

The metaverse that is coming into existence is a 3D spatial overlay of the internet. It continues the trend of making the internet more accessible and more natural for humans by making the interface to the internet indistinguishable from our interface to the real world.

The 3D computer graphics and simulation technologies developed over the past three decades in CAD/CAM, visual effects and video games, combined with the computing power now available, have converged to a point where we can now start building such an interface.

A Place for Both Work and Play

For most people, the term metaverse primarily evokes thoughts of gaming or socializing. Those will definitely be big, important use cases of the metaverse, but just as with the internet, it won’t be limited to them.

We use the internet for far more than play. Companies and industries run on the internet; it’s part of their essential infrastructure. We believe the same will be true for the emerging metaverse.

For example, retailers are opening virtual shops to sell real and virtual goods. Researchers are using digital twins to design and simulate fusion power plants.

BMW Group is developing a digital twin of an entire factory to more rapidly design and operate efficient and safe factories. NVIDIA is building an AI supercomputer to power a digital twin of the Earth to help researchers study and solve climate change.

A Lesson From the Web

The key to the success of the web from the very start in 1993 was the introduction of a standard and open way of describing a web page — HyperText Markup Language, or HTML. Without HTML’s adoption, we would’ve had disconnected islands on the web, each only linking within themselves.

Fortunately, the creators of the early web and internet understood that open standards — particularly for data formats — were accelerators of growth and a network effect.

The metaverse needs an equivalent to HTML to describe interlinked 3D worlds in glorious detail. Moving between 3D worlds using various tools, viewers and browsers must be seamless and consistent.

The solution is Pixar’s Universal Scene Description (USD) — an open and extensible format, library and composition engine.

USD is one of many of the building blocks we’ll need to build the metaverse. Another is glTF, a 3D transmission format developed within Khronos Group. We see USD and glTF as compatible technologies and hope to see them coevolve as such.

A Constellation of Standards

Neil Trevett, vice president of developer ecosystems at NVIDIA and the president of The Khronos Group, the forum’s host, says the metaverse will require a constellation of standards.

The forum won’t set them, but it’ll be a place where designers and users can learn about and try ones they want to use and identify any that are missing or need to be expanded.

We’re thrilled to see the formation of the Metaverse Standards Forum — a free and open venue where people from every domain can gather to contribute to the exciting new era of the internet: the metaverse!


3D Artist Jae Solina Goes Cyberpunk This Week ‘In the NVIDIA Studio’

Editor’s note: This post is part of our weekly In the NVIDIA Studio series, which celebrates featured artists, offers creative tips and tricks, and demonstrates how NVIDIA Studio technology accelerates creative workflows. 

3D artist Jae Solina, who goes by the stage name JSFILMZ, steps In the NVIDIA Studio this week to share his unique 3D creative workflow in the making of Cyberpunk Short Film — a story shrouded in mystery with a tense exchange between two secretive contacts.

As an avid movie buff, JSFILMZ takes inspiration from innovative movie directors Christopher Nolan, David Fincher and George Lucas. He admires their abilities to combine technical skill with storytelling heightened by exciting plot twists.

The Cyberpunk Short Film setting displays stunning realism with ray-traced lighting, shadows and reflections — complemented by rich, vibrant colors.

Astonishingly, JSFILMZ created the film in just one day with the NVIDIA Omniverse platform for 3D design collaboration and world simulation, using the Omniverse Machinima app and the Reallusion iClone Connector. He alternated between systems that use an NVIDIA RTX A6000 GPU and a GeForce RTX 3070 Laptop GPU.

The #MadeinMachinima contest ends soon. Omniverse users can build and animate cinematic short stories with Omniverse Machinima for a chance to win RTX-accelerated NVIDIA Studio laptops. Entries are being accepted until Monday, June 27. 

An Omniverse Odyssey With Machinima 

JSFILMZ’s creative journey starts with scene building in Omniverse Machinima, placing and moving background objects to create the futuristic cyberpunk diner. His RTX GPUs power Omniverse’s built-in RTX renderer to achieve fast, interactive movement within the viewport while preserving photorealistic detail. With less distracting denoising, JSFILMZ can focus on creating without having to wait for his scenes to render.

Ray-traced light reflects off the rim of the character’s glasses, achieving impressive photorealism.

Pulling assets from the NVIDIA MDL material library, JSFILMZ achieved peak realism with every surface, material and texture.


The artist then populated the scene with human character models downloaded from the Reallusion content store.

Automated facial animation in Reallusion iClone.

Vocal animation was generated in the Reallusion iClone Connector using the AccuLips feature. It simulates human speech behavior, with each mouth shape naturally taking on the qualities of those that precede or follow it. JSFILMZ simply uploads voiceover files from his actors, and the animations are generated automatically.


To capture animations while sitting, JSFILMZ turned to an Xsens Awinda starter body-motion-capture suit, acting out movements for both characters. Using the Xsens software, he processed, cleaned up and exported the visual effects data.


JSFILMZ integrated unique walking animations for each character by searching and selecting the perfect animation sequences in the Reallusion ActorCore store. He returned to the iClone Connector to import and apply separate motion captures to the characters, completing animations for the scene.

The last 3D step was to adjust lighting. For tips on how to light in Omniverse, check out JSFILMZ’s live-streamed tutorial, which offers Omniverse know-how and his lighting technique.

“Cyberpunk Short Film” by 3D artist JSFILMZ.

According to JSFILMZ, adding and manipulating lights revealed another advantage of using Machinima: the ability to conveniently switch between real-time ray-traced mode for more fluid movement in the viewport and the interactive path-traced mode for the most accurate, detailed view.

He then exported final renders with ray tracing using the Omniverse RTX Renderer, which is powered by NVIDIA RTX or GeForce RTX GPUs.

Working with multiple 3D applications connected by Omniverse saved JSFILMZ countless hours of rendering, downloading files, converting file types, reuploading and more. “It’s so crazy that I can do all this, all at home,” he said.

Completing Cyberpunk Short Film required editing and color correction in DaVinci Resolve.

The NVIDIA hardware encoder enables speedy exports.

Color grading, video editing and color scope features deployed by JSFILMZ are all accelerated by his GPU, allowing for quick edits. And the NVIDIA hardware encoder and decoder make GPU-accelerated exports very fast.

And with that, Cyberpunk Short Film was ready for viewing.

3D artists can benefit from JSFILMZ’s NVIDIA Omniverse tutorial YouTube playlist. It’s an extensive overview of the Omniverse platform for creators, covering the basics from installation and set up to in-app features such as lighting, rendering and animating.

3D artist and YouTube content creator Jae Solina, aka JSFILMZ.

JSFILMZ teaches 3D creative workflows specializing in NVIDIA Omniverse and Unreal Engine 5 on his YouTube channel and via Udemy courses.

Learn more about NVIDIA Omniverse, including tips, tricks and more on the Omniverse YouTube channel. For additional support, explore the Omniverse forums or join the Discord server to chat with the community. Check out the Omniverse Twitter, Instagram and Medium page to stay up to date.

Follow NVIDIA Studio on Instagram, Twitter and Facebook. Access tutorials on the Studio YouTube channel and get updates directly in your inbox by subscribing to the NVIDIA Studio newsletter.


NVIDIA Accelerates Open Data Center Innovation

NVIDIA today became a founding member of the Linux Foundation’s Open Programmable Infrastructure (OPI) project, while making its NVIDIA DOCA networking software APIs widely available to foster innovation in the data center.

Businesses are embracing open data centers, which require applications and services that are easily integrated with other solutions for simplified, lower-cost and sustainable management. Moving to open NVIDIA DOCA will help develop and nurture broad and vibrant DPU ecosystems and power unprecedented data center transformation.

The OPI project aims to create a community-driven, standards-based, open ecosystem for accelerating networking and other data center infrastructure tasks using DPUs.

DOCA includes drivers, libraries, services, documentation, sample applications and management tools to speed up and simplify the development and performance of applications. It provides flexibility and portability for BlueField applications written using accelerated drivers or low-level libraries, such as DPDK, SPDK, Open vSwitch or OpenSSL, and we plan to continue this support. As part of OPI, developers will be able to create a common programming layer to support many of these open drivers and libraries with DPU acceleration.

DOCA library APIs are already publicly available and documented for developers. Open licensing of these APIs will ensure that applications developed using DOCA will support BlueField DPUs as well as those from other providers.

NVIDIA DOCA stack
DOCA has always been built on an open foundation. Now NVIDIA is opening the APIs to the DOCA libraries and plans to add OPI support.

Expanding Use of DPUs

AI, containers and composable infrastructure are increasingly important for enterprise and cloud data centers. This is driving the use of DPUs in servers to support software-defined, hardware-accelerated networking, east-west traffic and zero-trust security.

Only the widespread deployment of DPUs such as NVIDIA BlueField can support the ability to offload, accelerate and isolate data center workloads, including networking, storage, security and DevOps management.

NVIDIA’s history of open innovation over the decades includes engaging with leading consortiums, participating in standards committees and contributing to a range of open source software and communities.

We contribute frequently to open source and open-license projects and software such as the Linux kernel, DPDK, SPDK, NVMe over Fabrics, FreeBSD, Apache Spark, Free Range Routing, SONiC, Open Compute Project and other areas covering networking, virtualization, containers, AI, data science and data encryption.

NVIDIA is often among the top three code contributors to many releases of Linux and DPDK. And we’ve historically included an open source version of our networking drivers in the Linux kernel.

With OPI, customers, ISVs, infrastructure appliance vendors and systems integrators will be able to create applications for BlueField DPUs using DOCA to gain the best possible performance and easiest developer experience for accelerated data center infrastructure.


The King’s Swedish: AI Rewrites the Book in Scandinavia

If the King of Sweden wants help drafting his annual Christmas speech this year, he could ask the same AI model that’s available to his 10 million subjects.

As a test, researchers prompted the model, called GPT-SW3, to draft one of the royal messages, and it did a pretty good job, according to Magnus Sahlgren, who heads research in natural language understanding at AI Sweden, a consortium kickstarting the country’s journey into the machine learning era.

“Later, our minister of digitalization visited us and asked the model to generate arguments for political positions and it came up with some really clever ones — and he intuitively understood how to prompt the model to generate good text,” Sahlgren said.

Early successes inspired work on an even larger and more powerful version of the language model they hope will serve any citizen, company or government agency in Scandinavia.

A Multilingual Model

The current version packs 3.6 billion parameters and is smart enough to do a few cool things in Swedish. Sahlgren’s team aims to train a state-of-the-art model with a whopping 175 billion parameters that can handle all sorts of language tasks in the Nordic languages of Swedish, Danish, Norwegian and, it hopes, Icelandic, too.

For example, a startup can use it to automatically generate product descriptions for an e-commerce website given only the products’ names. Government agencies can use it to quickly classify and route questions from citizens.

Companies can ask it to rapidly summarize reports so they can react fast. Hospitals can run distilled versions of the model privately on their own systems to improve patient care.
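
Those “distilled versions” rely on knowledge distillation, a general technique in which a compact student model learns to match a larger teacher’s outputs. Here’s a minimal PyTorch sketch of the idea, using tiny stand-in networks rather than GPT-SW3:

```python
# Hedged sketch of knowledge distillation, the general technique behind running a
# smaller "distilled" model privately. Teacher and student are tiny stand-in
# networks, not GPT-SW3.
import torch
import torch.nn.functional as F

teacher = torch.nn.Sequential(torch.nn.Linear(128, 512), torch.nn.ReLU(),
                              torch.nn.Linear(512, 1000))
student = torch.nn.Sequential(torch.nn.Linear(128, 64), torch.nn.ReLU(),
                              torch.nn.Linear(64, 1000))
teacher.eval()
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
T = 2.0  # softening temperature

for _ in range(100):
    x = torch.randn(32, 128)                      # placeholder input batch
    with torch.no_grad():
        teacher_logits = teacher(x)
    student_logits = student(x)
    # Student learns to match the teacher's softened output distribution.
    loss = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                    F.softmax(teacher_logits / T, dim=-1),
                    reduction="batchmean") * (T * T)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```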

“It’s a foundational model we will provide as a service for whatever tasks people want to solve,” said Sahlgren, who’s been working at the intersection of language and machine learning since he earned his Ph.D. in computational linguistics in 2006.

Permission to Speak Freely

It’s a capability increasingly seen as a strategic asset, a keystone of digital sovereignty in a world that speaks thousands of languages across nearly 200 countries.

Most language services today focus on Chinese or English, the world’s two most-spoken tongues. They’re typically created in China or the U.S., and they aren’t free.

“It’s important for us to have models built in Sweden for Sweden,” Sahlgren said.

Small Team, Super System

“We’re a small country and a core team of about six people, yet we can build a state-of-the-art resource like this for people to use,” he added.

That’s because Sweden has a powerful engine in BerzeLiUs, a 300-petaflops AI supercomputer at Linköping University. It trained the initial GPT-SW3 model using just 16 of the 60 nodes in the NVIDIA DGX SuperPOD.

The next model may exercise all the system’s nodes. Such super-sized jobs require super software like the NVIDIA NeMo Megatron framework.

“It lets us scale our training up to the full supercomputer, and we’ve been lucky enough to have access to experts in the NeMo development team — without NVIDIA it would have been so much more complicated to come this far,” he said.

A Workflow for Any Language

NVIDIA’s engineers created a recipe based on NeMo and an emerging process called p-tuning that optimizes massive models fast, and it’s geared to work with any language.
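
Conceptually, p-tuning is straightforward: the pretrained model stays frozen and only a small set of trainable “virtual prompt” embeddings is learned for the task at hand. The following is a simplified, hedged sketch of that idea in PyTorch; it is not the NeMo Megatron recipe, and “gpt2” is used only as a stand-in model.

```python
# Conceptual sketch of p-tuning (not the NeMo Megatron recipe): the base language
# model stays frozen and only a small set of "virtual prompt" embeddings is trained.
# "gpt2" is a stand-in model purely for illustration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
for p in model.parameters():          # freeze the pretrained model
    p.requires_grad = False

num_virtual_tokens = 20
hidden = model.config.n_embd
prompt_embeddings = torch.nn.Parameter(torch.randn(num_virtual_tokens, hidden) * 0.02)
optimizer = torch.optim.Adam([prompt_embeddings], lr=1e-3)

def step(text):
    ids = tokenizer(text, return_tensors="pt").input_ids
    tok_embeds = model.get_input_embeddings()(ids)                     # [1, T, H]
    inputs = torch.cat([prompt_embeddings.unsqueeze(0), tok_embeds], dim=1)
    # Virtual tokens carry no loss; only the real tokens are used as labels.
    labels = torch.cat(
        [torch.full((1, num_virtual_tokens), -100, dtype=torch.long), ids], dim=1
    )
    loss = model(inputs_embeds=inputs, labels=labels).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()

print(step("Exempel: en kort produktbeskrivning på svenska."))
```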

In one early test, a model nearly doubled its accuracy after NVIDIA engineers applied the techniques.

Magnus Sahlgren, AI Sweden

What’s more, it requires one-tenth the data, slashing the need for tens of thousands of hand-labeled records. That opens the door for users to fine-tune a model with the relatively small, industry-specific datasets they have at hand.

“We hope to inspire a lot of entrepreneurship in industry, startups and the public using our technology to develop their own apps and services,” said Sahlgren.

Writing the Next Chapter

Meanwhile, NVIDIA’s developers are already working on ways to make the enabling software better.

One test shows great promise for training new capabilities using widely available English datasets into models designed for any language. In another effort, they’re using the p-tuning techniques in inference jobs so models can learn on the fly.

Zenodia Charpy, a senior solutions architect at NVIDIA based in Gothenburg, shares the enthusiasm of the AI Sweden team she supports. “We’ve only just begun trying new and better methods to tackle these large language challenges — there’s much more to come,” she said.

The GPT-SW3 model will be made available by the end of the year via an early access program. To apply, contact francisca.hoyer@ai.se.


Smart Utility Vehicle: NIO ES7 Redefines Category with Intelligent, Versatile EV Powered by NVIDIA DRIVE Orin

Accounting for nearly half of global vehicle sales in 2021, SUVs have grown in popularity given their versatility. Now, NIO aims to amp up the volume further.

This week, the electric automaker unveiled the ES7 SUV, purpose-built for the intelligent vehicle era. Its sporty yet elegant body houses an array of cutting-edge technology, including the Adam autonomous driving supercomputer, powered by NVIDIA DRIVE Orin.

SUVs gained a foothold among consumers in the late 1990s as useful haulers for people and cargo. As powertrain and design technology developed, the category has flourished, with some automakers converting their fleets to mostly SUVs and trucks.

With the ES7, NIO is adding even more to the SUV category, packing it with plenty of features to please any driver.

The intelligent EV sports 10 driving modes, in addition to autonomous capabilities that will gradually cover expressways, urban areas, parking, and battery swapping. It also includes a camping mode that maintains a comfortable cabin temperature with lower power consumption and immersive audio and lighting.

Utility Meets Technology

The technology inside the ES7 is the core of what makes it a category-transforming vehicle.

The SUV is the first to incorporate NIO’s watchtower sensor design, combining 33 high-performance lidars, radars, cameras and ultrasonic sensors arranged in and around the vehicle. Data from these sensors is fused and processed by the centralized Adam supercomputer for robust surround perception.

With more than 1,000 trillion operations per second (TOPS) of performance provided by four DRIVE Orin systems-on-a-chip (SoCs), Adam can power a wide range of intelligent features in addition to perception, with enough headroom to add new capabilities over the air.

Using multiple SoCs, Adam integrates the redundancy and diversity necessary for safe autonomous operation. The first two SoCs process the 8 gigabytes of data produced every second by the vehicle’s sensor set.

The third Orin serves as a backup to ensure the system can operate safely in any situation. And the fourth enables local training, improving the vehicle with fleet learning and personalizing the driving experience based on individual user preferences.

With high-performance compute at its center, the ES7 delivers everything an SUV customer could need, and more.

A Growing Lineup

The ES7 joins the ET7 and ET5 as the third NIO vehicle built on the DRIVE Orin-powered Adam supercomputer, adding even greater selection for customers seeking a more intelligent driving experience.

NIO intends to have vehicle offerings in more than two dozen countries and regions by 2025 to bring one of the most advanced AI platforms to more customers.

Preorders for the ES7 SUV are now open on the NIO app, with deliveries slated to begin in August.


AI for Personalized Health: Startup Advances Precision Medicine for COVID-19, Chronic Diseases

At a time when much about COVID-19 remained a mystery, U.K.-based PrecisionLife used AI and combinatorial analytics to discover new genes associated with severe symptoms and hospitalizations for patients.

The techbio company’s study, published in June 2020, pinpoints 68 novel genes associated with individuals who experienced severe disease from the virus. Over 70 percent of these targets have since been independently validated in global scientific literature as genetic risk factors for severe COVID-19 symptoms.

The startup was able to perform this early and accurate analysis using the first small COVID-19 patient dataset reported in the UK Biobank, with the help of AI trained on NVIDIA A40 GPUs and backed by CUDA software libraries. PrecisionLife’s combinatorial analytics approach identifies interactions between genetic variants and other clinical or epidemiological factors in patients.

Results are shown in the featured image above, which depicts the disease architecture stratification of a severe COVID-19 patient population at the pandemic’s outset. Colors represent patient subgroups. Circles represent disease-associated genetic variants. And lines represent co-associated variants.

PrecisionLife technology helps researchers better understand complex disease biology at a population and personal level. Beyond COVID-19, the PrecisionLife analytics platform has been used to identify targets for precision medicine for more than 30 chronic diseases, including type 2 diabetes and ALS.

The company is a member of NVIDIA Inception, a free program that supports startups revolutionizing industries with cutting-edge technology.

Unique Disease Findings

Precision medicine considers an individual’s genetics, environment and lifestyle when selecting the treatment that could work best for them. PrecisionLife focuses on identifying how combinations of such factors impact chronic diseases.

The PrecisionLife platform enables a deeper understanding of the biology that leads to chronic disease across subgroups of patients. It uses combinatorial analytics to draw insights from the genomics and clinical history of patients — pulled from datasets provided by national biobanks, research consortia, patient charities and more.

Due to the inherent heterogeneity of chronic diseases, patients with the same diagnosis don’t necessarily experience the same causes, trajectories or treatments of disease.

The PrecisionLife platform identifies subgroups — within large patient populations — that have matching disease drivers, disease progression and treatment response. This can help researchers select the right targets for drug development, the right treatments for individuals and the right patients for clinical trials.

“Chronic disease is a complex space — a multi-genetic, multi-environmental problem with multiple patient subgroups,” said Mark Strivens, chief technology officer at PrecisionLife. “We work on technology to tackle problems that previous techniques couldn’t solve, and our unique disease findings will lead to a different set of therapeutic opportunities to best treat individuals.”

PrecisionLife technology is different from traditional analytical methods, like genome-wide association studies, which work best when single genetic variants are responsible for most of the disease risk. Instead, PrecisionLife offers combinatorial analytics, discovering significant combinations of multiple genetic and environmental factors.
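
To illustrate the general idea of combinatorial association testing (not PrecisionLife’s proprietary method), here’s a simplified sketch on synthetic data: combinations of binary factors are scored for enrichment in cases versus controls.

```python
# Hedged, simplified sketch of combinatorial association testing on synthetic data,
# not PrecisionLife's proprietary method: score combinations of binary factors for
# enrichment in cases versus controls.
from itertools import combinations
import numpy as np
from scipy.stats import fisher_exact

rng = np.random.default_rng(0)
n_patients, n_factors = 2000, 12
factors = rng.integers(0, 2, size=(n_patients, n_factors))       # SNPs / exposures
# Synthetic outcome: one hidden pair of factors raises disease risk.
risk = 0.05 + 0.4 * (factors[:, 2] & factors[:, 7])
cases = rng.random(n_patients) < risk

results = []
for combo in combinations(range(n_factors), 2):
    carrier = factors[:, list(combo)].all(axis=1)                 # has every factor
    table = [[np.sum(carrier & cases),  np.sum(carrier & ~cases)],
             [np.sum(~carrier & cases), np.sum(~carrier & ~cases)]]
    odds, p = fisher_exact(table)
    results.append((combo, p))

results.sort(key=lambda r: r[1])
print("most disease-associated combinations:", results[:3])
```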

The PrecisionLife platform can analyze data from 100,000 patients in just hours using NVIDIA A40 GPUs, a previously impossible feat, according to Strivens.

Plus, being a member of NVIDIA Inception gives the PrecisionLife team access to technical resources, hardware discounts and go-to-market support.

“Inception gives us access to technical expertise and connects us with other data-driven organizations that are a part of NVIDIA’s biotechnology AI ecosystem,” Strivens said. “Training from the NVIDIA Deep Learning Institute reduces the time it takes for our team members to ramp up learning a specific branch of programming.”

As a part of the groundbreaking U.K. life sciences community, PrecisionLife has access to a hub of healthcare innovation and specialist talent, Strivens said. Looking forward, the company plans to deliver new disease insights based on combinatorial analytics all across the globe.

Learn more about PrecisionLife and apply to join NVIDIA Inception.

Subscribe to NVIDIA healthcare news.
