Support for New NVIDIA RTX 3080 Ti, 3070 Ti Studio Laptops Now Available in February Studio Driver

Support for the new GeForce RTX 3080 Ti and 3070 Ti Laptop GPUs is available today in the February Studio driver.

Updated monthly, NVIDIA Studio drivers support NVIDIA tools and optimize the most popular creative apps, delivering added performance, reliability and speed to creative workflows.

Creatives will also benefit from the February Studio driver with enhancements to their existing creative apps as well as the latest app releases, including a major update to Maxon’s Redshift renderer.

The NVIDIA Studio platform is being rapidly adopted by aspiring artists, freelancers and creative professionals who seek to take their projects to the next level. The next generation of Studio laptops further powers their ambitions.

Creativity Unleashed — 3080 Ti and 3070 Ti Studio Laptops

Downloading the February Studio driver unlocks massive time savings in essential creative apps, especially for 3080 Ti and 3070 Ti GPU owners.

Blender renders are exceptionally fast on GeForce RTX 3080 Ti and 3070 Ti GPUs with RT Cores powering hardware-accelerated ray tracing.

GeForce RTX 3080 Ti GPU laptops achieve up to 10x faster rendering speeds than the MacBook Pro 16 M1 Max.

Autodesk aficionados with a GeForce RTX 3080 Ti GPU-equipped laptop can render the heaviest of scenes much faster, saving over an hour in this example.

The GeForce RTX 3080 Ti laptop GPU renders up to 7x faster in Autodesk Maya than the MacBook Pro 16 M1 Max.

Video production specialists in REDCINE-X PRO have the freedom to edit in real time at elevated frame rates, resulting in more accurate playback and far less time in the editing bay.

Edit RED RAW video faster with GeForce RTX 3080 Ti laptop GPU.

Creators can move at the speed of light with the 2022 lineup of Studio laptops and desktops.

MSI has announced the Creator Z16 and Creator Z17 Studio laptops, set for launch in March, with up to GeForce RTX 3080 Ti Laptop GPUs.

The MSI Z17 True Pixel display features QHD+ resolution, 100 percent DCI-P3 (typical) color gamut, factory-calibrated color accuracy of Delta-E < 2 out of the box, and True Color Technology.

ASUS’s award-winning ZenBook Pro Duo, coming later this year, sports a GeForce RTX 3060 GPU, plus a 15.6-inch 4K UHD OLED touchscreen and secondary 4K screen, unlocking numerous creative possibilities.

ASUS worked closely with third-party developers — including professional video-editing software developer Corel, with more to come — to optimize ScreenPad Plus for creative workflows and productivity.

The Razer Blade 17 and 15, available now, come fully loaded with a GeForce RTX 3080 Ti GPU and 32GB of memory — and they’re configurable with a beautiful 4K 144Hz, 100-percent DCI-P3 display. The Razer Blade 14 will launch on Feb. 17.

The Razer Blade 17 features a stunning 4K UHD display with a 144Hz refresh rate for creative professionals who want their visions to truly come to life.

GIGABYTE’s newly refreshed AERO 16 and 17 Studio laptops, equipped with GeForce RTX 3070 Ti and 3080 Ti GPUs, are also now available.

The AERO 17 sports a 3mm ultra-thin bezel and X-Rite Pantone-certified 4K HDR display with Adobe RGB 100 percent color gamut.

These creative machines power RTX-accelerated tools, including NVIDIA Omniverse, Canvas and Broadcast, making next-generation AI technology even more accessible while reducing or eliminating tedious work.

Fourth-generation Max-Q technologies — including CPU Optimizer and Rapid Core Scaling — maximize creative performance in remarkably thin laptop designs.

Stay tuned for more Studio product announcements in the coming months.

Shift to Redshift RT 

Well-known to 3D artists, Maxon’s Redshift renderer is powerful, biased and GPU-accelerated — built to exceed the demands of contemporary high-end production rendering.

Redshift recently launched Redshift RT — a real-time rendering feature — in beta, sparing 3D artists long waits for renders to finalize.

Redshift RT runs exclusively on NVIDIA RTX GPUs, bolstered by RT Cores, powering hardware-accelerated, interactive ray tracing.

Redshift RT, which is part of the current release, enables a more natural, intuitive way of working. It offers increased freedom to try different options for creating spectacular content, and is best used for scene editing and render previews. Redshift Production remains the renderer of choice for the highest possible quality and control.

Redshift RT technology is integrated in the Maxon suite of creative apps including Cinema 4D, and is available for Autodesk 3ds Max and Maya, Blender, Foundry Katana and SideFX Houdini, as well as architectural products Vectorworks, Archicad and Allplan, dramatically speeding up all types of visual workflows.

With so many options, now’s the time to take a leap into 3D. Check out our Studio YouTube channel for standouts, tutorials, tips and tricks from industry-leading artists on how to get started.

Get inspired by the latest NVIDIA Studio Standouts video featuring some of our favorite digital art from across the globe.

Follow NVIDIA Studio on Facebook, Twitter and Instagram for the latest information on creative app updates, new Studio apps, creator contests and more. Get updates directly to your inbox by subscribing to the Studio newsletter.

The post Support for New NVIDIA RTX 3080 Ti, 3070 Ti Studio Laptops Now Available in February Studio Driver appeared first on The Official NVIDIA Blog.


Renovations to Stream About: Taiwan Studio Showcases Architectural Designs Using Extended Reality

Interior renovations have never looked this good.

TCImage, a studio based in Taipei, is showcasing compelling landscape and architecture designs by creating realistic 3D graphics and presenting them in virtual, augmented, and mixed reality — collectively known as extended reality, or XR.

For clients to get a better understanding of the designs, TCImage produces high-quality, 3D visualizations of the projects and puts them in a virtual environment. This lets users easily review and engage with the model in full scale, so they can get to the final design faster.

To keep up with client expectations and deliver quality content, the team at TCImage needs advanced tools and technologies that make design concepts feel like reality.

With NVIDIA RTX technology, CloudXR, Deep Learning Super Sampling (DLSS) and NVIDIA Omniverse, TCImage is at the forefront of delivering stunning renders and XR experiences that allow clients to be virtually transported to the renovation of their dreams.

Bringing Design Details to Life With RTX

To make the realistic details stand out in a design, TCImage CEO Leo Chou and his team must create all 3D visuals in high resolution. During the design process, the team uses popular applications like Autodesk 3ds Max, Autodesk Revit, Trimble SketchUp and Unreal Engine 4. Chou initially tried using a consumer-level PC to render 3D graphics, but it would take up to three hours just to render a single frame of a 4K image.

Now, with an enterprise-grade PC powered by an NVIDIA RTX 6000 graphics card, he can render the same 4K frame within 30 minutes. NVIDIA RTX provides Chou with enhanced efficiency and performance, which allow him to achieve real-time rendering of final images.

“I was thrilled by the performance of RTX technology — it’s more powerful, allowing me to establish a competitive edge in the industry by making real-time ray tracing come true,” said Chou.

Looking Around Unbound With CloudXR

To show off these dazzling 3D visuals to customers, TCImage uses CloudXR.

With this extended reality streaming technology, Chou and his team can share projects inside an immersive and seamless experience, allowing them to efficiently communicate project designs to customers. The team can also present their designs from any location, as they can stream the untethered XR experiences from the cloud.

Built on RTX technology, CloudXR enables TCImage to stream high-resolution, real-time graphics and provide a more interactive experience for clients. NVIDIA DLSS also improves the XR experience by rendering more frames per second, which is especially helpful during the design review process.

With NVIDIA DLSS, TCImage can tap into the power of AI to boost frame rates and create sharp images for the XR environment. This helps the designers and clients see a preview of the 3D model with minimal latency as the user moves and rotates inside the environment.

“By using NVIDIA CloudXR, I can freely and easily present my projects, artwork and portfolio to customers anytime, anywhere while maintaining the best quality of content,” said Chou. “I can even edit the content in real time, based on the customers’ requirements.”

According to Chou, TCImage clients who have experienced the improved workflow were impressed by how much time and cost savings the new technology has provided. It’s also created more business opportunities for the firm.

Designing Buildings in Virtual Worlds

TCImage has started to explore design workflows in the virtual world with NVIDIA Omniverse, a platform for 3D simulation and design collaboration. In addition to using real-time ray tracing and DLSS in Omniverse, Chou played around with optimizing his virtual scenes with the Omniverse Create and Omniverse View applications.

“Omniverse is flexible enough to integrate with major graphics software, as well as allow instantaneous content updates and changes without any extra effort by our team,” said Chou.

In Omniverse Create, Chou can enhance creative workflows by connecting to leading applications to produce architectural designs. He also uses existing materials in Omniverse, such as grass brush samples, to create exterior landscapes and vegetation.

And with Omniverse View, Chou uses lighting tools such as Sun Study, which allows him to review designs with accurate sunlight.

Learn more about TCImage and check out Chou’s recent tutorial in Omniverse:

The post Renovations to Stream About: Taiwan Studio Showcases Architectural Designs Using Extended Reality appeared first on The Official NVIDIA Blog.


Train Spotting: Startup Gets on Track With AI and NVIDIA Jetson to Ensure Safety, Cost Savings for Railways

Preventable train accidents like the 1985 disaster outside Tel Aviv in which a train collided with a school bus, killing 19 students and several adults, motivated Shahar Hania and Elen Katz to help save lives with technology.

They founded Rail Vision, an Israeli startup that creates obstacle-detection and classification systems for the global railway industry.

The systems use advanced electro-optic sensors to alert train drivers and railway control centers when a train approaches potential obstacles — like humans, vehicles, animals or other objects — in real time, and in all weather and lighting conditions.

Rail Vision is a member of NVIDIA Inception — a program designed to nurture cutting-edge startups — and an NVIDIA Metropolis partner. The company uses the NVIDIA Jetson AGX Xavier edge AI platform, which provides GPU-accelerated computing in a compact and energy-efficient module, and the NVIDIA TensorRT software development kit for high-performance deep learning inference.

Pulling the Brakes in Real Time

A train’s braking distance — or the distance a train travels between when its brakes are pulled and when it comes to a complete stop — is usually so long that by the time a driver spots a railway obstacle, it could be too late to do anything about it.

For example, the braking distance for a train traveling 100 miles per hour is 800 meters, or about a half-mile, according to Hania. Rail Vision systems can detect objects on and along tracks from up to two kilometers, or 1.25 miles, away.

By sending alerts, both visual and acoustic, of potential obstacles in real time, Rail Vision systems give drivers over 20 seconds to respond and make decisions on braking.
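The numbers above hang together: a back-of-the-envelope check, using only the figures quoted in the article (the 800-meter braking distance and two-kilometer detection range are the article's round numbers, not exact specifications), shows where the 20-plus seconds come from.

```python
# Sanity-check of the decision window described above.
# Figures are from the article; the calculation itself is illustrative.

MPH_TO_MS = 0.44704  # miles per hour -> meters per second

def decision_window_s(speed_mph: float, detection_m: float, braking_m: float) -> float:
    """Seconds available to react before braking must begin."""
    speed_ms = speed_mph * MPH_TO_MS
    return (detection_m - braking_m) / speed_ms

# Train at 100 mph, obstacle detected 2 km out, braking distance 800 m.
window = decision_window_s(100, 2_000, 800)
print(f"{window:.1f} seconds to decide")  # roughly 27 seconds, consistent with "over 20"
```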

The systems can also be integrated with a train’s infrastructure to automatically apply brakes when an obstacle is detected, even without a driver’s cue.

“Tons of deep learning inference possibilities are made possible with NVIDIA GPU technology,” Hania said. “The main advantage of using the NVIDIA Jetson platform is that there are lots of goodies inside — compressors, modules for optical flow — that all speed up the embedding process and make our systems more accurate.”

Boosting Maintenance, in Addition to Safety

In addition to preventing accidents, Rail Vision systems help save operational time and costs spent on railway maintenance — which can be as high as $50 billion annually, according to Hania.

If a railroad accident occurs, four to eight hours are typically spent handling the situation — which prevents other trains from using the track, said Hania.

Rail Vision systems use AI to monitor the tracks and prevent such slowdowns, or to quickly alert operators when they do occur — giving them time to find alternate routes or plans of action.

The systems are scalable and deployable for different use cases — with some focused solely on these maintenance aspects of railway operations.

Watch a Rail Vision system at work.

The post Train Spotting: Startup Gets on Track With AI and NVIDIA Jetson to Ensure Safety, Cost Savings for Railways appeared first on The Official NVIDIA Blog.


How Smart Hospital Technology Can Help Cut Down on Medical Errors

Despite the feats of modern medicine, as many as 250,000 Americans die from medical errors each year — more than six times the number killed in car accidents.

Smart hospital AI can help avoid some of these fatalities in healthcare, just as computer vision-based driver assistance systems can improve road safety, according to AI leader Fei-Fei Li.

Whether through surgical instrument omission, a wrong drug prescription or a patient safety issue when clinicians aren’t present, “there’s just all kinds of errors that could be introduced, unintended, despite protocols that have been put together to avoid them,” said Li, computer science professor and co-director of the Stanford Institute for Human-Centered Artificial Intelligence, in a talk at the recent NVIDIA GTC. “Humans are still humans.”

By endowing healthcare spaces with smart sensors and machine learning algorithms, Li said, clinicians can help cut down medical errors and provide better patient care.

“We have to make sense of what we sense” with sensor data, said Li. “This brings in machine learning and deep learning algorithms that can turn sensed data into medical insights that are really important to keep our patients safe.”

To hear from other experts in deep learning and medicine, register free for the next GTC, running online March 21-24. GTC features talks from dozens of healthcare researchers and innovators harnessing AI for smart hospitals, drug discovery, genomics and more.

Sensor Solutions Bring Ambient Intelligence to Clinicians

Li’s interest in AI for healthcare delivery was sparked a decade ago when she was caring for a sick parent.

“The more I spent my time in ICUs and hospital rooms and even at home caring for my family, the more I saw the analogy between self-driving technology and healthcare delivery,” she said.

Her vision of sensor-driven “ambient intelligence,” outlined in a Nature paper, covers both the hospital and the home. It offers insights in operating rooms as well as the daily living spaces of individuals with chronic disease.

For example, ICU patients need a certain amount of movement to help their recovery. To ensure that patients are getting the right amount of mobility, researchers are developing smart sensor systems to automatically tag patient movements and understand their mobility levels while in critical care.

Another project used depth sensors and convolutional neural networks to assess whether clinicians were properly using hand sanitizer when entering and exiting patient rooms.

Outside of the hospital, as the global population continues to age, wearable sensors can help ensure seniors are aging healthily by monitoring mobility, sleep and medicine compliance.

The next challenge, Li said, is advancing computer vision to classify more complex human movement.

“We’re not content with these coarse activities like walking and sleeping,” she said. “What’s more important clinically are fine-grained activities.”

Protecting Patient, Caregiver Privacy 

When designing smart hospital technology, Li said, it’s important that developers prioritize privacy and security of patients, clinicians and caretakers.

“From a computer vision point of view, blurring and masking has become more and more important when it comes to human signals,” she said. “These are really important ways to mitigate private information and personal identity from being inadvertently leaked.”

In the field of data privacy, Li said, federated learning is another promising solution to protect confidential information.

Throughout the process of developing AI for healthcare, she said, developers must take a multi-stakeholder approach, involving patients, clinicians, bioethicists and government agencies in a collaborative environment.

“At the end of the day, healthcare is about humans caring for humans,” said Li. “This technology should not replace our caretakers, replace our families or replace our nurses and doctors. It’s here to augment and enhance humanity and give more dignity back to our patients.”

Watch the full talk on NVIDIA On-Demand, and sign up for GTC to learn about the latest in AI and healthcare.

The post How Smart Hospital Technology Can Help Cut Down on Medical Errors appeared first on The Official NVIDIA Blog.


Nearly 80 Percent of Financial Firms Use AI to Improve Services, Reduce Fraud

From the largest firms trading on Wall Street to banks providing customers with fraud protection to fintechs recommending best-fit products to consumers, AI is driving innovation across the financial services industry.

New research from NVIDIA found that 78 percent of financial services professionals state that their company uses accelerated computing to deliver AI-enabled applications through machine learning, deep learning or high performance computing.

The survey results, detailed in NVIDIA’s “State of AI in Financial Services” report, are based on responses from over 500 C-suite executives, developers, data scientists, engineers and IT teams working in financial services.

AI Prevents Fraud, Boosts Investments

With more than 70 billion real-time payment transactions processed globally in 2020, financial institutions need robust systems to prevent fraud and reduce costs. Accordingly, fraud detection involving payments and transactions was the top AI use case across all respondents at 31 percent, followed by conversational AI at 28 percent and algorithmic trading at 27 percent.

There was a dramatic increase in the percentage of financial institutions investing in AI use cases year-over-year. AI for underwriting increased fourfold, from 3 percent penetration in 2021 to 12 percent this year. Conversational AI jumped from 8 to 28 percent year-over-year, a 3.5x rise.

Meanwhile, AI-enabled applications for fraud detection, know your customer (KYC) and anti-money laundering (AML) all experienced growth of at least 300 percent in the latest survey. Nine of 13 use cases are now utilized by over 15 percent of financial services firms, whereas none of the use cases exceeded that penetration mark in last year’s report.
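The growth multiples quoted above are simple ratios of the survey percentages; reproducing them makes the arithmetic explicit:

```python
# Year-over-year growth multiples from the survey percentages quoted in the text
# (underwriting 3% -> 12%, conversational AI 8% -> 28%).

def growth_multiple(last_year_pct: float, this_year_pct: float) -> float:
    """Ratio of this year's adoption percentage to last year's."""
    return this_year_pct / last_year_pct

print(growth_multiple(3, 12))   # 4.0 -> "increased fourfold"
print(growth_multiple(8, 28))   # 3.5 -> "a 3.5x rise"
```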

Future investment plans remain steady for top AI cases, with enterprise investment priorities for the next six to 12 months marked in green.

Top Current AI Use Cases in Financial Services (Ranked by Industry Sector)
Green highlighted text signifies top AI use cases for investment in the next six to 12 months.

Overcoming AI Challenges

Financial services professionals highlighted the main benefits of AI in yielding more accurate models, creating a competitive advantage and improving customer experience. Overall, 47 percent said that AI enables more accurate models for applications such as fraud detection, risk calculation and product recommendations.

However, there are challenges in achieving a company’s AI goals. Only 16 percent of survey respondents agreed that their company is spending the right amount of money on AI, and 37 percent believed “lack of budget” is the primary challenge in achieving their AI goals. Additional obstacles included too few data scientists, lack of data, and explainability, with a third of respondents listing each option.

Financial institutions such as Munich Re, Scotiabank and Wells Fargo have developed explainable AI models to explain lending decisions and construct diversified portfolios.

Biggest Challenges in Achieving Your Company’s AI Goals (by Role)

Cybersecurity, data sovereignty, data gravity and the option to deploy on-prem, in the cloud or using hybrid cloud are areas of focus for financial services companies as they consider where to host their AI infrastructure. These preferences are extrapolated from responses to where companies are running most of their AI projects, with over three-quarters of the market operating on either on-prem or hybrid instances.

Where Financial Services Companies Run Their AI Workloads

Executives Believe AI Is Key to Business Success

Over half of C-suite respondents agreed that AI is important to their company’s future success. The top total responses to the question “How does your company plan to invest in AI technologies in the future?” were:

  1. Hiring more AI experts (43 percent)
  2. Identifying additional AI use cases (36 percent)
  3. Engaging third-party partners to accelerate AI adoption (36 percent)
  4. Spending more on infrastructure (36 percent)
  5. Providing AI training to staff (32 percent)

However, only 23 percent of those surveyed believed their company has the capability and knowledge to move an AI project from research to production. This indicates the need for an end-to-end platform to develop, deploy and manage AI in enterprise applications.

Read the full “State of AI in Financial Services 2022” report to learn more.

Explore NVIDIA’s AI solutions and enterprise-level AI platforms driving the future of financial services.

The post Nearly 80 Percent of Financial Firms Use AI to Improve Services, Reduce Fraud appeared first on The Official NVIDIA Blog.


Let Me Upgrade You: GeForce NOW Adds Resolution Upscaling and More This GFN Thursday

GeForce NOW is taking cloud gaming to new heights.

This GFN Thursday delivers an upgraded streaming experience as part of an update that is now available to all members. It includes new resolution upscaling options to make members’ gaming experiences sharper, plus the ability to customize streaming settings in session.

The GeForce NOW app is fully releasing on select LG TVs, following a successful beta. To celebrate the launch, for a limited time, those who purchase a qualifying LG TV will also receive a six-month Priority membership to kickstart their cloud gaming experience.

Additionally, this week brings five games to the GeForce NOW library.

Upscale Your Gaming Experience

The newest GeForce NOW update delivers new resolution upscaling options — including an AI-powered option for members with select NVIDIA GPUs.

Resolution upscaling update on GeForce NOW
New year, new options for January’s GeForce NOW update.

This feature, now available to all members with the 2.0.37 update, gives gamers with network bandwidth limitations or higher-resolution displays sharper graphics that match the native resolution of their monitor or laptop.

Resolution upscaling works by applying sharpening effects that reduce visible blurriness while streaming. It can be applied to any game and enabled via the GeForce NOW settings in native PC and Mac apps.

Three upscaling modes are now available. Standard is enabled by default and has minimal impact on system performance. Enhanced provides a higher quality upscale, but may cause some latency depending on your system specifications. AI Enhanced, available to members playing on PC with select NVIDIA GPUs and SHIELD TVs, leverages a trained neural network model along with image sharpening for a more natural look. These new options can be adjusted mid-session.
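NVIDIA hasn't published the internals of these modes, but the "sharpening effects that reduce visible blurriness" described above can be illustrated with a classic unsharp mask, shown here on a single scanline of pixel values. This is an invented toy, not GeForce NOW's actual filter.

```python
# Toy unsharp mask: blur the signal, then add back the high-frequency
# detail the blur removed. Real upscalers work on full frames (and the
# AI Enhanced mode uses a trained neural network instead), but the
# sharpening principle is the same.

def box_blur(signal, k=3):
    """Naive moving-average blur over a 1-D scanline (edge-padded)."""
    pad = k // 2
    padded = [signal[0]] * pad + list(signal) + [signal[-1]] * pad
    return [sum(padded[i:i + k]) / k for i in range(len(signal))]

def unsharp_mask(signal, amount=0.6):
    """Sharpen by amplifying the difference from the blurred signal, clamped to [0, 1]."""
    blurred = box_blur(signal)
    return [min(1.0, max(0.0, s + amount * (s - b)))
            for s, b in zip(signal, blurred)]

scanline = [0.2, 0.2, 0.2, 0.2, 0.8, 0.8, 0.8, 0.8]  # a soft edge
print(unsharp_mask(scanline))  # the step from index 3 to 4 becomes steeper
```

The sharpened edge has higher local contrast than the input, which is what makes an upscaled stream look less blurry.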

Learn more about the new resolution upscaling options.

Stream Your Way With Custom Settings

The upgrade brings some additional benefits to members.

Custom streaming quality settings on the PC and Mac apps have been a popular way for members to take control of their streams — including bit rate, VSync and now the new upscaling modes. The update now enables members to adjust some of the streaming quality settings in session using the GeForce NOW in-game overlay. Bring up the overlay by pressing Ctrl+G, then navigate to Settings > Gameplay to access these settings while streaming.

The update also comes with an improved web-streaming experience on play.geforcenow.com by automatically assigning the ideal streaming resolution for devices that are unable to decode at high streaming bitrates. Finally, there’s also a fix for launching directly into games from desktop shortcuts.

LG Has Got Game With the GeForce NOW App

LG Electronics, the first TV manufacturer to release the GeForce NOW app in beta, is now bringing cloud gaming to several LG TVs at full force.

Owners of LG 2021 4K TV models including OLED, QNED, NanoCell and UHD TVs can now download the fully launched GeForce NOW app in the LG Content Store. The experience requires a gamepad and gives gamers instant access to nearly 35 free-to-play games, like Apex Legends and Destiny 2, as well as more than 800 PC titles from popular digital stores like Steam, Epic Games Store, Ubisoft Connect and Origin.

The GeForce NOW app on LG OLED TVs delivers responsive gameplay and gorgeous, high-quality graphics at 1080p and 60 frames per second. On these select LG TVs, with nothing more than a gamepad, you can enjoy stunning ray-traced graphics and AI technologies with NVIDIA RTX ON. Learn more about support for the app for LG TVs on the system requirements page under LG TV.

GeForce NOW app for LG
Get your game on directly through an LG TV with a six-month GeForce NOW Priority membership.

In celebration of the app’s full launch and the expansion of devices supported by GeForce NOW, qualifying LG purchases from Feb. 1 to March 27 in the United States come bundled with a sweet six-month Priority membership to the service.

Priority members experience legendary GeForce PC gaming across all of their devices, as well as benefits including priority access to gaming servers, extended session lengths and RTX ON for cinematic-quality in-game graphics.

To collect a free six-month Priority membership, purchase a qualifying LG TV and submit a claim. Upon claim approval, you’ll receive a GeForce NOW promo code via email. Create an NVIDIA account for free or sign in to your existing GeForce NOW account to redeem the gifted membership.

This offer is available to those who purchase applicable 2021 model LG 4K TVs in select markets during the promotional period. Current GeForce NOW promotional members are not eligible for this offer. Availability and deadline to claim free membership varies by market. Consult LG’s official country website, starting Feb. 1, for full details. Terms and conditions apply.

It’s Playtime

Mortal Online 2 on GeForce NOW
Explore a massive open world and choose your own path in Mortal Online 2.

Start your weekend with the following five titles coming to the cloud this week:

While you kick off your weekend with gaming fun, we’ve got a question for you this week:

The post Let Me Upgrade You: GeForce NOW Adds Resolution Upscaling and More This GFN Thursday appeared first on The Official NVIDIA Blog.


Hatch Me If You Can: Startup’s Sorting Machines Use AI to Protect Healthy Fish Eggs

Fisheries collect millions upon millions of fish eggs, protecting them from predators to increase fish yield and support the propagation of endangered species — but an issue with gathering so many eggs at once is that those infected with parasites can put healthy ones at risk.

Jensorter, an Oregon-based startup, has created AI-powered fish egg sorters that can rapidly identify healthy versus unhealthy eggs. The machines, built on the NVIDIA Jetson Nano module, can also detect egg characteristics such as size and fertility status.

The devices then automatically sort the eggs based on these characteristics, allowing Jensorter’s customers in Alaska, the Pacific Northwest and Russia to quickly separate viable eggs from unhealthy ones — and protect them accordingly.

Jensorter is a member of NVIDIA Inception, a program that nurtures cutting-edge startups revolutionizing industries with advancements in AI, data science, high performance computing and more.

Picking Out the Good Eggs

According to Curt Edmondson, patent counsel and CTO of Jensorter, many fisheries aim to quickly dispose of unhealthy eggs to lower the risk of infecting healthy ones.

Using AI, Jensorter machines look at characteristics like color to discern an egg’s health status and determine whether it’s fertilized — at a speed of about 30 milliseconds per egg.
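Jensorter's actual model is proprietary, but the color-based check described above can be illustrated with a crude heuristic: healthy salmonid eggs trend orange-red, while dead or infected eggs often turn pale and opaque. Everything below, from the function name to the thresholds, is invented for the sketch.

```python
# Hypothetical illustration only: a threshold rule on an egg image's
# mean RGB color (values in [0, 1]). A production system would use a
# trained classifier on full images, not a two-line rule.

def classify_egg(mean_rgb):
    """Label an egg from its mean color: strongly red-dominant reads as
    healthy; pale, washed-out color reads as a reject."""
    r, g, b = mean_rgb
    if r > 0.5 and r > 1.5 * b:
        return "healthy"
    return "reject"

print(classify_egg((0.8, 0.4, 0.2)))   # orange-red egg -> healthy
print(classify_egg((0.9, 0.9, 0.85)))  # pale, whitish egg -> reject
```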

“Our fish egg sorters are achieving a much higher accuracy with the addition of AI powered by NVIDIA Jetson, which is allowing us to create advanced capabilities,” Edmondson said.

The startup offers several machines, each tailored to varying volumes of eggs to be sorted. The Model JH device, optimal for egg volumes of three to 10 million, can sort nearly 200,000 eggs per hour, eliminating the slow and laborious process of hand-picking.

“Using AI to capture and process images of eggs in real time could have great value over the long term,” Edmondson said. “If hatcheries come together and centralize their images in a database, we could identify patterns of egg characteristics that lead to healthy eggs.”

This could help propagate salmon and trout, species that play important roles in their ecosystems and are common food sources for humans, and which are on the decline in many areas, he added.

The Oregon Hatchery Research Center recently used Jensorter devices to conduct an alpha test examining whether smaller eggs lead to healthier fish. In the spring, the center will use the machines to proceed with beta testing in hatcheries, before publishing study results.

Jensorter also plans to create next-generation sorters that are faster still and can detect, count and separate eggs based on their sex, number of zygotes and other metrics that would be useful to fisheries.

Watch a tutorial on how Jensorter equipment works and learn more about NVIDIA Inception.

The post Hatch Me If You Can: Startup’s Sorting Machines Use AI to Protect Healthy Fish Eggs appeared first on The Official NVIDIA Blog.


UK Biobank Advances Genomics Research with NVIDIA Clara Parabricks

UK Biobank is broadening scientists’ access to high-quality genomic data and analysis by making its massive dataset available in the cloud alongside NVIDIA GPU-accelerated analysis tools.

Used by more than 25,000 registered researchers around the world, UK Biobank is a large-scale biomedical database and research resource with deidentified genetic datasets, along with medical imaging and health record data, from more than 500,000 participants across the U.K.

Regeneron Genetics Center, the high-throughput sequencing center of biotech leader Regeneron, recently teamed up with UK Biobank to sequence and analyze the exomes — all protein-coding portions of the genome — of all the biobank participants.

The Regeneron team used NVIDIA Clara Parabricks, a software suite for secondary genomic analysis of next-generation sequencing data, during the exome sequencing process.

UK Biobank has released 450,000 of these exomes for access by approved researchers, and is now providing scientists six months of free access to Clara Parabricks through its cloud-based Research Analysis Platform. The platform, developed by bioinformatics company DNAnexus, lets scientists use Clara Parabricks running on NVIDIA GPUs in the AWS cloud.

“As demonstrated by Regeneron, GPU acceleration with Clara Parabricks achieves the throughputs, speed and reproducibility needed when processing genomic datasets at scale,” said Dr. Mark Effingham, deputy CEO of UK Biobank. “There are a number of research groups in the U.K. who were pushing for these accelerated tools to be available in our platform for use with our extensive dataset.”

Regeneron Exome Research Accelerated by Clara Parabricks

Regeneron’s researchers used the DeepVariant Germline Pipeline from NVIDIA Clara Parabricks to run their analysis with a model specific to the genetic center’s workflow.

Its researchers identified 12 million coding variants and hundreds of genes associated with health-related traits — certain genes were associated with increased risk for liver disease and eye disease, and others were linked to lower risk of diabetes and asthma.

The unique set of tools the researchers used for high-quality variant detection is available to UK Biobank registered users through the Research Analysis Platform. This capability will allow scientists to harmonize their own exome data with sequenced exome data from UK Biobank by running the same bioinformatics pipeline used to generate the initial reference dataset.

Cloud-Based Platform Improves Equity of Access

Researchers deciphering the genetic codes of humans — and of the viruses and bacteria that infect humans — can often be limited by the computational resources available to them.

UK Biobank is democratizing access by making its dataset open to scientists around the world, with a focus on further extending use by early-career researchers and those in low- and middle-income countries. Rather than downloading this huge dataset to use on their own compute resources, researchers can tap into UK Biobank’s cloud platform through a web browser.

“We were being contacted by researchers and clinicians who wanted to access UK Biobank data, but were struggling with access to the basic compute needed to work with even relatively small-scale data,” said Effingham. “The cloud-based platform provides access to the world-class technology needed for large-scale exome sequencing and whole genome sequencing analysis.”

Researchers using the platform pay only for the computational cost of their analyses and for storage of new data they generate from the biobank’s petabyte-scale dataset, Effingham said.

Using Clara Parabricks on DNAnexus helps reduce both the time and cost of this genomic analysis: a whole exome analysis that would take nearly an hour of computation on a 32-vCPU machine completes in less than five minutes, while cost drops by approximately 40 percent.
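As a rough illustration, a GPU-accelerated germline analysis with Clara Parabricks is launched from the command line with the `pbrun` tool; the reference and sample file names below are hypothetical placeholders, and exact flags may vary by Parabricks version:

```shell
# Hypothetical sketch of a Clara Parabricks DeepVariant germline run
# (file names are placeholders). The pipeline aligns paired-end FASTQ
# reads and calls variants with DeepVariant on the GPU in one step.
pbrun deepvariant_germline \
    --ref GRCh38.fasta \
    --in-fq sample_R1.fastq.gz sample_R2.fastq.gz \
    --out-bam sample.bam \
    --out-variants sample.vcf
```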

Exome Sequencing Provides Insights for Precision Medicine

For researchers studying links between genetics and disease, exome sequencing is a critical tool — and the UK Biobank dataset includes nearly half a million participant exomes to work with.

The exome is approximately 1.5 percent of the human genome and consists of the protein-coding portions of all known genes. By studying genetic variation in exomes across a large, diverse population, scientists can better understand the population’s structure, helping researchers address evolutionary questions and describe how the genome works.

With a dataset as large as UK Biobank’s, it is also possible to identify the specific genetic variants associated with inherited diseases, including cardiovascular disease, neurodegenerative conditions and some kinds of cancer.

Exome sequencing can even shed light on potential genetic drivers that might increase or decrease an individual’s risk of severe disease from COVID-19 infection, Effingham said. As the pandemic continues, UK Biobank is adding COVID case data, vaccination status, imaging data and patient outcomes for thousands of participants to its database.

Get started with NVIDIA Clara Parabricks on the DNAnexus-developed UK Biobank Research Analysis Platform. Learn more about the exome sequencing project by registering for this webinar, which takes place Feb. 17 at 8 a.m. Pacific.

Subscribe to NVIDIA healthcare news here.

Main image shows the freezer facility at UK Biobank where participant samples are stored. Image courtesy of UK Biobank. 

The post UK Biobank Advances Genomics Research with NVIDIA Clara Parabricks appeared first on The Official NVIDIA Blog.


Animator Lets 3D Characters Get Their Groove on With NVIDIA Omniverse and Reallusion

Editor’s note: This post is a part of our Meet the Omnivore series, which features individual creators and developers who use NVIDIA Omniverse to boost their artistic or engineering processes.

Benny Dee

Benjamin Sokomba Dazhi, aka Benny Dee, has learned the ins and outs of the entertainment industry from many angles — first as a rapper, then as a music video director and now as a full-time animator.

After eight years of self-teaching, Dazhi has mastered the art of animation — landing roles as head animator for the film The Legend of Oronpoto, and as creator and director of the Cartoon Network Africa Dance Challenge, a series of dance-along animations that teaches children African-inspired choreography.

Based in north-central Nigeria, Dazhi is building a team for his indie animation studio, JUST ART, which creates animation films focused on action, sci-fi, horror and humor.

Dazhi uses NVIDIA Omniverse — a physically accurate 3D design collaboration platform available with RTX-powered GPUs and part of the NVIDIA Studio suite of tools for creators — with Reallusion’s iClone and Character Creator to supercharge his artistic workflow.

He uses Omniverse Connectors for Reallusion apps for character and prop creation and animation, set dressing and cinematics.

Music, Movies and Masterful Rendering

From animated music videos to clips for action films, Dazhi has a multitude of projects — and accompanying deadlines.

“The main challenges I faced when trying to meet deadlines were long render times and difficulties with software compatibility, but using an Omniverse Connector for Reallusion’s iClone app has been game-changing for my workflow,” he said.

Using Omniverse, Dazhi accomplishes lighting and materials setup, rendering, simulation and post-production processes.

With these tools, it took Dazhi just four minutes to render this clip of a flying car — a task, he said, that would have otherwise taken hours.

“The rendering speed and photorealistic output quality of Omniverse is a breakthrough — and Omniverse apps like Create and Machinima are very user-friendly,” he said.

Such 3D graphics tools are especially important for the development of indie artists, Dazhi added.

“In Nigeria, there are very few animation studios, but we are beginning to grow in number thanks to easy-to-use tools like Reallusion’s iClone, which is the main animation software I use,” he said.

Dazhi plans to soon expand his studio, working with other indie artists via Omniverse’s real-time collaboration feature. Through his films, he hopes to show viewers “that it’s more than possible to make high-end content as an indie artist or small company.”

See Dazhi’s work in the NVIDIA Omniverse Gallery, and hear more about his creative workflow live during a Twitch stream on Jan. 26 at 11 a.m. Pacific.

Creators can download NVIDIA Omniverse for free and get started with step-by-step tutorials on the Omniverse YouTube channel. For additional resources and inspiration, follow Omniverse on Instagram, Twitter and Medium. To chat with the community, check out the Omniverse forums and join our Discord Server.

The post Animator Lets 3D Characters Get Their Groove on With NVIDIA Omniverse and Reallusion appeared first on The Official NVIDIA Blog.


Vulkan Fan? Six Reasons to Run It on NVIDIA

Many different platforms, same great performance. That’s why Vulkan is a very big deal.

With the release Tuesday of Vulkan 1.3, NVIDIA continues its unparalleled record of day one driver support for this cross-platform GPU application programming interface for 3D graphics and computing.

Vulkan has been created by experts from across the industry working together at the Khronos Group, an open standards consortium. From the start, NVIDIA has worked to advance this effort. NVIDIA’s Neil Trevett has been Khronos president since its earliest days.

“NVIDIA has consistently been at the forefront of computer graphics with new, enhanced tools and technologies for developers to create rich game experiences,” said Jon Peddie, president of Jon Peddie Research.

“Their guidance and support for Vulkan 1.3 development, and release of a new compatible driver on day one across NVIDIA GPUs contributes to the successful cross-platform functionality and performance for games and apps this new API will bring,” he said.

With a simpler, thinner driver and efficient CPU multi-threading capabilities, Vulkan has less latency and overhead than alternatives, such as OpenGL or older versions of Direct3D.

If you use Vulkan, NVIDIA GPUs are a no-brainer. Here’s why:

  1. NVIDIA consistently provides industry leadership to evolve new Vulkan functionality and is often the first to make leading-edge computer graphics techniques available to developers. This ensures cutting-edge titles are supported on Vulkan and, by extension, made available to more gamers.
  2. NVIDIA designs hardware to provide the fastest Vulkan performance for your games and applications. For example, NVIDIA GPUs perform up to 30 percent faster than the nearest competition on games such as Doom Eternal with advanced rendering techniques such as ray tracing.
  3. NVIDIA provides the broadest range of Vulkan functionality to ensure you can run the games and apps that you want and need. NVIDIA’s production drivers support advanced features such as ray tracing and DLSS AI rendering across multiple platforms, including Windows and popular Linux distributions like Ubuntu, Kylin and RHEL.
  4. NVIDIA works hard to be the platform of choice for Vulkan development with tools that are often the first to support the latest Vulkan functionality, encouraging apps and games to be optimized first for NVIDIA. NVIDIA Nsight, our suite of development tools, has integrated support for Vulkan, including debugging and optimizing of applications using full ray tracing functionality. NVIDIA also provides extensive Vulkan code samples, tutorials and best practice guidance so developers can get the very best performance from their code.
  5. NVIDIA makes Vulkan available across a wider range of platforms and hardware than anyone else for easier cross-platform portability. NVIDIA ships Vulkan on PCs, embedded platforms, automotive and the data center. And gamers enjoy ongoing support of the latest Vulkan API changes with older GPUs.
  6. NVIDIA aims to bulletproof your games with highly reliable game-ready drivers. NVIDIA treats Vulkan as a first-class citizen API with focused development and support. In fact, developers can download our zero-day Vulkan 1.3 drivers right now at https://developer.nvidia.com/vulkan-driver.

Look for more details about our commitment and leadership in Vulkan on NVIDIA’s Vulkan web page. And if you’re not already a member of NVIDIA’s Developer Program, sign up. Developers can download new tools and drivers from NVIDIA for Vulkan 1.3 today. 

The post Vulkan Fan? Six Reasons to Run It on NVIDIA appeared first on The Official NVIDIA Blog.
