Startup Taps Finance Micromodels for Data Annotation Automation

After meeting at an entrepreneur matchmaking event, Ulrik Hansen and Eric Landau teamed up to parlay their experience in financial trading systems into a platform for faster data labeling.

In 2020, the pair of finance industry veterans founded Encord to adapt the micromodels common in finance to automated data annotation. Micromodels are neural networks that take less time to deploy because they’re trained on less data and built for narrow, specific tasks.

Encord’s NVIDIA GPU-driven service promises to automate as much as 99 percent of businesses’ manual data labeling with its micromodels.

“Instead of building one big model that does everything, we’re just combining a lot of smaller models together, and that’s very similar to how a lot of these trading systems work,” said Landau.
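Encord hasn’t published how its micromodels are built, but the pattern Landau describes is easy to sketch. The toy Python example below is purely illustrative (the tiny network, the confidence heuristic and the thresholds are assumptions, not Encord’s implementation): a small, single-task model proposes labels, and only low-confidence frames are routed back to human annotators.

```python
import torch
import torch.nn as nn

class MicroModel(nn.Module):
    """A deliberately small, single-task network: it segments just one
    object class rather than trying to label everything at once."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1),  # per-pixel logit for the single class
        )

    def forward(self, x):
        return self.net(x)

def auto_annotate(model, frames, threshold=0.6):
    """Pre-label frames; route low-confidence ones back to human reviewers."""
    model.eval()
    auto, manual = [], []
    with torch.no_grad():
        for frame in frames:
            probs = torch.sigmoid(model(frame.unsqueeze(0)))
            confidence = (probs - 0.5).abs().mean() * 2  # crude confidence proxy
            (auto if confidence > threshold else manual).append((frame, probs))
    return auto, manual

# Usage: a handful of hand-labeled frames would train the micromodel,
# which then proposes labels for the much larger remaining set.
frames = [torch.rand(3, 128, 128) for _ in range(8)]
auto_labeled, needs_review = auto_annotate(MicroModel(), frames)
print(f"auto-labeled: {len(auto_labeled)}, routed to annotators: {len(needs_review)}")
```

In practice, many such models, one per task, would run side by side, echoing the ensemble-of-small-systems approach common in trading.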

The startup, based in London, recently landed $12.5 million in Series A funding.

Encord is an NVIDIA Metropolis partner and a member of NVIDIA Inception, a program that offers go-to-market support, expertise and technology for AI, data science and HPC startups. NVIDIA Metropolis is an application framework that makes it easier for developers to combine video cameras and sensors with AI-enabled video analytics.

The company said it has attracted business in gastrointestinal endoscopy, radiology, thermal imaging, smart cities, agriculture, autonomous transportation and retail applications.

‘Augmenting Doctors’ for SurgEase

Back in 2021, the partners hunkered down near Laguna Beach, Calif., at the home of Landau’s parents, to build Encord while attending Y Combinator. They had also just landed their first customer, SurgEase.

London-based SurgEase offers telepresence technology for gastroenterology. The company’s hardware device and software enable remote physicians to monitor high-definition images and video captured in colonoscopies.

“You could have a doctor in an emerging economy do the diagnostics or detection, as well as a doctor from one of the very best hospitals in the U.S.,” said Hansen.

To improve diagnostics, SurgEase is also using its video data to train AI models for detection. Encord’s micromodels annotate the video data that feeds SurgEase’s models. The idea is to give doctors a second set of eyes on procedures.

“Encord’s software has been instrumental in aiding us in solving some of the hardest problems in endoscopic disease assessment,” said SurgEase CEO Fareed Iqbal.

With AI-aided diagnostics, clinicians using SurgEase could catch issues sooner, so patients don’t need more severe procedures down the line, said Hansen. And because doctors don’t always agree, another opinion can help cut through the noise, said Landau.

“It’s really augmenting doctors,” said Landau.

King’s College London: 6x Faster

King’s College London needed a way to annotate images in videos of precancerous polyps. Labeling such large datasets with highly skilled clinicians was costly, so it turned to Encord for annotation automation.

The micromodels annotated the data about 6.4x faster than manual labeling, automatically handling about 97 percent of the datasets; the rest still required manual labeling by clinicians.

Encord enabled King’s College London to cut model development time from one year to two months, moving AI into production faster.

Triton: Quickly Into Inference

Encord initially set out to build its own inference engine to run on its API server. But Hansen and Landau decided that using NVIDIA Triton would save significant engineering time and get them into production quickly.

Triton is open-source inference serving software for taking AI into production, simplifying how models from any framework run on any GPU or CPU for all types of inference.

It also let the pair focus on their early customers rather than building inference-serving architecture themselves.

With Triton, people using Encord’s platform can train a micromodel and start running inference almost immediately afterward, Hansen said.

“With Triton, we get the native support for all these machine learning libraries like PyTorch and it’s compatible with CUDA,” said Hansen. “It saved us a lot of time and hassles.”
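The article doesn’t show Encord’s serving code, so the snippet below is only a hedged sketch of what a client request to a Triton server can look like using NVIDIA’s tritonclient Python package; the model name and tensor names are hypothetical placeholders, not Encord identifiers.

```python
import numpy as np
import tritonclient.http as httpclient

# Connect to a Triton Inference Server exposing its HTTP endpoint on port 8000.
client = httpclient.InferenceServerClient(url="localhost:8000")

# One image batch; shape and dtype must match the deployed model's config.
image = np.random.rand(1, 3, 512, 512).astype(np.float32)

inputs = [httpclient.InferInput("INPUT__0", list(image.shape), "FP32")]
inputs[0].set_data_from_numpy(image)
outputs = [httpclient.InferRequestedOutput("OUTPUT__0")]

# "polyp_micromodel" is a placeholder name for illustration only.
result = client.infer(model_name="polyp_micromodel", inputs=inputs, outputs=outputs)
print(result.as_numpy("OUTPUT__0").shape)
```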


Burgers, Fries and a Side of AI: Startup Offers Taste of Drive-Thru Convenience

Eating into open hours and menus, a labor shortage has gobbled up fast-food services employees, but some restaurants are trying out a new staff member to bring back the drive-thru good times: AI.

Toronto startup HuEx is in pilot tests with a conversational AI assistant for drive-thrus to help support service at several popular Canadian chains.

Food service is chronically understaffed, and its jobs have among the highest rates of employee departure, according to the U.S. Bureau of Labor Statistics.

HuEx’s voice service — dubbed AiDA — is helping behind the drive-up window at popular fast-service chains across North America.

AiDA handles order requests from customers at the drive-thru speaker box. Driven by HuEx’s proprietary models running on the NVIDIA Jetson edge AI platform, AiDA transcribes the voice orders to text for staff members to see and serve, and it can reply to customers by voice.

It can understand 300,000-plus product combinations. “Things like ‘coffee with milk, coffee with sugar’ are common, but some people even order coffee with butter — it can handle that, too,” said Anik Seth, founder and CEO of HuEx.

The company is a member of NVIDIA Inception, a program that offers go-to-market support, expertise and technology for AI, data science and HPC startups.

All in the Family

Seth is intimately familiar with fast-service restaurants. He is part of a family business operating multiple quick-service restaurant locations.

He repeatedly saw team members and guests struggling during drive-thru interactions, a common problem he set out to address.

“AiDA’s voice recognition technology is easily handled by the NVIDIA Jetson for real-time interactions, which helps smooth the ordering process,” he said.

Talk AI to Me

The technology, integrated with the existing drive-thru headset system, lets team members hear the orders and jump in to assist if needed.

AiDA, first deployed in 2018, has been used in “thousands of transactions” across deployments in Canada, said Seth.

The system promises to improve service times by taking on the drive-thru while other team members focus on fulfilling orders. Its natural language processing achieves 90 percent accuracy when taking orders, he said.

As new menu items, specials and promotions are introduced, the database is updated constantly to answer questions about them.

“The team is always in the know,” Seth said. “The moment you order a coffee, the AI is taking the order, while simultaneously, there’s a team member fulfilling it.”

Image credit: Robert Penaloza via Unsplash.


Meet the Omnivore: Developer Sleighs Complex Manufacturing Workflows With Digital Twin of Santa’s Workshop

Editor’s note: This is one in a series of Meet the Omnivore posts, featuring individual creators and developers who use the NVIDIA Omniverse 3D simulation and collaboration platform to boost their artistic or engineering processes.

Don’t be fooled by the candy canes, hot cocoa and CEO’s jolly demeanor.

Santa’s workshop is the very model of a 21st-century enterprise: pioneering mass customization and perfecting a worldwide distribution system able to meet almost bottomless global demand.

Michael Wagner

So it makes sense that Michael Wagner, CTO of ipolog, a digital twin software company for assembly and logistics planning, would make a virtual representation, or digital twin, of Santa’s workshop.

Digital twins like Wagner’s “santa-factory” can be used “to map optimal employee paths around a facility, simulate processes like material flow, as well as detect bottlenecks before they occur,” he said.

Wagner built an NVIDIA Omniverse Extension — a tool to use in conjunction with Omniverse apps — for what he calls the science of santa-facturing.

A rendering of the assembly room in Santa’s workshop, created with NVIDIA Omniverse.

Creating the ‘Santa-Facturing’ Extension

To deck the halls of the santa-factory, Wagner needed a virtual environment where he could depict the North Pole, Santa himself, hundreds of elves and millions of toy parts. Omniverse provided the tools to create such a highly detailed environment.

“Omniverse is the only platform that’s able to visualize such a vast amount of components in high fidelity and make the simulation physically accurate,” Wagner said. “My work is a proof of concept — if Omniverse is fit to visualize Santa’s factory, it’s fit to visualize the daily material provisioning load for a real-world automotive factory, for example, which has a similar order of complexity.”

Ipolog recently provided BMW with highly detailed elements like racks and boxes for a digital twin of the automaker’s factory.

With the help of ipolog software and other tools, BMW is creating a digital twin-based factory of the future with NVIDIA Omniverse, which enables the automaker to simulate complex production scenarios taking place in more than 6 million square meters of factory space.

Digital twin simulation speeds output and increases efficiency for BMW’s entire production cycle — from the examination of engineering detail for vehicle parts to the optimization of workflow at the factory-plant level.

Wagner used Omniverse Kit, a toolkit for building Omniverse-native extensions and applications, to create the santa-facturing environment.

The developer is also exploring Omniverse Code — a recently launched app that serves as an integrated development environment for developers to easily build Omniverse extensions, apps or microservices.

“The principle of building on the shoulders of giants is in the DNA of the Omniverse ecosystem and the kit-based environment,” Wagner said. “Existing open-source extensions, which any developer can contribute to, provide a good base from which to start off and quickly create a dedicated app or extension for digital twins.”

Visualizing the ‘Santa-Factory’

Using Omniverse, which includes PhysX — a software development kit that provides advanced physics simulation — Wagner transformed 2D illustrations of the santa-factory into a physically accurate 3D scene. The process was simple, he said. He “piled up a lot of elements and let PhysX work its magic.”
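Wagner’s scene files aren’t included in the post, but the “pile up elements and let PhysX settle them” idea can be sketched with USD’s physics schemas, which Omniverse’s PhysX integration simulates. The paths, counts and file name below are illustrative assumptions, not taken from the santa-factory project.

```python
from pxr import Usd, UsdGeom, UsdPhysics, Gf

# Create a stage with a physics scene; any PhysX-backed runtime such as
# Omniverse will simulate prims carrying the UsdPhysics APIs.
stage = Usd.Stage.CreateNew("toy_pile.usda")
UsdPhysics.Scene.Define(stage, "/World/physicsScene")

# Static ground for the parts to land on: collision only, no rigid body.
ground = UsdGeom.Cube.Define(stage, "/World/ground")
ground.AddScaleOp().Set(Gf.Vec3f(50.0, 50.0, 0.5))
UsdPhysics.CollisionAPI.Apply(ground.GetPrim())

# "Pile up a lot of elements": drop stand-ins for toy parts from above
# and let the physics engine settle them into a believable heap.
for i in range(200):
    part = UsdGeom.Cube.Define(stage, f"/World/part_{i}")
    part.AddTranslateOp().Set(Gf.Vec3d(0.0, 0.0, 5.0 + i * 2.5))
    UsdPhysics.RigidBodyAPI.Apply(part.GetPrim())
    UsdPhysics.CollisionAPI.Apply(part.GetPrim())

stage.GetRootLayer().Save()
```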

A 2D representation of Santa’s workshop turned into a 3D rendering using Omniverse.

To create the glacial North Pole environment, Wagner used the Unreal Engine 4 Omniverse Connector. To bring the trusty elves to life, he brought in animations from Blender. And to convert the huge datasets to Universal Scene Description format, Wagner worked with Germany-based 3D software development company NetAllied Systems.

A rendering of elves tending to reindeer near Santa’s workshop, created with NVIDIA Omniverse.

What better example of material supply and flow in manufacturing than millions of toy parts getting delivered to Santa’s workshop? Watch Wagner’s stunning demo of this, created in Omniverse:

Such use of digital twin simulations, Wagner said, allows manufacturers to visualize and plan their most efficient workflow, often reducing the time it takes to complete a manufacturing project by 30 percent.

Looking forward, Wagner and his team at ipolog plan to create a full suite of apps, extensions and backend services to enable a manufacturing virtual world entirely based on Omniverse.

Learn more about the santa-facturing project and how Wagner uses Omniverse Kit.

Attend Wagner’s session on digital twins for manufacturing at GTC, which will take place March 21-24.

Creators and developers can download NVIDIA Omniverse for free and get started with step-by-step tutorials on the Omniverse YouTube channel. Follow Omniverse on Instagram, Twitter and Medium for additional resources and inspiration. Check out the Omniverse forums and join our Discord Server to chat with the community.


2 Powerful 2 Be Stopped: ‘Dying Light 2 Stay Human’ Arrives on GeForce NOW’s Second Anniversary

Great things come in twos. Techland’s Dying Light 2 Stay Human arrives with RTX ON and is streaming from the cloud tomorrow, Feb. 4.

Plus, in celebration of the second anniversary of GeForce NOW, February is packed full of membership rewards in Eternal Return, World of Warships and more. There are also 30 games joining the GeForce NOW library this month, with four streaming this week.

Take a Bite Out of Dying Light 2

It’s the end of the world. Get ready for Techland’s Dying Light 2 Stay Human, releasing on Feb. 4 on GeForce NOW with RTX ON.

Civilization has fallen to the virus, and The City, one of the last large human sanctuaries, is torn by conflict in a world ravaged by the infected dead. You are a wanderer with the power to determine the fate of The City in your search to learn the truth.

Struggle to survive by using clever thinking, daring parkour, resourceful traps and creative weapons to make helpful allies and overcome enemies with brutal combat. Discover the secrets of a world thrown into darkness and make decisions that determine your destiny. There’s one thing you can never forget — stay human.

We’ve teamed up with Techland to bring real-time ray tracing to Dying Light 2 Stay Human, delivering the highest level of realistic lighting and depth into every scene of the new game.

Hold on to your humanity and charge into ‘Dying Light 2’ with RTX ON.

Dying Light 2 Stay Human features a dynamic day-night cycle that is enhanced with the power of ray tracing, including fully ray-traced lighting throughout. Ray-traced global illumination, reflections and shadows bring the world to life with significantly better diffused lighting and improved shadows, as well as more accurate and crisp reflections.

“We’ve been working with NVIDIA to expand the world of Dying Light 2 with ray-tracing technology so that players can experience our newest game with unprecedented image quality and more immersion than ever,” said Tomasz Szałkowski, rendering director at Techland. “Now, gamers can play Dying Light 2 Stay Human streaming on GeForce NOW to enjoy our game in the best way possible and exactly as intended with the RTX 3080 membership, even when playing on underpowered devices.”

Enter the dark ages and stream Dying Light 2 Stay Human from both Steam and Epic Games Store on GeForce NOW at launch tomorrow.

An Anniversary Full of Rewards

Happy anniversary, members. We’re celebrating GeForce NOW turning two with three rewards for members — including in-game content for Eternal Return and World of Warships.

All aboard! We’re setting sail towards great rewards this month for titles like World of Warships.

Check in on GFN Thursdays throughout February for updates on the upcoming rewards, and make sure you’re opted in by checking the box for Rewards in the GeForce NOW account portal.

Thanks for two years of great gaming on the cloud!

Thanks to the cloud, this past year, over 15 million members around the world have streamed nearly 180 million hours of their favorite games like Apex Legends, Rust and more from the ever-growing GeForce NOW library.

Members have played with the power of the new six-month RTX 3080 memberships, delivering cinematic graphics with RTX ON in supported games like Cyberpunk 2077 and Control. They’ve also experienced gaming with ultra-low latency and maximized eight-hour session lengths across their devices.

It’s All Fun and Games in February

The party doesn’t stop there. It’s also the first GFN Thursday of the month, which means a whole month packed full of games.

There’s a whole lot of new games coming to the cloud in February.

Gear up for the 30 new titles coming to the cloud in February, with four games ready to stream this week:

  • Life is Strange Remastered (New release on Steam, Feb. 1)
  • Life is Strange: Before the Storm Remastered (New release on Steam, Feb. 1)
  • Dying Light 2 Stay Human (New release on Steam and Epic Games Store, Feb. 4)
  • Warm Snow (Steam)

Also coming in February:

  • Werewolf: The Apocalypse – Earthblood (New release on Steam, Feb. 7)
  • Sifu (New release on Epic Games Store, Feb. 8)
  • Diplomacy is Not an Option (New release on Steam, Feb. 9)
  • SpellMaster: The Saga (New release on Steam, Feb. 16)
  • Destiny 2: The Witch Queen Deluxe Edition (New release on Steam, Feb. 22)
  • SCP: Pandemic (New release on Steam, Feb. 22)
  • Martha is Dead (New release on Steam and Epic Games Store, Feb. 24)
  • Ashes of the Singularity: Escalation (Steam)
  • AWAY: The Survival Series (Epic Games Store)
  • Citadel: Forged With Fire (Steam)
  • Escape Simulator (Steam)
  • Galactic Civilizations III (Steam)
  • Haven (Steam)
  • Labyrinthine Dreams (Steam)
  • March of Empires (Steam)
  • Modern Combat 5 (Steam)
  • Parkasaurus (Steam)
  • People Playground (Steam)
  • Police Simulator: Patrol Officers (Steam)
  • Sins of a Solar Empire: Rebellion (Steam)
  • Train Valley 2 (Steam)
  • TROUBLESHOOTER: Abandoned Children (Steam)
  • Truberbrook (Steam and Epic Games Store)
  • Two Worlds Epic Edition (Steam)
  • Valley (Steam)
  • The Vanishing of Ethan Carter (Epic Games Store)

We make every effort to launch games on GeForce NOW as close to their release as possible, but, in some instances, games may not be available immediately.

Extra Games From January

As the cherry on top of the games announced in January, an extra nine made it to the cloud. Don’t miss any of these titles that snuck their way onto GeForce NOW last month:

With another anniversary for the cloud and all of these updates, there’s never too much fun. Share your favorite GeForce NOW memories from the past year and talk to us on Twitter.


Rain or Shine: Radar Vision Sees Through Clouds to Support Emergency Flood Relief

Flooding usually comes with various bad weather conditions, such as thick clouds, heavy rain and blustering winds.

GPU-powered data science systems can now help researchers and emergency flood response teams to see through it all.

John Murray, visiting professor in the Geographic Data Science Lab at the University of Liverpool, developed cuSAR, a platform that can monitor ground conditions using radar data from the European Space Agency.

cuSAR uses the satellite data to create images that portray accurate geographic information about what’s happening on the ground, below the bad weather conditions.

To create the radar vision platform, Murray used the NVIDIA RAPIDS suite of software libraries and the CUDA parallel computing platform, as well as NVIDIA GPUs.

Emergency Flood Response

The platform was originally designed for the property insurance sector, as mortgage and insurance providers need to assess risks that affect properties, including flooding.

Using satellite data in this way requires clear visuals of the ground, but obtaining analyzable images meant potentially waiting weeks for breaks in Britain’s infamous cloud cover. With cuSAR, users can gain insights in near real time.

Use cases for the radar vision platform have now expanded to the safety sector.

The North Wales Regional Emergency Planning Service first contacted the Geographic Data Science Lab for help with serious flooding that occurred in the Dee Valley a couple of years ago. Low, dense clouds hung over the valley, making it impossible for the team to fly helicopters. And drones weren’t able to give a sufficient overview of how the floodplains along the river were behaving.

Using the NVIDIA GPU-powered image analysis platform, Murray was able to provide high-quality renders of the affected areas each day of the flooding. The emergency planning service used this information to allocate its limited resources to critical areas, adjusting its efforts as the flooding progressed.

Last year, the lab provided radar data to monitor a vaccine factory under threat from rising water levels. This time, weather conditions allowed helicopters to fly, and emergency response teams were able to send them to the exact locations from which to best combat the flooding.

Correcting a Distorted View

Creating analyzable images from radar data is no simple task.

Due to the earth’s curvature, the perspective of satellite images is distorted. This distortion needs to be mathematically corrected and overlaid with location data, using a process called rubbersheeting, for precise geolocation.

A typical radar manifest contains half a billion data points, presented as a grid.

An example of the distortion from a radar image in comparison to the location it corresponds to. Courtesy of Fusion Data Science Ltd CC BY-SA 3.0.

“You can’t just take radar data and make an image from it,” said Murray. “There’s a lot of processing and math involved, and that’s where the GPUs come in.”

Murray wrote the code for cuSAR using NVIDIA RAPIDS and Python Numba CUDA to match the radar and location data seamlessly.

Traditional Java or Python code would usually take around 40 minutes to provide an output. Backed by an NVIDIA GPU, it takes only four seconds.
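The cuSAR code itself isn’t public, so the following Numba CUDA kernel is only a sketch of the pattern described here: every radar sample gets a per-point correction that maps it onto grid coordinates. The affine coefficients, array sizes and coordinate ranges are made-up values for illustration; a real rubbersheeting model fits higher-order terms against ground control points.

```python
import numpy as np
from numba import cuda

@cuda.jit
def rubbersheet(lon, lat, c, east, north):
    """Map each (lon, lat) radar sample to corrected grid coordinates."""
    i = cuda.grid(1)
    if i < lon.size:
        east[i] = c[0] + c[1] * lon[i] + c[2] * lat[i]
        north[i] = c[3] + c[4] * lon[i] + c[5] * lat[i]

n = 10_000_000  # a real manifest holds around half a billion points
lon = np.random.uniform(-3.4, -2.9, n).astype(np.float32)  # illustrative area
lat = np.random.uniform(52.9, 53.2, n).astype(np.float32)
coeffs = np.array([400_000.0, 70_000.0, 0.0, 350_000.0, 0.0, 111_000.0])

# Move data to the GPU, launch one thread per sample, copy results back.
d_lon, d_lat, d_c = cuda.to_device(lon), cuda.to_device(lat), cuda.to_device(coeffs)
d_east = cuda.device_array(n, dtype=np.float64)
d_north = cuda.device_array(n, dtype=np.float64)

threads = 256
blocks = (n + threads - 1) // threads
rubbersheet[blocks, threads](d_lon, d_lat, d_c, d_east, d_north)

east, north = d_east.copy_to_host(), d_north.copy_to_host()
print(east[:3], north[:3])
```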

Once the data has been processed, the platform outputs an image with accurate geographic information that corresponds to Ordnance Survey grid coordinates.

Within 15 minutes of the satellite data being received, the resulting imagery can be placed in the hands of emergency relief teams, giving them the knowledge to react effectively to a rapidly evolving situation on the ground.

Flood Protection for the Future

In the last decade, the U.K. has seen several of its wettest months on record. Notably, 2020 was the first year on record to land in the top 10 for all three key weather rankings — warmest, wettest and sunniest. The Met Office predicts that severe flash flooding could be nearly five times more likely in 50 years’ time.

Technology like cuSAR enables researchers and emergency responders to monitor and react to disasters in a timely manner, protecting homes and businesses that are most vulnerable to worsening weather conditions.

Learn more about technology breakthroughs at GTC, running March 21-24.  

Feature image courtesy of Copernicus Sentinel data, processed by ESA CC BY-SA 3.0.


How Audio Analytic Is Teaching Machines to Listen

From active noise cancellation to digital assistants that are always listening for your commands, audio is perhaps one of the most important but often overlooked aspects of modern technology in our daily lives.

Audio Analytic has been using machine learning to enable a vast array of devices to make sense of the world of sound.

We spoke with Dr. Chris Mitchell, CEO and founder of Audio Analytic, about the challenges, and the fun, involved in teaching machines to listen.

Subscribe to the AI Podcast: Now Available on Amazon Music

You can now listen to the AI Podcast through Amazon Music.

You can also get the AI Podcast through iTunes, Google Podcasts, Google Play, Castbox, DoggCatcher, Overcast, PlayerFM, Pocket Casts, Podbay, PodBean, PodCruncher, PodKicker, Soundcloud, Spotify, Stitcher and TuneIn.

If your favorite isn’t listed here, drop us a note.

You Might Also Like:

Art(ificial) Intelligence: Pindar Van Arman Builds Robots That Paint

Pindar Van Arman, an American artist and roboticist, designs painting robots that explore the differences between human and computational creativity. Since his first system in 2005, he has built multiple artificially creative robots. The most famous, Cloud Painter, was awarded first place at Robotart 2018.

Real or Not Real? Attorney Steven Frank Uses Deep Learning to Authenticate Art

Steven Frank is a partner at the law firm Morgan Lewis, specializing in intellectual property and commercial technology law. He’s also half of the husband-wife team that used convolutional neural networks to authenticate artistic masterpieces, including Da Vinci’s Salvator Mundi, with AI’s help.

Researchers Chris Downum and Leszek Pawlowicz Use Deep Learning to Accelerate Archaeology

Researchers in the Department of Anthropology at Northern Arizona University are using GPU-based deep learning algorithms to categorize sherds — tiny fragments of ancient pottery.

Make the AI Podcast Better

Have a few minutes to spare? Fill out this listener survey. Your answers will help us make a better podcast.


Support for New NVIDIA RTX 3080 Ti, 3070 Ti Studio Laptops Now Available in February Studio Driver

Support for the new GeForce RTX 3080 Ti and 3070 Ti Laptop GPUs is available today in the February Studio driver.

Updated monthly, NVIDIA Studio drivers support NVIDIA tools and optimize the most popular creative apps, delivering added performance, reliability and speed to creative workflows.

Creatives will also benefit from the February Studio driver with enhancements to their existing creative apps as well as the latest app releases, including a major update to Maxon’s Redshift renderer.

The NVIDIA Studio platform is being rapidly adopted by aspiring artists, freelancers and creative professionals who seek to take their projects to the next level. The next generation of Studio laptops further powers their ambitions.

Creativity Unleashed — 3080 Ti and 3070 Ti Studio Laptops

Downloading the February Studio driver will help unlock massive time savings, especially for 3080 Ti and 3070 Ti GPU owners, in essential creative apps.

Blender renders are exceptionally fast on GeForce RTX 3080 Ti and 3070 Ti GPUs with RT Cores powering hardware-accelerated ray tracing.

GeForce RTX 3080 Ti GPU laptops achieve up to 10x faster rendering speeds than the MacBook Pro 16 M1 Max.

Autodesk aficionados with a laptop equipped with a GeForce RTX 3080 Ti GPU can render the heaviest of scenes much faster, in this example saving over an hour.

The GeForce RTX 3080 Ti laptop GPU renders up to 7x faster in Autodesk Maya than the MacBook Pro 16 M1 Max.

Video production specialists working in REDCINE-X PRO can edit in real time at higher frame rates, resulting in more accurate playback and far less time in the editing bay.

Edit RED RAW video faster with GeForce RTX 3080 Ti laptop GPU.

Creators can move at the speed of light with the 2022 lineup of Studio laptops and desktops.

MSI has announced the Creator Z16 and Creator Z17 Studio laptops, set for launch in March, with up to GeForce RTX 3080 Ti Laptop GPUs.

The MSI Z17 True Pixel display features QHD+ resolution, 100 percent DCI-P3 (typical) color gamut, factory-calibrated Delta-E < 2 out-of-the-box accuracy and True Color Technology.

ASUS’s award-winning ZenBook Pro Duo, coming later this year, sports a GeForce RTX 3060 GPU, plus a 15.6-inch 4K UHD OLED touchscreen and secondary 4K screen, unlocking numerous creative possibilities.

ASUS worked closely with third-party developers — including professional video-editing software developer Corel, with more to come — to optimize ScreenPad Plus for creative workflows and productivity.

The Razer Blade 17 and 15, available now, come fully loaded with a GeForce RTX 3080 Ti GPU and 32GB of memory — and they’re configurable with a beautiful 4K 144Hz, 100-percent DCI-P3 display. The Razer Blade 14 will launch on Feb. 17.

The Razer Blade 17 features a stunning 4K UHD display with a 144Hz refresh rate for creative professionals who want their visions to truly come to life.

GIGABYTE’s newly refreshed AERO 16 and 17 Studio laptops, equipped with GeForce RTX 3070 Ti and 3080 Ti GPUs, are also now available.

The AERO 17 sports a 3mm ultra-thin bezel and X-Rite Pantone-certified 4K HDR display with Adobe RGB 100 percent color gamut.

These creative machines power RTX-accelerated tools, including NVIDIA Omniverse, Canvas and Broadcast, making next-generation AI technology even more accessible while reducing and removing tedious work.

Fourth-generation Max-Q technologies — including CPU Optimizer and Rapid Core Scaling — maximize creative performance in remarkably thin laptop designs.

Stay tuned for more Studio product announcements in the coming months.

Shift to Redshift RT 

Well-known to 3D artists, Maxon’s Redshift is a powerful, biased, GPU-accelerated renderer — built to exceed the demands of contemporary high-end production rendering.

Redshift recently launched Redshift RT — a real-time rendering feature — in beta, letting 3D artists skip the wait for renders to finalize.

Redshift RT runs exclusively on NVIDIA RTX GPUs, bolstered by RT Cores, powering hardware-accelerated, interactive ray tracing.

Redshift RT, which is part of the current release, enables a more natural, intuitive way of working. It offers increased freedom to try different options for creating spectacular content, and is best used for scene editing and rendering previews. Redshift Production remains the highest possible quality and control renderer.

Redshift RT technology is integrated in the Maxon suite of creative apps including Cinema 4D, and is available for Autodesk 3ds Max and Maya, Blender, Foundry Katana and SideFX Houdini, as well as architectural products Vectorworks, Archicad and Allplan, dramatically speeding up all types of visual workflows.

With so many options, now’s the time to take a leap into 3D. Check out our Studio YouTube channel for standouts, tutorials, tips and tricks from industry-leading artists on how to get started.

Get inspired by the latest NVIDIA Studio Standouts video featuring some of our favorite digital art from across the globe.

Follow NVIDIA Studio on Facebook, Twitter and Instagram for the latest information on creative app updates, new Studio apps, creator contests and more. Get updates directly to your inbox by subscribing to the Studio newsletter.


Renovations to Stream About: Taiwan Studio Showcases Architectural Designs Using Extended Reality

Interior renovations have never looked this good.

TCImage, a studio based in Taipei, is showcasing compelling landscape and architecture designs by creating realistic 3D graphics and presenting them in virtual, augmented, and mixed reality — collectively known as extended reality, or XR.

For clients to get a better understanding of the designs, TCImage produces high-quality, 3D visualizations of the projects and puts them in a virtual environment. This lets users easily review and engage with the model in full scale, so they can get to the final design faster.

To keep up with client expectations and deliver quality content, the team at TCImage needs advanced tools and technologies that help them make design concepts feel like a reality.

With NVIDIA RTX technology, CloudXR, Deep Learning Super Sampling (DLSS) and NVIDIA Omniverse, TCImage is at the forefront of delivering stunning renders and XR experiences that allow clients to be virtually transported to the renovation of their dreams.

Bringing Design Details to Life With RTX

To make realistic details stand out in a design, TCImage CEO Leo Chou and his team must create all 3D visuals in high resolution. During the design process, the team uses popular applications like Autodesk 3ds Max, Autodesk Revit, Trimble SketchUp and Unreal Engine 4. Chou initially tried using a consumer-level PC to render 3D graphics, but it would take up to three hours just to render a single frame of a 4K image.

Now, with an enterprise-grade PC powered by an NVIDIA RTX 6000 graphics card, he can render the same 4K frame within 30 minutes. NVIDIA RTX provides Chou with enhanced efficiency and performance, which allow him to achieve real-time rendering of final images.

“I was thrilled by the performance of RTX technology — it’s more powerful, allowing me to establish a competitive edge in the industry by making real-time ray tracing come true,” said Chou.

Looking Around Unbound With CloudXR

To show off these dazzling 3D visuals to customers, TCImage uses CloudXR.

With this extended reality streaming technology, Chou and his team can share projects inside an immersive and seamless experience, allowing them to efficiently communicate project designs to customers. The team can also present their designs from any location, as they can stream the untethered XR experiences from the cloud.

Built on RTX technology, CloudXR enables TCImage to stream high-resolution, real-time graphics and provide a more interactive experience for clients. NVIDIA DLSS also improves the XR experience by rendering more frames per second, which is especially helpful during the design review process.

With NVIDIA DLSS, TCImage can tap into the power of AI to boost frame rates and create sharp images for the XR environment. This helps the designers and clients see a preview of the 3D model with minimal latency as the user moves and rotates inside the environment.

“By using NVIDIA CloudXR, I can freely and easily present my projects, artwork and portfolio to customers anytime, anywhere while maintaining the best quality of content,” said Chou. “I can even edit the content in real time, based on the customers’ requirements.”

According to Chou, TCImage clients who have experienced the improved workflow were impressed by how much time and cost savings the new technology has provided. It’s also created more business opportunities for the firm.

Designing Buildings in Virtual Worlds

TCImage has started to explore design workflows in the virtual world with NVIDIA Omniverse, a platform for 3D simulation and design collaboration. In addition to using real-time ray tracing and DLSS in Omniverse, Chou played around with optimizing his virtual scenes with the Omniverse Create and Omniverse View applications.

“Omniverse is flexible enough to integrate with major graphics software, as well as allow instantaneous content updates and changes without any extra effort by our team,” said Chou.

In Omniverse Create, Chou can enhance creative workflows by connecting to leading applications to produce architectural designs. He also uses existing materials in Omniverse, such as grass brush samples, to create exterior landscapes and vegetation.

And with Omniverse View, Chou uses lighting tools such as Sun Study, which allows him to review designs with accurate sunlight.

Learn more about TCImage and check out Chou’s recent tutorial in Omniverse:


Train Spotting: Startup Gets on Track With AI and NVIDIA Jetson to Ensure Safety, Cost Savings for Railways

Preventable train accidents like the 1985 disaster outside Tel Aviv in which a train collided with a school bus, killing 19 students and several adults, motivated Shahar Hania and Elen Katz to help save lives with technology.

They founded Rail Vision, an Israeli startup that creates obstacle-detection and classification systems for the global railway industry.

The systems use advanced electro-optic sensors to alert train drivers and railway control centers when a train approaches potential obstacles — like humans, vehicles, animals or other objects — in real time, and in all weather and lighting conditions.

Rail Vision is a member of NVIDIA Inception — a program designed to nurture cutting-edge startups — and an NVIDIA Metropolis partner. The company uses the NVIDIA Jetson AGX Xavier edge AI platform, which provides GPU-accelerated computing in a compact and energy-efficient module, and the NVIDIA TensorRT software development kit for high-performance deep learning inference.

Pulling the Brakes in Real Time

A train’s braking distance — or the distance a train travels between when its brakes are pulled and when it comes to a complete stop — is usually so long that by the time a driver spots a railway obstacle, it could be too late to do anything about it.

For example, the braking distance for a train traveling 100 miles per hour is 800 meters, or about a half-mile, according to Hania. Rail Vision systems can detect objects on and along tracks from up to two kilometers, or 1.25 miles, away.

By sending alerts, both visual and acoustic, of potential obstacles in real time, Rail Vision systems give drivers over 20 seconds to respond and make decisions on braking.
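The arithmetic behind that window follows directly from the figures above; a quick back-of-the-envelope check, using only the numbers quoted in this article:

```python
MPH_TO_MS = 0.44704

speed = 100 * MPH_TO_MS      # ~44.7 m/s, the 100 mph example Hania cites
detection_range = 2_000      # meters: Rail Vision's stated detection distance
braking_distance = 800       # meters: braking distance at 100 mph

# Distance left between spotting an obstacle and the point where braking
# must begin, and the time it takes the train to cover it.
decision_margin = detection_range - braking_distance   # 1,200 m
decision_window = decision_margin / speed               # ~26.8 s
print(f"Decision window: {decision_window:.0f} seconds")  # over 20 seconds
```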

The systems can also be integrated with a train’s infrastructure to automatically apply brakes when an obstacle is detected, even without a driver’s cue.

“Tons of deep learning inference possibilities are made possible with NVIDIA GPU technology,” Hania said. “The main advantage of using the NVIDIA Jetson platform is that there are lots of goodies inside — compressors, modules for optical flow — that all speed up the embedding process and make our systems more accurate.”

Boosting Maintenance, in Addition to Safety

In addition to preventing accidents, Rail Vision systems help save operational time and costs spent on railway maintenance — which can be as high as $50 billion annually, according to Hania.

If a railroad accident occurs, four to eight hours are typically spent handling the situation — which prevents other trains from using the track, said Hania.

Rail Vision systems use AI to monitor the tracks and prevent such workflow slow-downs, or quickly alert operators when they do occur — giving them time to find alternate routes or plans of action.

The systems are scalable and deployable for different use cases — with some focused solely on these maintenance aspects of railway operations.

Watch a Rail Vision system at work.


How Smart Hospital Technology Can Help Cut Down on Medical Errors

Despite the feats of modern medicine, as many as 250,000 Americans die from medical errors each year — more than 6 times the number killed in car accidents.

Smart hospital AI can help avoid some of these fatalities in healthcare, just as computer vision-based driver assistance systems can improve road safety, according to AI leader Fei-Fei Li.

Whether through surgical instrument omission, a wrong drug prescription or a patient safety issue when clinicians aren’t present, “there’s just all kinds of errors that could be introduced, unintended, despite protocols that have been put together to avoid them,” said Li, computer science professor and co-director of the Stanford Institute for Human-Centered Artificial Intelligence, in a talk at the recent NVIDIA GTC. “Humans are still humans.”

By endowing healthcare spaces with smart sensors and machine learning algorithms, Li said, clinicians can help cut down medical errors and provide better patient care.

“We have to make sense of what we sense” with sensor data, said Li. “This brings in machine learning and deep learning algorithms that can turn sensed data into medical insights that are really important to keep our patients safe.”

To hear from other experts in deep learning and medicine, register free for the next GTC, running online March 21-24. GTC features talks from dozens of healthcare researchers and innovators harnessing AI for smart hospitals, drug discovery, genomics and more.

Sensor Solutions Bring Ambient Intelligence to Clinicians

Li’s interest in AI for healthcare delivery was sparked a decade ago when she was caring for a sick parent.

“The more I spent my time in ICUs and hospital rooms and even at home caring for my family, the more I saw the analogy between self-driving technology and healthcare delivery,” she said.

Her vision of sensor-driven “ambient intelligence,” outlined in a Nature paper, covers both the hospital and the home. It offers insights in operating rooms as well as the daily living spaces of individuals with chronic disease.

For example, ICU patients need a certain amount of movement to help their recovery. To ensure that patients are getting the right amount of mobility, researchers are developing smart sensor systems to automatically tag patient movements and understand their mobility levels while in critical care.

Another project used depth sensors and convolutional neural networks to assess whether clinicians were properly using hand sanitizer when entering and exiting patient rooms.

Outside of the hospital, as the global population continues to age, wearable sensors can help ensure seniors are aging healthily by monitoring mobility, sleep and medicine compliance.

The next challenge, Li said, is advancing computer vision to classify more complex human movement.

“We’re not content with these coarse activities like walking and sleeping,” she said. “What’s more important clinically are fine-grained activities.”

Protecting Patient, Caregiver Privacy 

When designing smart hospital technology, Li said, it’s important that developers prioritize privacy and security of patients, clinicians and caretakers.

“From a computer vision point of view, blurring and masking has become more and more important when it comes to human signals,” she said. “These are really important ways to mitigate private information and personal identity from being inadvertently leaked.”

In the field of data privacy, Li said, federated learning is another promising solution to protect confidential information.

Throughout the process of developing AI for healthcare, she said, developers must take a multi-stakeholder approach, involving patients, clinicians, bioethicists and government agencies in a collaborative environment.

“At the end of the day, healthcare is about humans caring for humans,” said Li. “This technology should not replace our caretakers, replace our families or replace our nurses and doctors. It’s here to augment and enhance humanity and give more dignity back to our patients.”

Watch the full talk on NVIDIA On-Demand, and sign up for GTC to learn about the latest in AI and healthcare.
