Here are Google’s latest AI updates from February 2025
Survey Shows How AI Is Reshaping Healthcare and Life Sciences, From Lab to Bedside
From research and discovery to patient care and administrative tasks, AI is showing transformative potential across nearly every part of healthcare and life sciences.
For example, generative AI can be used to help automate repetitive, time-consuming tasks such as summarizing and creating documents and extracting and analyzing data from reports. It can also aid in drug discovery by finding new protein structures and offer assistance to patients through chatbots and AI assistants, easing the burden on clinical and administrative staff.
This wide range of applications was among the key insights from NVIDIA’s inaugural “State of AI in Healthcare and Life Sciences” survey.
The survey — which polled more than 600 professionals across the globe from fields spanning digital healthcare, medical tools and technologies, pharmaceutical and biotech, and payers and practitioners — revealed robust AI adoption in the industry, with about two-thirds of respondents saying their companies are actively using the technology.
AI is also having a tangible impact on the industry’s bottom line, with 81% of respondents saying AI has helped increase revenue and 45% realizing these benefits less than a year after implementation.
Here are some of the key insights and use cases from the survey:
- 83% of overall respondents agreed with the statement that “AI will revolutionize healthcare and life sciences in the next three to five years”
- 73% said AI is helping to reduce operational costs
- 58% cited data analytics as the top AI workload, with generative AI second at 54%, and large language models third at 53%
- 59% of respondents from pharmaceutical and biotech companies cited drug discovery and development among their top AI use cases
Business Impact of AI in Healthcare and Life Sciences
The healthcare and life sciences industry is seeing how AI can help increase annual revenue and reduce operational costs. Forty-one percent of respondents indicated that the acceleration of research and development has had a positive impact. Thirty-six percent said AI has helped create a competitive advantage. And 35% said it has helped reduce project cycles, deliver better clinical or research insights, and enhance precision and accuracy, respectively.
Given the positive results across a broad range of AI use cases, it comes as no surprise that 78% of respondents said they intend to increase their budget for AI infrastructure this year. In addition, more than a third of respondents noted their investments in AI will increase by more than 10%.
The survey also revealed the top three spending priorities: identifying additional AI use cases (47%), optimizing workflow and production cycles (34%) and hiring more AI experts (26%).
AI Applied Across Healthcare
Each industry segment in the survey had differing priorities in AI implementation. For instance, in the payers and providers industry segment, which includes health insurance companies, hospitals, clinical services and home healthcare, 48% of respondents said their top AI use case was administrative tasks and workflow optimization.
For the medical tools and technologies field, 71% of respondents said their top AI use case was medical imaging and diagnostics, such as using AI to analyze MRI or CAT scans. And for digital healthcare, 54% of respondents said their top use case was clinical decision support, while 54% from the pharmaceutical and biotech fields prioritized drug discovery and development.
AI use cases expected to have the most significant impact in healthcare and life sciences in the next five years include advanced medical imaging and diagnostics (51%), virtual healthcare assistants (34%) and precision medicine — treatment tailored to individual patient characteristics — (29%).
A Growing Dose of Generative AI
Overall, 54% of survey respondents said they’re using generative AI. Of this group, 63% are actively using the technology, and another 36% are assessing it through pilots or trials.
Digital healthcare was the leader in generative AI use, according to 71% of respondents from the field. Second was pharmaceutical and biotech at 69%, then medical technologies at 60%, and payers and providers at 44%.
Across all generative AI use cases, coding and document summarization — specific to clinical notes — ranked first, at 55%. Medical chatbots and AI agents were second, at 53%, and literature analysis was third, at 45%. One notable exception was the pharmaceutical and biotech segment, in which respondents said drug discovery was the top generative AI use case, at 62%.
Download the “State of AI in Healthcare and Life Sciences: 2025 Trends” report for in-depth results and insights.
Explore NVIDIA’s AI technologies and platforms for healthcare, and sign up for NVIDIA’s healthcare newsletter to stay up to date.
Reduce conversational AI response time through inference at the edge with AWS Local Zones
Recent advances in generative AI have led to the proliferation of a new generation of conversational AI assistants powered by foundation models (FMs). These latency-sensitive applications enable real-time text and voice interactions, responding naturally to human conversations. Their applications span a variety of sectors, including customer service, healthcare, education, and personal and business productivity, among others.
Conversational AI assistants are typically deployed directly on users’ devices, such as smartphones, tablets, or desktop computers, enabling quick, local processing of voice or text input. However, the FM that powers the assistant’s natural language understanding and response generation is usually cloud-hosted, running on powerful GPUs. When a user interacts with the AI assistant, their device first processes the input locally, including speech-to-text (STT) conversion for voice agents, and compiles a prompt. This prompt is then securely transmitted to the cloud-based FM over the network. The FM analyzes the prompt and begins generating an appropriate response, streaming it back to the user’s device. The device further processes this response, including text-to-speech (TTS) conversion for voice agents, before presenting it to the user. This efficient workflow strikes a balance between the powerful capabilities of cloud-based FMs and the convenience and responsiveness of local device interaction, as illustrated in the following figure.
A critical challenge in developing such applications is reducing response latency to enable real-time, natural interactions. Response latency refers to the time between the user finishing their speech and beginning to hear the AI assistant’s response. This delay typically comprises two primary components:
- On-device processing latency – This encompasses the time required for local processing, including TTS and STT operations.
- Time to first token (TTFT) – This measures the interval between the device sending a prompt to the cloud and receiving the first token of the response. TTFT consists of two components. First is the network latency, which is the round-trip time for data transmission between the device and the cloud. Second is the first token generation time, which is the period between the FM receiving a complete prompt and generating the first output token. TTFT is crucial for user experience in conversational AI interfaces that use response streaming with FMs. With response streaming, users start receiving the response while it’s still being generated, significantly improving perceived latency.
The ideal response latency for humanlike conversation flow is generally considered to be in the 200–500 milliseconds (ms) range, closely mimicking natural pauses in human conversation. Given the additional on-device processing latency, achieving this target requires a TTFT well below 200 ms.
Although many customers focus on optimizing the technology stack behind the FM inference endpoint through techniques such as model optimization, hardware acceleration, and semantic caching to reduce the TTFT, they often overlook the significant impact of network latency. This latency can vary considerably due to geographic distance between users and cloud services, as well as the diverse quality of internet connectivity.
Hybrid architecture with AWS Local Zones
To minimize the impact of network latency on TTFT for users regardless of their locations, a hybrid architecture can be implemented by extending AWS services from commercial Regions to edge locations closer to end users. This approach involves deploying additional inference endpoints on AWS edge services and using Amazon Route 53 to implement dynamic routing policies, such as geolocation routing, geoproximity routing, or latency-based routing. These strategies dynamically distribute traffic between edge locations and commercial Regions, providing fast response times based on real-time network conditions and user locations.
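As a rough illustration of the routing piece, the following sketch shows what a geolocation routing policy could look like with the AWS CLI. The domain name, IP addresses, and hosted zone ID are hypothetical placeholders, and latency-based routing could be configured in a similar way.

```bash
# Hypothetical example: route California users to a Local Zone endpoint and
# everyone else to the parent-Region endpoint. Domain, IPs, and zone ID are placeholders.
cat > routing.json <<'EOF'
{
  "Comment": "Geolocation routing between a Local Zone endpoint and the parent Region",
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "inference.example.com",
        "Type": "A",
        "SetIdentifier": "la-local-zone",
        "GeoLocation": { "CountryCode": "US", "SubdivisionCode": "CA" },
        "TTL": 60,
        "ResourceRecords": [ { "Value": "203.0.113.10" } ]
      }
    },
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "inference.example.com",
        "Type": "A",
        "SetIdentifier": "default-region",
        "GeoLocation": { "CountryCode": "*" },
        "TTL": 60,
        "ResourceRecords": [ { "Value": "198.51.100.20" } ]
      }
    }
  ]
}
EOF

aws route53 change-resource-record-sets \
  --hosted-zone-id Z0000000EXAMPLE \
  --change-batch file://routing.json
```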
AWS Local Zones are a type of edge infrastructure deployment that places select AWS services close to large population and industry centers. They enable applications requiring very low latency or local data processing using familiar APIs and tool sets. Each Local Zone is a logical extension of a corresponding parent AWS Region, which means customers can extend their Amazon Virtual Private Cloud (Amazon VPC) connections by creating a new subnet with a Local Zone assignment.
This guide demonstrates how to deploy an open source FM from Hugging Face on Amazon Elastic Compute Cloud (Amazon EC2) instances across three locations: a commercial AWS Region and two AWS Local Zones. Through comparative benchmarking tests, we illustrate how deploying FMs in Local Zones closer to end users can significantly reduce latency—a critical factor for real-time applications such as conversational AI assistants.
Prerequisites
To run this demo, complete the following prerequisites:
- Create an AWS account, if you don’t already have one.
- Enable the Local Zones in Los Angeles and Honolulu in the parent Region US West (Oregon). For a full list of available Local Zones, refer to the Local Zones locations page. Next, create a subnet inside each Local Zone. Detailed instructions for enabling Local Zones and creating subnets within them can be found at Getting started with AWS Local Zones.
- Submit an Amazon EC2 service quota increase for access to Amazon EC2 G4dn instances. Select Running On-Demand G and VT instances as the quota type and request at least 24 vCPUs for the quota size.
- Create a Hugging Face read token from huggingface.co/settings/tokens.
Solution walkthrough
This section walks you through the steps to launch an Amazon EC2 G4dn instance and deploy an FM for inference in the Los Angeles Local Zone. The instructions are also applicable for deployments in the parent Region, US West (Oregon), and the Honolulu Local Zone.
We use Meta’s open source Llama 3.2-3B as the FM for this demonstration. This is a lightweight FM from the Llama 3.2 family, classified as a small language model (SLM) due to its small number of parameters. Compared to large language models (LLMs), SLMs are more efficient and cost-effective to train and deploy, excel when fine-tuned for specific tasks, offer faster inference times, and have lower resource requirements. These characteristics make SLMs particularly well-suited for deployment on edge services such as AWS Local Zones.
To launch an EC2 instance in the Los Angeles Local Zone subnet, follow these steps:
- On the Amazon EC2 console dashboard, in the Launch instance box, choose Launch instance.
- Under Name and tags, enter a descriptive name for the instance (for example, la-local-zone-instance).
- Under Application and OS Images (Amazon Machine Image), select an AWS Deep Learning AMI that comes preconfigured with NVIDIA OSS driver and PyTorch. For our deployment, we used Deep Learning OSS Nvidia Driver AMI GPU PyTorch 2.3.1 (Amazon Linux 2).
- Under Instance type, from the Instance type list, select a hardware configuration for your instance that’s supported in a Local Zone. We selected `g4dn.2xlarge` for this solution. This instance is equipped with one NVIDIA T4 Tensor Core GPU and 16 GB of GPU memory, which makes it ideal for high-performance, cost-effective inference of SLMs at the edge. Available instance types for each Local Zone can be found at AWS Local Zones features. Review the hardware requirements for your FM to select the appropriate instance.
- Under Key pair (login), choose an existing key pair or create a new one.
- Next to Network settings, choose Edit, and then:
- Select your VPC.
- Select your Local Zone subnet.
- Create a security group or select an existing one. Configure the security group’s inbound rules to allow traffic only from your client’s IP address on port 8080.
- You can keep the default selections for the other configuration settings for your instance. To determine the storage types that are supported, refer to the Compute and storage section in AWS Local Zones features.
- Review the summary of your instance configuration in the Summary panel and, when you’re ready, choose Launch instance.
- A confirmation page lets you know that your instance is launching. Choose View all instances to close the confirmation page and return to the console.
Next, complete the following steps to deploy Llama 3.2-3B using the Hugging Face Text Generation Inference (TGI) as the model server:
- Connect to the instance by using Secure Shell (SSH).
- Start the Docker service using the following command. Docker comes preinstalled with the AMI we selected.
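The exact command isn’t preserved in this excerpt; a minimal sketch for the Amazon Linux 2-based Deep Learning AMI used above:

```bash
# Start the Docker daemon (preinstalled on the Deep Learning AMI) and confirm it's running
sudo systemctl start docker
sudo systemctl status docker --no-pager
```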
- Run the following command to download and run the Docker image for the TGI server as well as the Llama 3.2-3B model. In our deployment, we used Docker image version 2.4.0, but results might vary based on your selected version. The full list of models supported by TGI can be found at Hugging Face Supported Models. For more details about the deployment and optimization of TGI, refer to the text-generation-inference GitHub page.
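The original command isn’t reproduced in this excerpt; a minimal sketch follows, assuming the meta-llama/Llama-3.2-3B model ID on Hugging Face, TGI image version 2.4.0, and your Hugging Face read token passed via HF_TOKEN:

```bash
# Assumed model ID and image tag; substitute your own Hugging Face read token
model=meta-llama/Llama-3.2-3B
volume=$HOME/tgi-data   # cache model weights between container restarts

docker run --gpus all --shm-size 1g -p 8080:80 \
  -v $volume:/data \
  -e HF_TOKEN=<your-hugging-face-read-token> \
  ghcr.io/huggingface/text-generation-inference:2.4.0 \
  --model-id $model
```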
- After the TGI container is running, you can test your endpoint by running the following command from your local environment:
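The test command isn’t included in this excerpt; a minimal sketch against TGI’s generate endpoint, with a placeholder public IP, might look like:

```bash
# Replace <instance-public-ip> with the public IP or DNS name of your EC2 instance
curl http://<instance-public-ip>:8080/generate \
  -X POST \
  -H 'Content-Type: application/json' \
  -d '{"inputs": "What is the capital of Hawaii?", "parameters": {"max_new_tokens": 64}}'
```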
Pixtral-12B-2409 is now available on Amazon Bedrock Marketplace
Today, we are excited to announce that Pixtral 12B (pixtral-12b-2409), a state-of-the-art 12 billion parameter vision language model (VLM) from Mistral AI that excels in both text-only and multimodal tasks, is available for customers through Amazon Bedrock Marketplace. Amazon Bedrock Marketplace is a new capability in Amazon Bedrock that enables developers to discover, test, and use over 100 popular, emerging, and specialized foundation models (FMs) alongside the current selection of industry-leading models in Amazon Bedrock. You can also use this model with Amazon SageMaker JumpStart, a machine learning (ML) hub that provides access to algorithms and models that can be deployed with one click for running inference.
In this post, we walk through how to discover, deploy, and use the Pixtral 12B model for a variety of real-world vision use cases.
Overview of Pixtral 12B
Pixtral 12B, Mistral’s inaugural VLM, delivers robust performance across a range of benchmarks, surpassing other open models and rivaling larger counterparts, according to Mistral’s evaluation. Designed for both image and document comprehension, Pixtral demonstrates advanced capabilities in vision-related tasks, including chart and figure interpretation, document question answering, multimodal reasoning, and instruction following—several of which are illustrated with examples later in this post. The model processes images at their native resolution and aspect ratio, providing high-fidelity input handling. Unlike many open source alternatives, Pixtral 12B achieves strong results in text-based benchmarks—such as instruction following, coding, and mathematical reasoning—without sacrificing its proficiency in multimodal tasks.
Mistral developed a novel architecture for Pixtral 12B, optimized for both computational efficiency and performance. The model consists of two main components: a 400-million-parameter vision encoder, responsible for tokenizing images, and a 12-billion-parameter multimodal transformer decoder, which predicts the next text token based on a sequence of text and images. The vision encoder was specifically trained to natively handle variable image sizes, enabling Pixtral to accurately interpret high-resolution diagrams, charts, and documents while maintaining fast inference speeds for smaller images such as icons, clipart, and equations. This architecture supports processing an arbitrary number of images of varying sizes within a large context window of 128k tokens.
License agreements are a critical decision factor when using open-weights models. Similar to other Mistral models, such as Mistral 7B, Mixtral 8x7B, Mixtral 8x22B, and Mistral Nemo 12B, Pixtral 12B is released under the commercially permissive Apache 2.0, providing enterprise and startup customers with a high-performing VLM option to build complex multimodal applications.
Performance metrics and benchmarks
Pixtral 12B is trained to understand both natural images and documents, achieving 52.5% on the Massive Multitask Language Understanding (MMLU) reasoning benchmark, surpassing a number of larger models according to Mistral. The MMLU benchmark is a test that evaluates a language model’s ability to understand and use language across a variety of subjects. The MMLU consists of over 10,000 multiple-choice questions spanning a variety of academic subjects, including mathematics, philosophy, law, and medicine. The model shows strong abilities in tasks such as chart and figure understanding, document question answering, multimodal reasoning, and instruction following. Pixtral is able to ingest images at their natural resolution and aspect ratio, giving the user flexibility on the number of tokens used to process an image. Pixtral is also able to process multiple images in its long context window of 128,000 tokens. Unlike previous open source models, Pixtral doesn’t compromise on text benchmark performance to excel in multimodal tasks, according to Mistral.
You can review Mistral’s published benchmarks for additional details on the model’s performance.
Prerequisites
To try out Pixtral 12B in Amazon Bedrock Marketplace, you will need the following prerequisites:
- An AWS account that will contain all your AWS resources.
- An AWS Identity and Access Management (IAM) role to access Amazon Bedrock Marketplace and Amazon SageMaker endpoints. To learn more about how IAM works with Amazon Bedrock Marketplace, refer to Set up Amazon Bedrock Marketplace.
- Access to accelerated instances (GPUs) for hosting the model, such as ml.g6.12xlarge. Refer to Requesting a quota increase for access to GPU instances.
Deploy Pixtral 12B in Amazon Bedrock Marketplace
On the Amazon Bedrock console, you can search for models that help you with a specific use case or language. The results of the search include both serverless models and models available in Amazon Bedrock Marketplace. You can filter results by provider, modality (such as text, image, or audio), or task (such as classification or text summarization).
To access Pixtral 12B in Amazon Bedrock Marketplace, follow these steps:
- On the Amazon Bedrock console, choose Model catalog under Foundation models in the navigation pane.
- Filter for Hugging Face as a provider and choose the Pixtral 12B model, or search for Pixtral in the Filter for a model input box.
The model detail page provides essential information about the model’s capabilities, pricing structure, and implementation guidelines. You can find detailed usage instructions, including sample API calls and code snippets for integration.
The page also includes deployment options and licensing information to help you get started with Pixtral 12B in your applications.
- To begin using Pixtral 12B, choose Deploy.
You will be prompted to configure the deployment details for Pixtral 12B. The model ID will be prepopulated.
- Read carefully and accept the End User License Agreement (EULA).
- The Endpoint Name field is automatically populated; you can rename the endpoint if needed.
- For Number of instances, enter a number of instances (between 1 and 100).
- For Instance type, choose your instance type. For optimal performance with Pixtral 12B, a GPU-based instance type like ml.g6.12xlarge is recommended.
Optionally, you can configure advanced security and infrastructure settings, including virtual private cloud (VPC) networking, service role permissions, and encryption settings. For most use cases, the default settings will work well. However, for production deployments, you might want to review these settings to align with your organization’s security and compliance requirements.
- Choose Deploy to begin using the model.
When the deployment is complete, Endpoint status should change to In Service. After the endpoint is in service, you can test Pixtral 12B capabilities directly in the Amazon Bedrock playground.
- Choose Open in playground to access an interactive interface where you can experiment with different prompts and adjust model parameters like temperature and maximum length.
This is an excellent way to explore the model’s reasoning and text generation abilities before integrating it into your applications. The playground provides immediate feedback, helping you understand how the model responds to various inputs and letting you fine-tune your prompts for optimal results.
You can quickly test the model in the playground through the UI. However, to invoke the deployed model programmatically with Amazon Bedrock APIs, you need to use the endpoint ARN as the `model-id` in the Amazon Bedrock SDK.
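As a hedged illustration of that point (the Region and endpoint ARN below are placeholders), a programmatic call with the Converse API might look like this:

```python
import boto3

# Placeholder values: use your deployment's Region and endpoint ARN
region = "us-west-2"
endpoint_arn = "arn:aws:sagemaker:us-west-2:123456789012:endpoint/pixtral-12b-example"

bedrock_runtime = boto3.client("bedrock-runtime", region_name=region)

response = bedrock_runtime.converse(
    modelId=endpoint_arn,  # the endpoint ARN serves as the model ID
    messages=[{"role": "user", "content": [{"text": "Summarize what Pixtral 12B can do."}]}],
    inferenceConfig={"maxTokens": 256, "temperature": 0.2},
)
print(response["output"]["message"]["content"][0]["text"])
```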
Pixtral 12B use cases
In this section, we provide example use cases of Pixtral 12B using sample prompts. We have defined helper functions to invoke the Pixtral 12B model using Amazon Bedrock Converse APIs:
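The helper functions themselves aren’t reproduced in this excerpt; a minimal sketch of what they might look like is shown below, with a placeholder endpoint ARN. It wraps the Converse API so each use case only needs to pass a prompt and optional image files.

```python
import boto3

bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-west-2")

# Placeholder: replace with the endpoint ARN of your Pixtral 12B deployment
MODEL_ID = "arn:aws:sagemaker:us-west-2:123456789012:endpoint/pixtral-12b-example"

def invoke_pixtral(prompt, image_paths=None, max_tokens=1024, temperature=0.0):
    """Send a text prompt, optionally with images, to Pixtral 12B via the Converse API."""
    content = []
    for path in image_paths or []:
        ext = path.rsplit(".", 1)[-1].lower()
        image_format = "jpeg" if ext == "jpg" else ext  # Converse expects png, jpeg, gif, or webp
        with open(path, "rb") as f:
            content.append({"image": {"format": image_format, "source": {"bytes": f.read()}}})
    content.append({"text": prompt})

    response = bedrock_runtime.converse(
        modelId=MODEL_ID,
        messages=[{"role": "user", "content": content}],
        inferenceConfig={"maxTokens": max_tokens, "temperature": temperature},
    )
    return response["output"]["message"]["content"][0]["text"]
```

For example, the visual reasoning use case that follows could be issued as `invoke_pixtral("Solve the puzzle shown in this image.", ["puzzle.png"])`, where the prompt and file name are hypothetical.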
Visual logical reasoning
One of the interesting use cases for vision models is solving logical reasoning problems or visual puzzles. Pixtral 12B is highly capable of solving logical reasoning questions. Let’s explore an example.
We use the following input image.
Our prompt and input payload are as follows:
We get the following response:
Structured product information
Extracting product information is crucial for the retail industry, especially on sites that host third-party sellers, where product images are the most accessible resource. Accurately capturing relevant details from these images is vital for a product’s success in ecommerce. For instance, using advanced visual models like Pixtral 12B, retailers can efficiently extract key attributes from clothing product images, such as color, style, and patterns. This capability not only streamlines inventory management but also enhances customer experiences by providing essential information that aids in informed purchasing decisions.
We use the following input image.
Our prompt and input payload are as follows:
We get the following response:
Vehicle damage assessment
In the insurance industry, image analysis plays a crucial role in claims processing. For vehicle damage assessment, vision models like Pixtral 12B can be used to compare images taken at policy issuance with those submitted during a claim. This approach can streamline the evaluation process, potentially reducing loss adjustment expenses and expediting claim resolution. By automating the identification and characterization of automobile damage, insurers can enhance efficiency, improve accuracy, and ultimately provide a better experience for policyholders.
We use the following input images.
Our prompt and input payload are as follows:
We get the following response:
Handwriting recognition
Another feature of vision language models is their ability to recognize handwriting and extract handwritten text. Pixtral 12B performs well at extracting content from complex and poorly legible handwritten notes.
We use the following input image.
Our prompt and input payload are as follows:
We get the following response:
Reasoning of complex figures
VLMs excel at interpreting and reasoning about complex figures, charts, and diagrams. In this particular use case, we use Pixtral 12B to analyze an intricate image containing GDP data. Pixtral 12B’s advanced capabilities in document understanding and complex figure analysis make it well-suited for extracting insights from visual representations of economic data. By processing both the visual elements and accompanying text, Pixtral 12B can provide detailed interpretations and reasoned analysis of the GDP figures presented in the image.
We use the following input image.
Our prompt and input payload are as follows:
We get the following response:
Clean up
To avoid unwanted charges, clean up your resources. If you deployed the model using Amazon Bedrock Marketplace, complete the following steps:
Delete the Amazon Bedrock Marketplace deployment
- On the Amazon Bedrock console, under Foundation models in the navigation pane, choose Marketplace deployments.
- In the Managed deployments section, locate the endpoint you want to delete.
- Verify the endpoint details to make sure you’re deleting the correct deployment:
- Endpoint name
- Model name
- Endpoint status
- Select the endpoint, and choose Delete.
- Choose Delete to delete the endpoint.
- In the deletion confirmation dialog, review the warning message, enter `confirm`, and choose Delete to permanently remove the endpoint.
Conclusion
In this post, we showed you how to get started with the Pixtral 12B model in Amazon Bedrock and deploy the model for inference. The Pixtral 12B vision model enables you to address multiple use cases, including document understanding, logical reasoning, handwriting recognition, image comparison, entity extraction, extraction of structured data from scanned images, and caption generation. These capabilities can drive productivity in a number of enterprise use cases, including ecommerce (retail), marketing, financial services (FSI) and more.
For more Mistral resources on AWS, check out the GitHub repo. The complete code for the samples featured in this post is available on GitHub. Pixtral 12B is also available in Amazon SageMaker JumpStart; refer to Pixtral 12B is now available on Amazon SageMaker JumpStart for details.
About the Authors
Deepesh Dhapola is a Senior Solutions Architect at AWS India, where he assists financial services and fintech clients in scaling and optimizing their applications on the AWS platform. He specializes in core machine learning and generative AI. Outside of work, Deepesh enjoys spending time with his family and experimenting with various cuisines.
Preston Tuggle is a Sr. Specialist Solutions Architect working on generative AI.
Shane Rai is a Principal GenAI Specialist with the AWS World Wide Specialist Organization (WWSO). He works with customers across industries to solve their most pressing and innovative business needs using AWS’s breadth of cloud-based AI/ML services including model offerings from top tier foundation model providers.
John Liu has 14 years of experience as a product executive and 10 years of experience as a portfolio manager. At AWS, John is a Principal Product Manager for Amazon Bedrock. Previously, he was the Head of Product for AWS Web3 / Blockchain. Prior to AWS, John held various product leadership roles at public blockchain protocols and fintech companies, and also spent 9 years as a portfolio manager at various hedge funds.
3 new ways we’re working to protect and restore nature using AI
Learn more about Google for Startups Accelerator: AI for Nature and Climate, as well as other new efforts to use technology to preserve our environment.
Animals Crossing: AI Helps Protect Wildlife Across the Globe
From Seattle, Washington, to Cape Town, South Africa — and everywhere around and between — AI is helping conserve the wild plants and animals that make up the intricate web of life on Earth.
It’s critical work that sustains ecosystems and supports biodiversity at a time when the United Nations estimates over 1 million species are threatened with extinction.
World Wildlife Day, a UN initiative, is celebrated every March 3 to recognize the unique contributions wild animals and plants make to people and the planet — and vice versa.
“Our own survival depends on wildlife,” the above video on this year’s celebration says, “just as much as their survival depends on us.”
Learn more about some of the leading nonprofits and startups using NVIDIA AI and accelerated computing to protect wildlife and natural habitats, today and every day:
Ai2’s EarthRanger Offers World’s Largest Elephant Database
Seattle-based nonprofit AI research institute Ai2 offers EarthRanger, a software platform that helps protected-area managers, ecologists and wildlife biologists make more informed operational decisions for wildlife conservation in real time, whether preventing poaching, spotting ill or injured animals, or studying animal behavior.
Among Ai2’s efforts with EarthRanger is the planned development of a machine learning model — trained using NVIDIA Hopper GPUs in the cloud — that predicts the movement of elephants in areas close to human-wildlife boundaries where elephants could raid crops and potentially prompt humans to retaliate.
With access to the world’s largest repository of elephant movement data, made possible by EarthRanger users who’ve shared their data, the AI model could help predict elephant behaviors, then alert area managers to safely guide the elephants away from risky situations that could arise for them or for people in the vicinity. Area managers or rangers typically use helicopters, other vehicles and chili bombs to safely reroute elephants.

Beyond elephants, EarthRanger collects, integrates and displays data on a slew of wildlife — aggregated from over 100 data sources, including camera traps, acoustic sensors, satellites, radios and more. Then, the platform combines the data with field reports to provide a unified view of collared wildlife, rangers, enforcement assets and infrastructure within a protected area.

“Name a country, species or an environmental cause and we’re probably supporting a field organization’s conservation efforts there,” said Jes Lefcourt, director of EarthRanger at Ai2.
EarthRanger is deployed by governments and conservation organizations in 76 countries and 650 protected areas, including nearly every national park in Africa, about a dozen state fish and wildlife departments in the U.S., and many other users across Latin America and Asia.
Four of these partners — Rouxcel Technology, OroraTech, Wildlife Protection Services and Conservation X Labs — are highlighted below.
Rouxcel Technology Saves Rhinos With AI
South African startup Rouxcel Technology’s AI-based RhinoWatches, tapping into EarthRanger, learn endangered black and white rhinos’ behaviors, then alert authorities in real time of any detected abnormalities. These abnormalities can include straying from typical habitats, territorial fighting with other animals and other potentially life-threatening situations.
It’s critical work, as there are only about 28,000 rhinos left in the world, down from 500,000 at the beginning of the 20th century.

Rouxcel, based in Cape Town, has deployed over 1,200 RhinoWatches — trained and optimized using NVIDIA accelerated computing — across more than 40 South African reserves. The startup, which uses the Ai2 EarthRanger platform, protects more than 1.2 million acres of rhino habitats, and has recently expanded to help conservation efforts in Kenya and Namibia.
Looking forward, Rouxcel is developing AI models to help prevent poaching and human-wildlife conflict for more species, including pangolins, a critically endangered species.
OroraTech Monitors Wildfires and Poaching With NVIDIA CUDA, Jetson
OroraTech — a member of the NVIDIA Inception program for cutting-edge startups — uses the EarthRanger platform to protect wildlife in a different way, offering a wildfire detection and monitoring service that fuses satellite imagery and AI to safeguard the environment and prevent poaching.
Combining data from satellites, ground-based cameras, aerial observations and local weather information, OroraTech detects threats to natural habitats and alerts users in real time. The company’s technologies monitor more than 30 million hectares of land that directly impact wildlife in Africa and Australia. That’s nearly the size of the Great Barrier Reef.

OroraTech flies an NVIDIA Jetson module for edge AI and data processing onboard all of its satellite payloads — the instruments, equipment and systems on a satellite designed for performing specific tasks. Through GPU-accelerated image processing, OroraTech achieves exceptional latency, delivering fire notifications to users on the ground as fast as five minutes after image acquisition.
The AI-based fire-detection pipeline uses the NVIDIA cuDNN library of deep neural network primitives and the NVIDIA TensorRT software development kit for thermal anomaly detection and cloud masking in space, leading to high-precision fire detections.
Wildlife Protection Solutions Help Preserve Endangered Species
International nonprofit Wildlife Protection Solutions (WPS) supports more than 250 conservation projects in 50+ countries. Its roughly 3,000 remote cameras deployed across the globe use AI models to provide real-time monitoring of animals and poachers, alerting rangers to intercede before wildlife is harmed.

WPS — which also taps into the EarthRanger platform — harnesses NVIDIA accelerated computing to optimize training and inference of its AI models, which process and analyze 65,000 photos per day.
The WPS tool is free and available on any mobile, tablet or desktop browser, enabling remote monitoring, early alerting and proactive, automated deterrence of wildlife or humans in sensitive areas.
Conservation X Labs Identifies Species From Crowdsourced Images
Seattle-based Conservation X Labs — which is on a mission to prevent the sixth mass extinction, or the dying out of a high percentage of the world’s biodiversity due to natural phenomena and human activity — also uses EarthRanger, including for its Wild Me solution: open-source AI software for the conservation research community.
Wild Me supports over 2,000 researchers across the globe running AI-enabled wildlife population studies for marine and terrestrial species.
In the below video, Wild Me helps researchers classify whale sharks using computer vision:
The crowdsourced database — which currently comprises 14 million photos — lets anyone upload imagery of species. Then, AI foundation models trained using NVIDIA accelerated computing help identify species to ease and accelerate animal population assessments and other research that supports the fight against species extinction.
In addition, Conservation X Labs’ Sentinel technology transforms traditional wildlife monitoring tools — like trail cameras and acoustic recorders — with AI, processing environmental data as it’s collected and providing conservationists with real-time, data-driven insights through satellite and cellular networks.
To date, Sentinel devices have delivered about 100,000 actionable insights for 80 different species. For example, see how the technology flags a limping panther, so wildlife protectors could rapidly step in to offer aid:
Learn more about how NVIDIA technologies bolster conservation and environmental initiatives at NVIDIA GTC, a global AI conference running March 17-21 in San Jose, California, including at sessions on how AI is supercharging Antarctic flora monitoring, enhancing a digital twin of the Great Barrier Reef and helping mitigate urban climate change.
Featured video courtesy of Conservation X Labs.
How healthcare organizations are using generative AI search and agents
Google Cloud and healthcare organizations share new partnerships at HIMSS 2025.