Motion vectors — which are common in popular video formats — can be used to efficiently track regions of interest across multiple frames of video to generate motion-aware masks that improve video representation learning.
Intelligently search Drupal content using Amazon Kendra
Amazon Kendra is an intelligent search service powered by machine learning (ML). Amazon Kendra helps you easily aggregate content from a variety of content repositories into a centralized index that lets you quickly search all your enterprise data and find the most accurate answer. Drupal is content management software used to build many of the websites and applications we use every day. It offers a strong feature set, including straightforward content authoring, reliable performance, and security. Many organizations use Drupal to store their content. A key requirement for many of these customers is the ability to easily and securely find accurate information across all the documents in the data source.
With the Amazon Kendra Drupal connector, you can index Drupal content, filter the types of custom content you want to index, and easily search through Drupal content using Amazon Kendra intelligent search.
This post shows you how to use the Amazon Kendra Drupal connector to configure the connector as a data source for your Amazon Kendra index and search your Drupal documents. Based on the configuration of the Drupal connector, you can synchronize the connector to crawl and index different types of Drupal content such as blogs and wikis. The connector also ingests the access control list (ACL) information for each file. The ACL information is used for user context filtering, where search results for a query are filtered by what a user has authorized access to.
Prerequisites
To try out the Amazon Kendra connector for Drupal using this post as a reference, you need the following:
- An AWS account with privileges to create AWS Identity and Access Management (IAM) roles and policies. For more information, see Overview of access management: Permissions and policies and IAM roles for Drupal data sources.
- Basic knowledge of AWS and working knowledge of Drupal administration.
- Drupal set up with a user with the Administrator role. We will store the administrator user name and password in AWS Secrets Manager (a sketch for creating this secret programmatically follows this list).
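If you prefer to create this secret programmatically rather than through the console wizard later in this walkthrough, the following Boto3 sketch shows one way to do it. The secret name and the key names inside the secret are illustrative assumptions; adjust them to your setup.

import json
import boto3

secretsmanager = boto3.client("secretsmanager")

# store the Drupal administrator credentials as a JSON secret
# (secret name and key names are assumptions; the console flow below can also create this for you)
secretsmanager.create_secret(
    Name="AmazonKendra-drupal-credentials",
    SecretString=json.dumps({
        "username": "<drupal-admin-user>",
        "password": "<drupal-admin-password>"
    }),
)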
Configure the data source using the Amazon Kendra connector for Drupal
To add a data source to your Amazon Kendra index using the Drupal connector, you can use an existing index or create a new index. Then complete the following steps. For more information on this topic, refer to the Amazon Kendra Developer Guide.
- On the Amazon Kendra console, open your index and choose Data sources in the navigation pane.
- Choose Add data source.
- Under Drupal, choose Add connector.
- In the Specify data source details section, enter a name and description and choose Next.
- On the Define access and security section, for Drupal Host URL, enter the Drupal site URL.
- To configure the SSL certificates, you can create a self-signed certificate for this setup using the openssl x509 -in mydrupalsite.pem -out drupal.crt command and store the certificate in an Amazon Simple Storage Service (Amazon S3) bucket. For more details on generating a private key and the certificate, refer to Generating Certificates.
- Choose Browse S3 and choose the S3 bucket with the SSL certificate.
- Under Authentication, you have two options:
- Use Secrets Manager to create new Drupal authentication credentials. You need a Drupal admin user name and password (additionally, a client ID and client secret for OAuth 2.0 authentication).
- Use an existing Secrets Manager secret that has the Drupal authentication credentials you want the connector to access (additionally, a client ID and client secret for OAuth 2.0 authentication).
- Choose Save and add secret.
- For IAM role, choose Create a new role or choose an existing IAM role configured with appropriate IAM policies to access the Secrets Manager secret, Amazon Kendra index, and data source.
Refer to IAM roles for data sources for the required permissions for the IAM role.
- Choose Next.
- In the Configure sync settings section, select Articles, Basic pages, Basic blocks, Custom content types, and Custom Blocks along with options to crawl comments and attachments as needed.
- Optionally, enter the include/exclude patterns for the entity titles.
- Provide information about your sync scope (full or delta only) and specify the run schedule.
- Choose Next.
- In the Set field mappings section, add custom Drupal fields you want to sync and their respective Amazon Kendra field mappings. The required fields are pre-mapped by Amazon Kendra.
- Choose Next.
- Review the configuration settings and save the data source.
- Choose Sync now on the created data source to start data synchronization with the Amazon Kendra Index.
The time required to crawl and sync the contents into Amazon Kendra varies based on the volume of content and the throughput.
You can now search the indexed Drupal content using the search console or a search application. Optionally, you can search with ACL with the following additional steps.
- Go to the index page that you created and on the User access control tab, choose Edit settings.
- Under Access control settings, select Yes, keep the default values for Username and Groups, choose JSON for Token type, and keep the user-group expansion as None.
- On the next page, retain the default values (or change them based on your capacity requirements) and choose Update.
Perform intelligent search with Amazon Kendra
Before you try searching on the Amazon Kendra console or using the API, make sure that the data source sync is complete. To check, view the data sources and verify if the last sync was successful.
- To start your search, on the Amazon Kendra console, choose Search indexed content in the navigation pane.
You’re redirected to the Amazon Kendra search console. Now you can search information from the Drupal documents you indexed using Amazon Kendra.
- For this post, we search for a document stored in the Drupal data source.
- Expand Test query with an access token and choose Apply token.
- For Username, enter the email address associated with your Drupal account.
- Choose Apply.
Now the user can only see the content they have access to, based on the user name or groups specified. In our example, the Drupal user with the test@amazon.com email doesn’t have access to any documents on Drupal, so none are displayed.
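Outside the console, you can exercise the same token-based filtering through the Query API. The following Boto3 sketch is illustrative only; the index ID and token value are placeholders, and the token format must match the token configuration of your index.

import boto3

kendra = boto3.client("kendra")

response = kendra.query(
    IndexId="<your-index-id>",                # placeholder
    QueryText="security guidelines",
    UserContext={"Token": "<access-token>"},  # token issued for the Drupal user
)

# only documents the user is authorized to see are returned
for item in response["ResultItems"]:
    print(item.get("DocumentTitle", {}).get("Text"), item.get("DocumentURI"))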
Limitations
Note the following limitations when using this solution:
- Content types (such as article or basic page) that aren’t associated with any view cannot be crawled.
- If an administrator doesn’t have access to a block, then you can’t crawl the data from the block.
- The document body for articles, basic pages, basic blocks, user-defined content types, and user-defined block types is displayed in HTML format. If the HTML content is not well formed, the HTML tags appear in the document body and are therefore visible in the Amazon Kendra search results. The same applies to comments on articles, basic pages, basic blocks, user-defined content types, and user-defined block types.
- A content type or block type without a description or body will not be ingested into the Amazon Kendra index, because the Amazon Kendra SDK validates these fields even though Drupal allows you to create content types without them. In that case, only the comments and attachments of the respective content types or block types (if they exist) are ingested into the Amazon Kendra index.
Clean up
To avoid incurring future costs, clean up the resources you created as part of this solution. If you created a new Amazon Kendra index while testing this solution, delete it. If you only added a new data source using the Amazon Kendra connector for Drupal, delete that data source. Delete any IAM users created.
Conclusion
With the Amazon Kendra Drupal connector, your organization can search contents stored in a Drupal site securely using intelligent search powered by Amazon Kendra. In this post, we introduced you to the integration, but there are many additional features that we didn’t cover, such as the following:
- You can map additional fields to Amazon Kendra index attributes and enable them for faceting, search, and display in the search results
- You can integrate the Drupal data source with the Custom Document Enrichment (CDE) capability in Amazon Kendra to perform additional attribute mapping logic and even custom content transformation during ingestion
To learn more about the possibilities with Drupal, refer to the Amazon Kendra Developer Guide.
For more information on other Amazon Kendra built-in connectors for popular data sources, refer to the Amazon Kendra Connectors page.
About the authors
Channa Basavaraja is a Senior Solutions Architect at AWS with over two decades of experience building distributed business solutions. His areas of depth span machine learning, app/mobile development, event-driven architecture, and IoT/edge computing.
Yuanhua Wang is a software engineer at AWS with more than 15 years of experience in the technology industry. His interests are software architecture and build tools on cloud computing.
Intuitivo achieves higher throughput while saving on AI/ML costs using AWS Inferentia and PyTorch
This is a guest post by Jose Benitez, Founder and Director of AI, and Matias Ponchon, Head of Infrastructure at Intuitivo.
Intuitivo, a pioneer in retail innovation, is revolutionizing shopping with its cloud-based AI and machine learning (AI/ML) transactional processing system. This technology enables us to operate millions of autonomous points of purchase (A-POPs) concurrently, transforming the way customers shop. Our solution outpaces traditional vending machines and alternatives with easy setup and maintenance-free operation. Our innovative A-POPs (or vending machines) deliver enhanced customer experiences at ten times lower cost because of the performance and cost advantages AWS Inferentia delivers. Inferentia has enabled us to run our You Only Look Once (YOLO) computer vision models five times faster than our previous solution, supporting seamless, real-time shopping experiences for our customers. Inferentia has also helped us reduce costs by 95 percent compared to our previous solution. In this post, we cover our use case, challenges, and a brief overview of our solution using Inferentia.
The changing retail landscape and need for A-POP
The retail landscape is evolving rapidly, and consumers expect the same easy-to-use and frictionless experiences they are used to when shopping digitally. To effectively bridge the gap between the digital and physical world, and to meet the changing needs and expectations of customers, a transformative approach is required. At Intuitivo, we believe that the future of retail lies in creating highly personalized, AI-powered, and computer vision-driven autonomous points of purchase (A-POP). This technological innovation brings products within arm’s reach of customers. Not only does it put customers’ favorite items at their fingertips, but it also offers them a seamless shopping experience, devoid of long lines or complex transaction processing systems. We’re excited to lead this exciting new era in retail.
With our cutting-edge technology, retailers can quickly and efficiently deploy thousands of A-POPs. Scaling has always been a daunting challenge for retailers, mainly due to the logistic and maintenance complexities associated with expanding traditional vending machines or other solutions. However, our camera-based solution, which eliminates the need for weight sensors, RFID, or other high-cost sensors, requires no maintenance and is significantly cheaper. This enables retailers to efficiently establish thousands of A-POPs, providing customers with an unmatched shopping experience while offering retailers a cost-effective and scalable solution.
Using cloud inference for real-time product identification
While designing a camera-based product recognition and payment system, we had to decide whether to run it on the edge or in the cloud. After considering several architectures, we designed a system that uploads videos of the transactions to the cloud for processing.
Our end users start a transaction by scanning the A-POP’s QR code, which triggers the A-POP to unlock and then customers grab what they want and go. Preprocessed videos of these transactions are uploaded to the cloud. Our AI-powered transaction pipeline automatically processes these videos and charges the customer’s account accordingly.
The following diagram shows the architecture of our solution.
Unlocking high-performance and cost-effective inference using AWS Inferentia
As retailers look to scale operations, the cost of A-POPs becomes a consideration. At the same time, providing a seamless real-time shopping experience for end users is paramount. Our AI/ML research team focuses on identifying the best computer vision (CV) models for our system. We were then faced with the challenge of simultaneously optimizing the AI/ML operations for performance and cost.
We deploy our models on Amazon EC2 Inf1 instances powered by Inferentia, Amazon’s first ML silicon designed to accelerate deep learning inference workloads. Inferentia has been shown to reduce inference costs significantly. We used the AWS Neuron SDK—a set of software tools used with Inferentia—to compile and optimize our models for deployment on EC2 Inf1 instances.
The code snippet that follows shows how to compile a YOLO model with Neuron. The code works seamlessly with PyTorch, and functions such as torch.jit.trace() and neuron.trace() record the model’s operations on an example input during the forward pass to build a static IR graph.
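A minimal sketch of that compilation step follows; the model loader and the 640x640 input shape are assumptions for illustration rather than Intuitivo’s actual code.

import torch
import torch_neuron  # registers the torch.neuron namespace

# load a trained YOLO model (placeholder; substitute your own model loading code)
model = load_yolo_model()
model.eval()

# example input with an assumed 640x640 image size
example_input = torch.rand(1, 3, 640, 640)

# trace the model's forward pass and compile it for Inferentia
model_neuron = torch.neuron.trace(model, example_inputs=[example_input])

# save the compiled artifact for deployment on Inf1 instances
model_neuron.save("yolo_neuron.pt")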
We migrated our compute-heavy models to Inf1. By using AWS Inferentia, we achieved the throughput and performance to match our business needs. Adopting Inferentia-based Inf1 instances in the MLOps lifecycle was a key to achieving remarkable results:
- Performance improvement: Our large computer vision models now run five times faster, achieving over 120 frames per second (FPS), allowing for seamless, real-time shopping experiences for our customers. Furthermore, the ability to process at this frame rate not only enhances transaction speed, but also enables us to feed more information into our models. This increase in data input significantly improves the accuracy of product detection within our models, further boosting the overall efficacy of our shopping systems.
- Cost savings: We cut inference costs by 95 percent, which significantly improved the economics of the architecture supporting our A-POPs.
Data parallel inference was easy with AWS Neuron SDK
To improve the performance of our inference workloads and extract maximum performance from Inferentia, we wanted to use all available NeuronCores in the Inferentia accelerator. Achieving this was easy with the built-in tools and APIs from the Neuron SDK: we used the torch.neuron.DataParallel() API. We’re currently using inf1.2xlarge instances, which have one Inferentia accelerator with four NeuronCores, so we use torch.neuron.DataParallel() to fully utilize the Inferentia hardware and all available NeuronCores. This Python function implements data parallelism at the module level on models created by the PyTorch Neuron API. Data parallelism is a form of parallelization across multiple devices or cores (NeuronCores for Inferentia), referred to as nodes. Each node contains the same model and parameters, but data is distributed across the different nodes. By distributing the data across multiple nodes, data parallelism reduces the total processing time of large batch size inputs compared to sequential processing. Data parallelism works best for models in latency-sensitive applications that have large batch size requirements.
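A minimal sketch of this pattern is shown below; the compiled model path and batch size are assumptions.

import torch
import torch_neuron

# load the model previously compiled with torch.neuron.trace (path is an assumption)
model_neuron = torch.jit.load("yolo_neuron.pt")

# replicate the model across all available NeuronCores (four on inf1.2xlarge)
model_parallel = torch.neuron.DataParallel(model_neuron)

# the batch is automatically split across the NeuronCores
batch = torch.rand(8, 3, 640, 640)
results = model_parallel(batch)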
Looking ahead: Accelerating retail transformation with foundation models and scalable deployment
As we venture into the future, the impact of foundation models on the retail industry cannot be overstated. Foundation models can make a significant difference in product labeling. The ability to quickly and accurately identify and categorize different products is crucial in a fast-paced retail environment. With modern transformer-based models, we can deploy a greater diversity of models to serve more of our AI/ML needs with higher accuracy, improving the experience for users and without having to waste time and money training models from scratch. By harnessing the power of foundation models, we can accelerate the process of labeling, enabling retailers to scale their A-POP solutions more rapidly and efficiently.
We have begun implementing Segment Anything Model (SAM), a vision transformer foundation model that can segment any object in any image (we will discuss this further in another blog post). SAM allows us to accelerate our labeling process with unparalleled speed. SAM is very efficient, able to process approximately 62 times more images than a human can manually create bounding boxes for in the same timeframe. SAM’s output is used to train a model that detects segmentation masks in transactions, opening up a window of opportunity for processing millions of images exponentially faster. This significantly reduces training time and cost for product planogram models.
Our product and AI/ML research teams are excited to be at the forefront of this transformation. The ongoing partnership with AWS and our use of Inferentia in our infrastructure will ensure that we can deploy these foundation models cost effectively. As early adopters, we’re working with the new AWS Inferentia 2-based instances. Inf2 instances are built for today’s generative AI and large language model (LLM) inference acceleration, delivering higher performance and lower costs. Inf2 will enable us to empower retailers to harness the benefits of AI-driven technologies without breaking the bank, ultimately making the retail landscape more innovative, efficient, and customer-centric.
As we continue to migrate more models to Inferentia and Inferentia2, including transformers-based foundational models, we are confident that our alliance with AWS will enable us to grow and innovate alongside our trusted cloud provider. Together, we will reshape the future of retail, making it smarter, faster, and more attuned to the ever-evolving needs of consumers.
Conclusion
In this post, we’ve highlighted our transformational journey using AWS Inferentia for our AI/ML transactional processing system. This partnership has led to a five times increase in processing speed and a 95 percent reduction in inference costs compared to our previous solution, facilitating a real-time, seamless shopping experience and changing how the retail industry approaches transactions.
If you’re interested in learning more about how Inferentia can help you save costs while optimizing performance for your inference applications, visit the Amazon EC2 Inf1 instances and Amazon EC2 Inf2 instances product pages. AWS provides various sample codes and getting started resources for Neuron SDK that you can find on the Neuron samples repository.
About the Authors
Matias Ponchon is the Head of Infrastructure at Intuitivo. He specializes in architecting secure and robust applications. His extensive experience in FinTech and blockchain companies, coupled with his strategic mindset, helps him design innovative solutions. He is deeply committed to excellence, consistently delivering resilient solutions that push the boundaries of what’s possible.
Jose Benitez is the Founder and Director of AI at Intuitivo, specializing in the development and implementation of computer vision applications. He leads a talented Machine Learning team, nurturing an environment of innovation, creativity, and cutting-edge technology. In 2022, Jose was recognized as an ‘Innovator Under 35’ by MIT Technology Review, a testament to his groundbreaking contributions to the field. This dedication extends beyond accolades and into every project he undertakes, showcasing a relentless commitment to excellence and innovation.
Diwakar Bansal is an AWS Senior Specialist focused on business development and go-to-market for Gen AI and Machine Learning accelerated computing services. Previously, Diwakar has led product definition, global business development, and marketing of technology products for IoT, Edge Computing, and Autonomous Driving focusing on bringing AI and Machine Learning to these domains.
Empower your business users to extract insights from company documents using Amazon SageMaker Canvas Generative AI
Enterprises seek to harness the potential of Machine Learning (ML) to solve complex problems and improve outcomes. Until recently, building and deploying ML models required deep levels of technical and coding skills, including tuning ML models and maintaining operational pipelines. Since its introduction in 2021, Amazon SageMaker Canvas has enabled business analysts to build, deploy, and use a variety of ML models – including tabular, computer vision, and natural language processing – without writing a line of code. This has accelerated the ability of enterprises to apply ML to use cases such as time-series forecasting, customer churn prediction, sentiment analysis, industrial defect detection, and many others.
As announced on October 5, 2023, SageMaker Canvas expanded its support of models to foundation models (FMs) – large language models used to generate and summarize content. With the October 12, 2023 release, SageMaker Canvas lets users ask questions and get responses that are grounded in their enterprise data. This ensures that results are context-specific, opening up additional use cases where no-code ML can be applied to solve business problems. For example, business teams can now formulate responses consistent with an organization’s specific vocabulary and tenets, and can more quickly query lengthy documents to get responses specific and grounded to the contents of those documents. All of this is performed in a private and secure manner, ensuring that all sensitive data is accessed with proper governance and safeguards.
To get started, a cloud administrator configures and populates Amazon Kendra indexes with enterprise data as data sources for SageMaker Canvas. Canvas users select the index where their documents are, and can ideate, research, and explore knowing that the output will always be backed by their sources-of-truth. SageMaker Canvas uses state-of-the-art FMs from Amazon Bedrock and Amazon SageMaker JumpStart. Conversations can be started with multiple FMs side-by-side, comparing the outputs and truly making generative-AI accessible to everyone.
In this post, we will review the recently released feature, discuss the architecture, and present a step-by-step guide to enable SageMaker Canvas to query documents from your knowledge base, as shown in the following screen capture.
Solution overview
Foundation models can produce hallucinations – responses that are generic, vague, unrelated, or factually incorrect. Retrieval Augmented Generation (RAG) is a frequently used approach to reduce hallucinations. RAG architectures are used to retrieve data from outside of an FM, which is then used to perform in-context learning to answer the user’s query. This ensures that the FM can use data from a trusted knowledge base and use that knowledge to answer users’ questions, reducing the risk of hallucination.
With RAG, the data external to the FM and used to augment user prompts can come from multiple disparate data sources, such as document repositories, databases, or APIs. The first step is to convert your documents and any user queries into a compatible format to perform relevancy semantic search. To make the formats compatible, a document collection, or knowledge library, and user-submitted queries are converted into numerical representations using embedding models.
With this release, RAG functionality is provided in a no-code and seamless manner. Enterprises can enrich the chat experience in Canvas with Amazon Kendra as the underlying knowledge management system. The following diagram illustrates the solution architecture.
Connecting SageMaker Canvas to Amazon Kendra requires a one-time setup, which we describe in detail in Setting up Canvas to query documents. If you haven’t already set up your SageMaker Domain, refer to Onboard to Amazon SageMaker Domain.
As part of the domain configuration, a cloud administrator can choose one or more Kendra indices that the business analyst can query when interacting with the FM through SageMaker Canvas.
After the Kendra indices are hydrated and configured, business analysts use them with SageMaker Canvas by starting a new chat and selecting the Query Documents toggle. SageMaker Canvas then manages the underlying communication between Amazon Kendra and the FM of choice to perform the following operations (a conceptual sketch of this flow follows the list):
- Query the Kendra indices with the question coming from the user.
- Retrieve the snippets (and the sources) from Kendra indices.
- Engineer the prompt by combining the retrieved snippets with the original query so that the foundation model can generate an answer from the retrieved documents.
- Provide the generated answer to the user, along with references to the pages/documents that were used to formulate the response.
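The sketch below illustrates this retrieve-and-prompt flow conceptually with Boto3; it is not Canvas’s internal implementation, and the index ID, question, and prompt wording are assumptions.

import boto3

kendra = boto3.client("kendra")

def retrieve_passages(index_id, question, top_k=3):
    # retrieve the most relevant passages from the Kendra index
    response = kendra.retrieve(IndexId=index_id, QueryText=question)
    return [item["Content"] for item in response["ResultItems"][:top_k]]

def build_prompt(question, passages):
    # engineer the prompt by combining the snippets with the original query
    context = "\n\n".join(passages)
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

passages = retrieve_passages("<your-index-id>", "What is our travel reimbursement policy?")
prompt = build_prompt("What is our travel reimbursement policy?", passages)
# the prompt can then be sent to the FM of choice, for example through Amazon Bedrock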
Setting up Canvas to query documents
In this section, we will walk you through the steps to set up Canvas to query documents served through Kendra indexes. You should have the following prerequisites:
- SageMaker Domain setup – Onboard to Amazon SageMaker Domain
- Create a Kendra index (or more than one)
- Set up the Kendra Amazon S3 connector – follow the Amazon S3 Connector documentation – and upload PDF files and other documents to the Amazon S3 bucket associated with the Kendra index
- Set up IAM so that Canvas has the appropriate permissions, including those required for calling Amazon Bedrock and/or SageMaker endpoints – follow the Set-up Canvas Chat documentation
Now you can update the Domain so that it can access the desired indices. On the SageMaker console, for the given Domain, select Edit under the Domain Settings tab. At the Canvas Settings step, enable the Enable query documents with Amazon Kendra toggle. Once activated, choose one or more Kendra indices that you want to use with Canvas.
That’s all that’s needed to configure the Canvas Query Documents feature. Users can now jump into a chat within Canvas and start using the knowledge bases that have been attached to the Domain through the Kendra indexes. The maintainers of the knowledge base can continue to update the source of truth, and with the syncing capability in Kendra, chat users automatically benefit from up-to-date information in a seamless manner.
Using the Query Documents feature for chat
As a SageMaker Canvas user, the Query Documents feature can be accessed from within a chat. To start the chat session, click or search for the “Generate, extract and summarize content” button from the Ready-to-use models tab in SageMaker Canvas.
Once there, you can turn on and off Query Documents with the toggle at the top of the screen. Check out the information prompt to learn more about the feature.
When Query Documents is enabled, you can choose among a list of Kendra indices enabled by the cloud administrator.
You can select an index when starting a new chat. You can then ask a question in the UX with knowledge being automatically sourced from the selected index. Note that after a conversation has started against a specific index, it is not possible to switch to another index.
For the questions asked, the chat will show the answer generated by the FM along with the source documents that contributed to generating the answer. When clicking any of the source documents, Canvas opens a preview of the document, highlighting the excerpt used by the FM.
Conclusion
Conversational AI has immense potential to transform customer and employee experience by providing a human-like assistant with natural and intuitive interactions such as:
- Performing research on a topic, or searching and browsing the organization’s knowledge base
- Summarizing volumes of content to rapidly gather insights
- Searching for entities, sentiment, PII, and other useful data, increasing the business value of unstructured content
- Generating drafts for documents and business correspondence
- Creating knowledge articles from disparate internal sources (incidents, chat logs, wikis)
The innovative integration of chat interfaces, knowledge retrieval, and FMs enables enterprises to provide accurate, relevant responses to user questions by using their domain knowledge and sources-of-truth.
By connecting SageMaker Canvas to knowledge bases in Amazon Kendra, organizations can keep their proprietary data within their own environment while still benefiting from state-of-the-art natural language capabilities of FMs. With the launch of SageMaker Canvas’s Query Documents feature, we are making it easy for any enterprise to use LLMs and their enterprise knowledge as source-of-truth to power a secure chat experience. All this functionality is available in a no-code format, allowing businesses to avoid handling the repetitive and non-specialized tasks.
To learn more about SageMaker Canvas and how it helps make it easier for everyone to start with Machine Learning, check out the SageMaker Canvas announcement. Learn more about how SageMaker Canvas helps foster collaboration between data scientists and business analysts by reading the Build, Share & Deploy post. Finally, to learn how to create your own Retrieval Augmented Generation workflow, refer to SageMaker JumpStart RAG.
References
Lewis, P., Perez, E., Piktus, A., Petroni, F., Karpukhin, V., Goyal, N., Küttler, H., Lewis, M., Yih, W., Rocktäschel, T., Riedel, S., Kiela, D. (2020). Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks. Advances in Neural Information Processing Systems, 33, 9459-9474.
About the Authors
Davide Gallitelli is a Senior Specialist Solutions Architect for AI/ML. He is based in Brussels and works closely with customers all around the globe that are looking to adopt Low-Code/No-Code Machine Learning technologies, and Generative AI. He has been a developer since he was very young, starting to code at the age of 7. He started learning AI/ML at university, and has fallen in love with it since then.
Bilal Alam is an Enterprise Solutions Architect at AWS with a focus on the Financial Services industry. On most days Bilal is helping customers with building, uplifting and securing their AWS environment to deploy their most critical workloads. He has extensive experience in Telco, networking, and software development. More recently, he has been looking into using AI/ML to solve business problems.
Pashmeen Mistry is a Senior Product Manager at AWS. Outside of work, Pashmeen enjoys adventurous hikes, photography, and spending time with his family.
Dan Sinnreich is a Senior Product Manager at AWS, helping to democratize low-code/no-code machine learning. Previous to AWS, Dan built and commercialized enterprise SaaS platforms and time-series models used by institutional investors to manage risk and construct optimal portfolios. Outside of work, he can be found playing hockey, scuba diving, and reading science fiction.
Detection and high-frequency monitoring of methane emission point sources using Amazon SageMaker geospatial capabilities
Methane (CH4) is a major anthropogenic greenhouse gas that’s a by-product of oil and gas extraction, coal mining, large-scale animal farming, and waste disposal, among other sources. The global warming potential of CH4 is 86 times that of CO2 and the Intergovernmental Panel on Climate Change (IPCC) estimates that methane is responsible for 30 percent of observed global warming to date. Rapidly reducing leakage of CH4 into the atmosphere represents a critical component in the fight against climate change. In 2021, the U.N. introduced The Global Methane Pledge at the Climate Change Conference (COP26), with a goal to take “fast action on methane to keep a 1.5C future within reach.” The Pledge has 150 signatories including the U.S. and EU.
Early detection and ongoing monitoring of methane sources is a key component of meaningful action on methane and is therefore becoming a concern for policy makers and organizations alike. Implementing affordable, effective methane detection solutions at scale – such as on-site methane detectors or aircraft-mounted spectrometers – is challenging, as they are often impractical or prohibitively expensive. Remote sensing using satellites, on the other hand, can provide the global-scale, high-frequency, and cost-effective detection functionality that stakeholders desire.
In this blog post, we show you how you can use Sentinel-2 satellite imagery hosted on the AWS Registry of Open Data in combination with Amazon SageMaker geospatial capabilities to detect point sources of CH4 emissions and monitor them over time. Drawing on recent findings from the earth observation literature, you will learn how to implement a custom methane detection algorithm and use it to detect and monitor methane leakage from a variety of sites across the globe. This post includes accompanying code on GitHub that provides additional technical detail and helps you to get started with your own methane monitoring solution.
Traditionally, running complex geospatial analyses was a difficult, time-consuming, and resource-intensive undertaking. Amazon SageMaker geospatial capabilities make it easier for data scientists and machine learning engineers to build, train, and deploy models using geospatial data. Using SageMaker geospatial capabilities, you can efficiently transform or enrich large-scale geospatial datasets, accelerate model building with pre-trained machine learning (ML) models, and explore model predictions and geospatial data on an interactive map using 3D accelerated graphics and built-in visualization tools.
Remote sensing of methane point sources using multispectral satellite imagery
Satellite-based methane sensing approaches typically rely on the unique transmittance characteristics of CH4. In the visible spectrum, CH4 has transmittance values equal or close to 1, meaning it’s undetectable by the naked eye. Across certain wavelengths, however, methane does absorb light (transmittance <1), a property which can be exploited for detection purposes. For this, the short wavelength infrared (SWIR) spectrum (1500–2500 nm spectral range) is typically chosen, which is where CH4 is most detectable. Hyper- and multispectral satellite missions (that is, those with optical instruments that capture image data within multiple wavelength ranges (bands) across the electromagnetic spectrum) cover these SWIR ranges and therefore represent potential detection instruments. Figure 1 plots the transmittance characteristics of methane in the SWIR spectrum and the SWIR coverage of various candidate multispectral satellite instruments (adapted from this study).
Figure 1 – Transmittance characteristics of methane in the SWIR spectrum and coverage of Sentinel-2 multi-spectral missions
Many multispectral satellite missions are limited either by a low revisit frequency (for example, PRISMA Hyperspectral at approximately 16 days) or by low spatial resolution (for example, Sentinel 5 at 7.5 km x 7.5 km). The cost of accessing data is an additional challenge: some dedicated constellations operate as commercial missions, potentially making CH4 emission insights less readily available to researchers, decision makers, and other concerned parties due to financial constraints. ESA’s Sentinel-2 multispectral mission, which this solution is based on, strikes an appropriate balance between revisit rate (approximately 5 days), spatial resolution (approximately 20 m) and open access (hosted on the AWS Registry of Open Data).
Sentinel-2 has two bands that cover the SWIR spectrum (at a 20 m resolution): band-11 (1610 nm central wavelength) and band-12 (2190 nm central wavelength). Both bands are suitable for methane detection, while band-12 has significantly higher sensitivity to CH4 absorption (see Figure 1). Intuitively, there are two possible approaches to using this SWIR reflectance data for methane detection. First, you could focus on just a single SWIR band (ideally the one that is most sensitive to CH4 absorption) and compute the pixel-by-pixel difference in reflectance across two different satellite passes. Alternatively, you could use data from a single satellite pass, comparing the two adjacent spectral SWIR bands, which have similar surface and aerosol reflectance properties but different methane absorption characteristics.
The detection method we implement in this blog post combines both approaches. We draw on recent findings from the earth observation literature and compute the fractional change in top-of-the-atmosphere (TOA) reflectance Δρ (that is, reflectance measured by Sentinel-2 including contributions from atmospheric aerosols and gases) between two satellite passes and the two SWIR bands; one baseline pass where no methane is present (base) and one monitoring pass where an active methane point source is suspected (monitor). Mathematically, this can be expressed as follows:
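Written out explicitly (this reconstruction mirrors the fractional reflectance change computed in the code later in this post):

\[
\Delta\rho \;=\; \frac{c_{\mathrm{monitor}}\,\rho_{B12}^{\mathrm{monitor}} - \rho_{B11}^{\mathrm{monitor}}}{\rho_{B11}^{\mathrm{monitor}}} \;-\; \frac{c_{\mathrm{base}}\,\rho_{B12}^{\mathrm{base}} - \rho_{B11}^{\mathrm{base}}}{\rho_{B11}^{\mathrm{base}}}
\]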
where ρ is the TOA reflectance as measured by Sentinel-2, and c_monitor and c_base are computed by regressing the TOA reflectance values of band-12 against those of band-11 across the entire scene (that is, ρ_B11 = c * ρ_B12). For more details, refer to this study on high-frequency monitoring of anomalous methane point sources with multispectral Sentinel-2 satellite observations.
Implement a methane detection algorithm with SageMaker geospatial capabilities
To implement the methane detection algorithm, we use the SageMaker geospatial notebook within Amazon SageMaker Studio. The geospatial notebook kernel is pre-equipped with essential geospatial libraries such as GDAL, GeoPandas, Shapely, xarray, and Rasterio, enabling direct visualization and processing of geospatial data within the Python notebook environment. See the getting started guide to learn how to start using SageMaker geospatial capabilities.
SageMaker provides a purpose-built API designed to facilitate the retrieval of satellite imagery through a consolidated interface using the SearchRasterDataCollection API call. SearchRasterDataCollection relies on the following input parameters:
- Arn: The Amazon Resource Name (ARN) of the queried raster data collection
- AreaOfInterest: A polygon object (in GeoJSON format) representing the region of interest for the search query
- TimeRangeFilter: Defines the time range of interest, denoted as {StartTime: <string>, EndTime: <string>}
- PropertyFilters: Supplementary property filters, such as specifications for maximum acceptable cloud cover, can also be incorporated
This method supports querying from various raster data sources, which can be explored by calling ListRasterDataCollections. Our methane detection implementation uses Sentinel-2 satellite imagery, which can be globally referenced using the following ARN: arn:aws:sagemaker-geospatial:us-west-2:378778860802:raster-data-collection/public/nmqj48dcu3g7ayw8.
This ARN represents Sentinel-2 imagery, which has been processed to Level 2A (surface reflectance, atmospherically corrected). For methane detection purposes, we will use top-of-atmosphere (TOA) reflectance data (Level 1C), which doesn’t include the surface level atmospheric corrections that would make changes in aerosol composition and density (that is, methane leaks) undetectable.
To identify potential emissions from a specific point source, we need two input parameters: the coordinates of the suspected point source and a designated timestamp for methane emission monitoring. Given that the SearchRasterDataCollection API uses polygons or multi-polygons to define an area of interest (AOI), our approach involves expanding the point coordinates into a bounding box first and then using that polygon to query for Sentinel-2 imagery using SearchRasterDataCollection.
In this example, we monitor a known methane leak originating from an oil field in Northern Africa. This is a standard validation case in the remote sensing literature and is referenced, for example, in this study. A fully executable code base is provided on the amazon-sagemaker-examples GitHub repository. Here, we highlight only selected code sections that represent the key building blocks for implementing a methane detection solution with SageMaker geospatial capabilities. See the repository for additional details.
We start by initializing the coordinates and target monitoring date for the example case.
#coordinates and date for North Africa oil field
#see here for reference: https://doi.org/10.5194/amt-14-2771-2021
point_longitude = 5.9053
point_latitude = 31.6585
target_date = '2019-11-20'
#size of bounding box in each direction around point
distance_offset_meters = 1500
The following code snippet generates a bounding box for the given point coordinates and then performs a search for the available Sentinel-2 imagery based on the bounding box and the specified monitoring date:
import math
from shapely import geometry

def bbox_around_point(lon, lat, distance_offset_meters):
    #Equatorial radius (m) taken from https://nssdc.gsfc.nasa.gov/planetary/factsheet/earthfact.html
    earth_radius_meters = 6378137
    lat_offset = math.degrees(distance_offset_meters / earth_radius_meters)
    lon_offset = math.degrees(distance_offset_meters / (earth_radius_meters * math.cos(math.radians(lat))))
    return geometry.Polygon([
        [lon - lon_offset, lat - lat_offset],
        [lon - lon_offset, lat + lat_offset],
        [lon + lon_offset, lat + lat_offset],
        [lon + lon_offset, lat - lat_offset],
        [lon - lon_offset, lat - lat_offset],
    ])
#generate bounding box and extract polygon coordinates
aoi_geometry = bbox_around_point(point_longitude, point_latitude, distance_offset_meters)
aoi_polygon_coordinates = geometry.mapping(aoi_geometry)['coordinates']
#set search parameters
search_params = {
    "Arn": "arn:aws:sagemaker-geospatial:us-west-2:378778860802:raster-data-collection/public/nmqj48dcu3g7ayw8", # Sentinel-2 data
    "RasterDataCollectionQuery": {
        "AreaOfInterest": {
            "AreaOfInterestGeometry": {
                "PolygonGeometry": {
                    "Coordinates": aoi_polygon_coordinates
                }
            }
        },
        "TimeRangeFilter": {
            "StartTime": "{}T00:00:00Z".format(as_iso_date(target_date)),
            "EndTime": "{}T23:59:59Z".format(as_iso_date(target_date))
        }
    },
}
#query raster data using SageMaker geospatial capabilities
sentinel2_items = geospatial_client.search_raster_data_collection(**search_params)
The response contains a list of matching Sentinel-2 items and their corresponding metadata. These include Cloud-Optimized GeoTIFFs (COG) for all Sentinel-2 bands, as well as thumbnail images for a quick preview of the visual bands of the image. Naturally, it’s also possible to access the full-resolution satellite image (RGB plot), shown in Figure 2 that follows.
Figure 2 – Satellite image (RGB plot) of AOI
As previously detailed, our detection approach relies on fractional changes in top-of-the-atmosphere (TOA) SWIR reflectance. For this to work, the identification of a good baseline is crucial. Finding a good baseline can quickly become a tedious process that involves plenty of trial and error. However, good heuristics can go a long way in automating this search process. A search heuristic that has worked well for cases investigated in the past is as follows: for the past day_offset=n days, retrieve all satellite imagery, remove any clouds, and clip the images to the AOI in scope. Then compute the average band-12 reflectance across the AOI. Return the Sentinel tile ID of the image with the highest average reflectance in band-12.
This logic is implemented in the following code excerpt. Its rationale relies on the fact that band-12 is highly sensitive to CH4 absorption (see Figure 1). A greater average reflectance value corresponds to a lower absorption from sources such as methane emissions and therefore provides a strong indication for an emission free baseline scene.
def approximate_best_reference_date(lon, lat, date_to_monitor, distance_offset=1500, cloud_mask=True, day_offset=30):
    #initialize AOI and other parameters
    aoi_geometry = bbox_around_point(lon, lat, distance_offset)
    BAND_12_SWIR22 = "B12"
    max_mean_swir = None
    ref_s2_tile_id = None
    ref_target_date = date_to_monitor
    #loop over n=day_offset previous days
    for day_delta in range(-1 * day_offset, 0):
        date_time_obj = datetime.strptime(date_to_monitor, '%Y-%m-%d')
        target_date = (date_time_obj + timedelta(days=day_delta)).strftime('%Y-%m-%d')
        #get Sentinel-2 tiles for current date
        s2_tiles_for_target_date = get_sentinel2_meta_data(target_date, aoi_geometry)
        #loop over available tiles for current date
        for s2_tile_meta in s2_tiles_for_target_date:
            s2_tile_id_to_test = s2_tile_meta['Id']
            #retrieve cloud-masked (optional) L1C band 12
            target_band_data = get_s2l1c_band_data_xarray(s2_tile_id_to_test, BAND_12_SWIR22, clip_geometry=aoi_geometry, cloud_mask=cloud_mask)
            #compute mean reflectance of SWIR band
            mean_swir = target_band_data.sum() / target_band_data.count()
            #ensure the visible/non-clouded area is adequately large
            visible_area_ratio = target_band_data.count() / (target_band_data.shape[1] * target_band_data.shape[2])
            if visible_area_ratio <= 0.7: #<-- ensure acceptable cloud cover
                continue
            #update maximum ref_s2_tile_id and ref_target_date if applicable
            if max_mean_swir is None or mean_swir > max_mean_swir:
                max_mean_swir = mean_swir
                ref_s2_tile_id = s2_tile_id_to_test
                ref_target_date = target_date
    return (ref_s2_tile_id, ref_target_date)
Using this method allows us to approximate a suitable baseline date and corresponding Sentinel-2 tile ID. Sentinel-2 tile IDs carry information on the mission ID (Sentinel-2A/Sentinel-2B), the unique tile number (such as 32SKA), and the date the image was taken, among other information, and uniquely identify an observation (that is, a scene). In our example, the approximation process suggests October 6, 2019 (Sentinel-2 tile: S2B_32SKA_20191006_0_L2A) as the most suitable baseline candidate.
Next, we can compute the corrected fractional change in reflectance between the baseline date and the date we’d like to monitor. The correction factors c (see Equation 1 preceding) can be calculated with the following code:
def compute_correction_factor(tif_y, tif_x):
    #get flattened arrays for regression
    y = np.array(tif_y.values.flatten())
    x = np.array(tif_x.values.flatten())
    np.nan_to_num(y, copy=False)
    np.nan_to_num(x, copy=False)
    #fit linear model using least squares regression
    x = x[:,np.newaxis] #reshape
    c, _, _, _ = np.linalg.lstsq(x, y, rcond=None)
    return c[0]
The full implementation of Equation 1 is given in the following code snippet:
def compute_corrected_fractional_reflectance_change(l1_b11_base, l1_b12_base, l1_b11_monitor, l1_b12_monitor):
    #get correction factors
    c_monitor = compute_correction_factor(tif_y=l1_b11_monitor, tif_x=l1_b12_monitor)
    c_base = compute_correction_factor(tif_y=l1_b11_base, tif_x=l1_b12_base)
    #get corrected fractional reflectance change
    frac_change = ((c_monitor*l1_b12_monitor-l1_b11_monitor)/l1_b11_monitor)-((c_base*l1_b12_base-l1_b11_base)/l1_b11_base)
    return frac_change
Finally, we can wrap the above methods into an end-to-end routine that identifies the AOI for a given longitude and latitude, monitoring date and baseline tile, acquires the required satellite imagery, and performs the fractional reflectance change computation.
def run_full_fractional_reflectance_change_routine(lon, lat, date_monitor, baseline_s2_tile_id, distance_offset=1500, cloud_mask=True):
    #get bounding box
    aoi_geometry = bbox_around_point(lon, lat, distance_offset)
    #get S2 metadata
    s2_meta_monitor = get_sentinel2_meta_data(date_monitor, aoi_geometry)
    #get tile id
    grid_id = baseline_s2_tile_id.split("_")[1]
    s2_tile_id_monitor = list(filter(lambda x: f"_{grid_id}_" in x["Id"], s2_meta_monitor))[0]["Id"]
    #retrieve band 11 and 12 of the Sentinel L1C product for the given S2 tiles
    l1_swir16_b11_base = get_s2l1c_band_data_xarray(baseline_s2_tile_id, BAND_11_SWIR16, clip_geometry=aoi_geometry, cloud_mask=cloud_mask)
    l1_swir22_b12_base = get_s2l1c_band_data_xarray(baseline_s2_tile_id, BAND_12_SWIR22, clip_geometry=aoi_geometry, cloud_mask=cloud_mask)
    l1_swir16_b11_monitor = get_s2l1c_band_data_xarray(s2_tile_id_monitor, BAND_11_SWIR16, clip_geometry=aoi_geometry, cloud_mask=cloud_mask)
    l1_swir22_b12_monitor = get_s2l1c_band_data_xarray(s2_tile_id_monitor, BAND_12_SWIR22, clip_geometry=aoi_geometry, cloud_mask=cloud_mask)
    #compute corrected fractional reflectance change
    frac_change = compute_corrected_fractional_reflectance_change(
        l1_swir16_b11_base,
        l1_swir22_b12_base,
        l1_swir16_b11_monitor,
        l1_swir22_b12_monitor
    )
    return frac_change
Running this method with the parameters we determined earlier yields the fractional change in SWIR TOA reflectance as an xarray.DataArray. We can perform a first visual inspection of the result by running a simple plot() invocation on this data array. Our method reveals the presence of a methane plume at the center of the AOI that was undetectable in the RGB plot seen previously.
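A minimal sketch of that quick inspection, assuming matplotlib is available in the notebook environment:

import matplotlib.pyplot as plt

# drop any singleton band dimension and plot the 2-D fractional reflectance change
frac_change.squeeze().plot()
plt.show()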
Figure 3 – Fractional reflectance change in TOA reflectance (SWIR spectrum)
As a final step, we extract the identified methane plume and overlay it on a raw RGB satellite image to provide the important geographic context. This is achieved by thresholding, which can be implemented as shown in the following:
def get_plume_mask(change_in_reflectance_tif, threshold_value):
    cr_masked = change_in_reflectance_tif.copy()
    #set values above the threshold to nan
    cr_masked[cr_masked > threshold_value] = np.nan
    #apply mask on nan values
    plume_tif = np.ma.array(cr_masked, mask=np.isnan(cr_masked))
    return plume_tif
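A short usage sketch of this helper follows; the plotting details are assumptions, and the threshold value is discussed next.

import matplotlib.pyplot as plt
import numpy as np

# extract the plume mask with an example threshold
plume = get_plume_mask(frac_change, threshold_value=-0.02)

# visualize the masked plume; np.squeeze drops any singleton band dimension
plt.imshow(np.squeeze(plume), cmap="Reds")
plt.title("Detected methane plume")
plt.show()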
For our case, a threshold of -0.02 fractional change in reflectance yields good results but this can change from scene to scene and you will have to calibrate this for your specific use case. Figure 4 that follows illustrates how the plume overlay is generated by combining the raw satellite image of the AOI with the masked plume into a single composite image that shows the methane plume in its geographic context.
Figure 4 – RGB image, fractional reflectance change in TOA reflectance (SWIR spectrum), and methane plume overlay for AOI
Solution validation with real-world methane emission events
As a final step, we evaluate our method for its ability to correctly detect and pinpoint methane leakages from a range of sources and geographies. First, we use a controlled methane release experiment specifically designed for the validation of space-based point-source detection and quantification of onshore methane emissions. In this 2021 experiment, researchers performed several methane releases in Ehrenberg, Arizona over a 19-day period. Running our detection method for one of the Sentinel-2 passes during the time of that experiment produces the following result showing a methane plume:
Figure 5 – Methane plume intensities for Arizona Controlled Release Experiment
The plume generated during the controlled release is clearly identified by our detection method. The same is true for other known real-world leakages (in Figure 6 that follows) from sources such as a landfill in East Asia (left) or an oil and gas facility in North America (right).
Figure 6 – Methane plume intensities for an East Asian landfill (left) and an oil and gas field in North America (right)
In sum, our method can help identify methane emissions both from controlled releases and from various real-world point sources across the globe. This works best for on-shore point sources with limited surrounding vegetation. It does not work for off-shore scenes due to the high absorption (that is, low transmittance) of the SWIR spectrum by water. Given that the proposed detection algorithm relies on variations in methane intensity, our method also requires pre-leakage observations. This can make monitoring of leakages with constant emission rates challenging.
Clean up
To avoid incurring unwanted charges after a methane monitoring job has completed, ensure that you terminate the SageMaker instance and delete any unwanted local files.
Conclusion
By combining SageMaker geospatial capabilities with open geospatial data sources, you can implement your own highly customized remote monitoring solutions at scale. This blog post focused on methane detection, a focal area for governments, NGOs, and other organizations seeking to detect and ultimately avoid harmful methane emissions. You can get started today on your own journey into geospatial analytics by spinning up a notebook with the SageMaker geospatial kernel and implementing your own detection solution. See the GitHub repository to get started building your own satellite-based methane detection solution. Also check out the sagemaker-examples repository for further examples and tutorials on how to use SageMaker geospatial capabilities in other real-world remote sensing applications.
About the authors
Dr. Karsten Schroer is a Solutions Architect at AWS. He supports customers in leveraging data and technology to drive sustainability of their IT infrastructure and build cloud-native data-driven solutions that enable sustainable operations in their respective verticals. Karsten joined AWS following his PhD studies in applied machine learning & operations management. He is truly passionate about technology-enabled solutions to societal challenges and loves to dive deep into the methods and application architectures that underlie these solutions.
Janosch Woschitz is a Senior Solutions Architect at AWS, specializing in geospatial AI/ML. With over 15 years of experience, he supports customers globally in leveraging AI and ML for innovative solutions that capitalize on geospatial data. His expertise spans machine learning, data engineering, and scalable distributed systems, augmented by a strong background in software engineering and industry expertise in complex domains such as autonomous driving.
More efficient recovery from failures during large-ML-model training
Novel “checkpointing” scheme that uses CPU memory reduces the time wasted on failure recovery by more than 92%.
Intelligent document processing with Amazon Textract, Amazon Bedrock, and LangChain
In today’s information age, the vast volumes of data housed in countless documents present both a challenge and an opportunity for businesses. Traditional document processing methods often fall short in efficiency and accuracy, leaving room for innovation, cost-efficiency, and optimizations. Document processing has witnessed significant advancements with the advent of Intelligent Document Processing (IDP). With IDP, businesses can transform unstructured data from various document types into structured, actionable insights, dramatically enhancing efficiency and reducing manual efforts. However, the potential doesn’t end there. By integrating generative artificial intelligence (AI) into the process, we can further enhance IDP capabilities. Generative AI not only introduces enhanced capabilities in document processing, it also introduces a dynamic adaptability to changing data patterns. This post takes you through the synergy of IDP and generative AI, unveiling how they represent the next frontier in document processing.
We discuss IDP in detail in our series Intelligent document processing with AWS AI services (Part 1 and Part 2). In this post, we discuss how to extend a new or existing IDP architecture with large language models (LLMs). More specifically, we discuss how we can integrate Amazon Textract with LangChain as a document loader and Amazon Bedrock to extract data from documents and use generative AI capabilities within the various IDP phases.
Amazon Textract is a machine learning (ML) service that automatically extracts text, handwriting, and data from scanned documents. Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models (FMs) through easy-to-use APIs.
Solution overview
The following diagram is a high-level reference architecture that explains how you can further enhance an IDP workflow with foundation models. You can use LLMs in one or all phases of IDP depending on the use case and desired outcome.
- Document classification – In addition to using Amazon Comprehend, you can use an LLM to classify documents using few-shot prompting. A few-shot prompt provides the language model with a few examples of different classes and a list of all possible classes, and then asks the model to classify a given piece of text from a document into one of those classes (see the prompt sketch after this list).
- Summarization – You can also use LLMs to summarize larger documents to provide precise summaries within the extraction phase of IDP. For example, a financial analysis system may involve analyzing hundreds of pages of earnings documents of a company. You can use a language model to summarize the key aspects of the earnings, enabling analysts to make business decisions.
- Standardization and in-context Q&A – In addition to extracting exact information from documents using the Amazon Textract Analyze Document functionality, you can use LLMs to derive information that isn’t explicitly stated in a document. For example, a patient discharge summary may list the patient’s hospital admission date and discharge date but not the total number of days the patient was in the hospital. You can use an LLM to deduce the total number of days from the two dates extracted by Amazon Textract. This value can then be assigned to a well-known alias in a key-value format, also known as a normalized key, which makes consumption and post-processing even more straightforward.
- Templating and normalizations – An IDP pipeline often generates output that must conform to a specific deterministic schema so that it can be consumed by a downstream system, for example a relational database. Defining a deterministic schema also achieves key normalization, giving us a known set of keys to process in our post-processing logic. For example, we may want to define “DOB” as the normalized key for “date of birth,” “birth date,” “birthday date,” “date born,” and so on, because documents may come with any variation of these. We use LLMs to perform such templating and normalized key-value extraction on any document.
- Spellcheck and corrections – Although Amazon Textract can extract the exact values from scanned documents (printed or handwritten), you can use a language model to identify misspellings and grammatical errors in the extracted data. This is important in situations where the data may be extracted from poor-quality or handwritten documents and used for generating marketing materials, flash reports, and so on. In addition to having a human manually review low-confidence extractions from Amazon Textract, you can use an LLM to augment the review process by providing correction recommendations to the human reviewer, thereby speeding up the review.
In the following sections, we dive deep into how Amazon Textract is integrated into generative AI workflows using LangChain to process documents for each of these specific tasks. The code blocks provided here have been trimmed down for brevity. Refer to our GitHub repository for detailed Python notebooks and a step-by-step walkthrough.
Amazon Textract LangChain document loader
Text extraction from documents is a crucial aspect when it comes to processing documents with LLMs. You can use Amazon Textract to extract unstructured raw text from documents and preserve the original semi-structured or structured objects like key-value pairs and tables present in the document. Document packages like healthcare and insurance claims or mortgages consist of complex forms that contain a lot of information across structured, semi-structured, and unstructured formats. Document extraction is an important step here because LLMs benefit from the rich content to generate more accurate and relevant responses, which otherwise could impact the quality of the LLMs’ output.
LangChain is a powerful open-source framework for integrating with LLMs. LLMs in general are versatile but may struggle with domain-specific tasks where deeper context and nuanced responses are needed. LangChain empowers developers in such scenarios to build agents that can break down complex tasks into smaller sub-tasks. The sub-tasks can then introduce context and memory into LLMs by connecting and chaining LLM prompts.
LangChain offers document loaders that can load and transform data from documents. You can use them to structure documents into preferred formats that can be processed by LLMs. The AmazonTextractPDFLoader is a service loader type of document loader that provides a quick way to automate document processing by using Amazon Textract in combination with LangChain. For more details on AmazonTextractPDFLoader, refer to the LangChain documentation. To use the Amazon Textract document loader, you start by importing it from the LangChain library:
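The exact import path depends on your LangChain version; newer releases expose the loader under langchain_community.document_loaders instead.

```python
# Import path may vary by LangChain version; newer releases use
# langchain_community.document_loaders instead of langchain.document_loaders.
from langchain.document_loaders import AmazonTextractPDFLoader
```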
You can load a document from an HTTPS URL endpoint, as well as documents hosted in Amazon Simple Storage Service (Amazon S3) buckets via Amazon S3 object URLs (also called path-style access):
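The following sketch illustrates both options; the URLs are placeholders for your own publicly accessible document locations rather than files from this post.

```python
# Load a document from an HTTPS URL endpoint (placeholder URL).
https_loader = AmazonTextractPDFLoader("https://example.com/documents/discharge-summary.png")
documents = https_loader.load()

# Load a document via an Amazon S3 object URL (path-style access, placeholder URL).
s3_url_loader = AmazonTextractPDFLoader(
    "https://s3.us-east-1.amazonaws.com/example-bucket/documents/discharge-summary.png"
)
documents = s3_url_loader.load()
```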
You can also store documents in Amazon S3 and refer to them using the s3:// URL pattern, as explained in Accessing a bucket using S3://, and pass this S3 path to the Amazon Textract PDF loader:
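A sketch of the s3:// pattern follows; the bucket, key, and Region are placeholders, and a boto3 Textract client is passed in so the loader calls the Region where the bucket resides.

```python
import boto3

# Create a Textract client in the same Region as the S3 bucket (placeholder Region).
textract_client = boto3.client("textract", region_name="us-east-1")

# Placeholder S3 path; replace with your own bucket and object key.
file_path = "s3://example-bucket/documents/discharge-summary.png"
loader = AmazonTextractPDFLoader(file_path, client=textract_client)
documents = loader.load()
```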
A multi-page document will contain multiple pages of text, which can then be accessed via the documents object, which is a list of pages. The following code loops through the pages in the documents object and prints the document text, which is available via the page_content attribute:
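A minimal version of that loop, assuming documents was loaded as shown above:

```python
# Each page of a multi-page document becomes one entry in the documents list;
# its text is exposed on the page_content attribute.
for document in documents:
    print(document.page_content)
```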
Document classification
Amazon Comprehend and LLMs can be effectively utilized for document classification. Amazon Comprehend is a natural language processing (NLP) service that uses ML to extract insights from text. Amazon Comprehend also supports custom classification model training with layout awareness on document formats such as PDF, Word, and images. For more information about using the Amazon Comprehend document classifier, refer to Amazon Comprehend document classifier adds layout support for higher accuracy.
When paired with LLMs, document classification becomes a powerful approach for managing large volumes of documents. LLMs are helpful in document classification because they can analyze the text, patterns, and contextual elements in a document using natural language understanding. You can also fine-tune them for specific document classes. When a new document type introduced in the IDP pipeline needs classification, the LLM can process the text and categorize the document given a set of classes. The following is sample code that uses the LangChain document loader powered by Amazon Textract to extract text from a document and use it for classification. We use the Anthropic Claude v2 model via Amazon Bedrock to perform the classification.
In the following example, we first extract text from a patient discharge report and use an LLM to classify it given a list of three different document types: DISCHARGE_SUMMARY, RECEIPT, and PRESCRIPTION. The following screenshot shows our report.
We use the following code:
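The notebook code is abbreviated in this post, so the following is a sketch of one way to assemble the classification step. It assumes a LangChain version that ships the Bedrock LLM wrapper, a placeholder document path, and the anthropic.claude-v2 model ID; the prompt wording is illustrative rather than the exact prompt from the original notebooks.

```python
import boto3
from langchain.document_loaders import AmazonTextractPDFLoader
from langchain.llms import Bedrock
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

# Extract the text of the discharge report with Amazon Textract (placeholder path).
loader = AmazonTextractPDFLoader("./samples/discharge-summary.png")
documents = loader.load()
doc_text = "\n".join(doc.page_content for doc in documents)

# Classification prompt: list the allowed classes and ask for the class name only.
template = """

Human: Given a list of classes, classify the document into one of these classes.
Skip any preamble text and just give the class name.

<classes>DISCHARGE_SUMMARY, RECEIPT, PRESCRIPTION</classes>
<document>{doc_text}</document>

Assistant:"""

prompt = PromptTemplate(template=template, input_variables=["doc_text"])

bedrock_client = boto3.client("bedrock-runtime", region_name="us-east-1")
bedrock_llm = Bedrock(client=bedrock_client, model_id="anthropic.claude-v2")

llm_chain = LLMChain(prompt=prompt, llm=bedrock_llm)
class_name = llm_chain.run(doc_text)

print(f"The provided document is a {class_name.strip()}")
```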
The code produces the following output:
The provided document is a DISCHARGE_SUMMARY
Document summarization
Summarization involves condensing a given text or document into a shorter version while retaining its key information. This technique is beneficial for efficient information retrieval, which enables users to quickly grasp the key points of a document without reading the entire content. Although Amazon Textract doesn’t directly perform text summarization, it provides the foundational capabilities of extracting the entire text from documents. This extracted text serves as an input to our LLM model for performing text summarization tasks.
Using the same sample discharge report, we use AmazonTextractPDFLoader to extract text from this document. As before, we use the Claude v2 model via Amazon Bedrock and initialize it with a prompt that contains the instructions on what to do with the text (in this case, summarization). Finally, we run the LLM chain by passing in the extracted text from the document loader. This runs an inference action on the LLM with the prompt that consists of the instructions to summarize, and the document’s text marked by Document. See the following code:
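The following is a comparable sketch for the summarization step, under the same assumptions as the classification example (Bedrock wrapper in LangChain, placeholder file path, anthropic.claude-v2, illustrative prompt wording).

```python
import boto3
from langchain.document_loaders import AmazonTextractPDFLoader
from langchain.llms import Bedrock
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

# Reuse the same discharge report (placeholder path).
loader = AmazonTextractPDFLoader("./samples/discharge-summary.png")
documents = loader.load()
doc_text = "\n".join(doc.page_content for doc in documents)

# Summarization prompt: the extracted text is wrapped in <document> tags.
template = """

Human: Given a full document, give me a concise summary.
Skip any preamble text and just give the summary.

<document>{doc_text}</document>

Assistant:"""

prompt = PromptTemplate(template=template, input_variables=["doc_text"])

bedrock_client = boto3.client("bedrock-runtime", region_name="us-east-1")
bedrock_llm = Bedrock(client=bedrock_client, model_id="anthropic.claude-v2")

llm_chain = LLMChain(prompt=prompt, llm=bedrock_llm)
summary = llm_chain.run(doc_text)

print(summary.strip())
```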
From aerospace to Amazon, Nia Jetter is blazing new paths
Jetter says her goals include lowering barriers to understanding technology and cultivating a more diverse workforce.Read More
T-Mobile US, Inc. uses artificial intelligence through Amazon Transcribe and Amazon Translate to deliver voicemail in the language of their customers’ choice
This post is co-authored by Dhurjati Brahma, Senior Systems Architect at T-Mobile US, Inc.; Jim Chao, Principal Engineer/Architect at T-Mobile US, Inc.; and Nicholas Zellerhoff, Associate Systems Architect at T-Mobile US, Inc.
T-Mobile US, Inc. provides a Voicemail to Text service to its customers, which allows customers to quickly read through their voicemails and respond to and manage messages in any order without having to dial into their voicemail box. This service is delivered by the T-Mobile Voicemail system and uses Amazon Transcribe to convert voicemail messages to text. In 2023, T-Mobile launched the Voicemail to Text Translate feature. Powered by Amazon Translate, this feature lets customers request voicemail transcriptions in their language of choice from the native Visual Voicemail application available on major Android manufacturers’ devices, beginning with flagship devices and extending to all future devices available from major device partners.
Native Visual Voicemail dialer featuring the Voicemail to Text Translate feature (all models)
The history
In 2021, T-Mobile engineering teams partnered with AWS to launch a new AI-powered feature called Voicemail to Text with automatic language detection and to improve quality and performance for customers. Voicemail to Text provided customers with the additional benefit of receiving voicemail transcriptions at no extra cost. Voicemail to Text is available in 39 spoken languages and dialects and uses the automatic language detection provided by Amazon Transcribe. These languages include English and Spanish, as well as many European, Middle Eastern, Asian, and African languages. The full list of supported languages can be found in the Amazon Transcribe documentation. Since its introduction, usage of the Voicemail to Text service has grown; as of July 2023, it transcribes 126 million voicemail messages per month. T-Mobile has partnered with AWS to analyze the key application metrics of this service, such as language distribution, the number of messages per language, and the daily total and unique active customers. This data helped in scaling the service and making key business decisions to improve the Voicemail to Text customer experience.
The challenge
Upon analysis of the weekly language distribution of voicemail transcriptions, T-Mobile observed that approximately 10 percent of voicemail transcriptions were received in a language other than US English, with Spanish being the predominant language.
In addition, market research based on U.S. Census Bureau data showed that approximately 22 percent of the US population spoke a language other than English. This showed a need for a Voicemail to Text translation feature that could bridge this language gap and drove T-Mobile and AWS teams to brainstorm a solution. The idea was to give T-Mobile’s Voicemail to Text customers an option to receive voicemail transcriptions in the language of their choice delivered through SMS, email, or through the Visual Voicemail (VVM) application.
The solution
T-Mobile decided to use Amazon Translate, a neural machine translation (MT) service, to augment their existing Voicemail to Text service with real-time text translation between supported languages, because Amazon Translate provides the high-quality translations the industry requires. T-Mobile already had their voicemail system connected to AWS through a private AWS Direct Connect link and was using the Amazon Transcribe API to get transcriptions. Following the same design pattern, T-Mobile added an integration with the Amazon Translate API to translate voicemail transcripts from the source language detected by Amazon Transcribe to the customer’s preferred language.
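At the API level, the translation step maps to a single Amazon Translate call. The following minimal boto3 sketch is illustrative only and is not T-Mobile’s production code; the sample transcript and language codes are placeholders.

```python
import boto3

translate = boto3.client("translate")

# Placeholders: the transcript text and language codes come from the voicemail
# system and the language detected by Amazon Transcribe.
voicemail_transcript = "Hola, te llamo para confirmar la cita de mañana."
detected_language_code = "es"   # source language detected by Amazon Transcribe
preferred_language_code = "en"  # customer's preferred language

response = translate.translate_text(
    Text=voicemail_transcript,
    SourceLanguageCode=detected_language_code,
    TargetLanguageCode=preferred_language_code,
)
print(response["TranslatedText"])
```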
Here is a high-level architecture diagram that illustrates the T-Mobile Voicemail to Text Translate solution.
From a customer perspective, to enable the Visual Voicemail Translate feature, a customer needs a Voicemail to Text feature service operator code (SOC) enabled on their mobile plan and should have one of the supported major Android manufacturer devices with the Translate feature API enabled. The customer can then visit the Visual Voicemail settings page to select a language from a list of 75 different languages and dialects supported by Amazon Translate. This will enable customers to receive voicemail transcription in the supported language of their choice.
The results
With Amazon Translate, T-Mobile was able to deliver a delightful new customer experience that accommodates the language preference of its customers and makes voicemail more accessible to people who speak various languages. This new capability helps to break language barriers by making it easier for speakers of different languages to communicate.
Conclusion
By using Amazon Transcribe and Amazon Translate language AI services, T-Mobile was able to enhance its voicemail service by delivering message transcriptions in a language that customers can understand. By choosing to use AWS managed AI services, T-Mobile was able to expedite the delivery of this new customer experience and avoid operational burdens of maintaining additional servers and software in their data centers. With Amazon Transcribe and Amazon Translate, Voicemail to Text and Voicemail to Text Translate services are delivered with low latency and high accuracy.
For more information, check out Getting started with Amazon Translate and Getting started with Amazon Transcribe to explore how you can use these services with your applications. Follow the Artificial Intelligence category on AWS Machine Learning Blog to stay up to date with new capabilities and use cases for various AWS AI services.
About the Authors
Dhurjati Brahma is a Senior Systems Architect at T-Mobile US, Inc. He has over 15 years of experience in designing, building, and managing robust and scalable virtualized messaging solutions within T-Mobile’s network. He is passionate about collaborating with various cross-functional teams and vendors to securely integrate T-Mobile’s messaging systems with the public cloud to launch exciting new products and services for T-Mobile’s customers. He holds a Master’s degree in Electrical Engineering from the University of Alabama at Birmingham. Outside of work, he enjoys going on hikes, listening to classical music, practicing meditation, and spending time with his family and friends.
Jim Chao is a Principal Engineer/Architect for Core Messaging Service Design at T-Mobile US, Inc. He has more than two decades of experience in the design and architecture of mobile messaging service systems and platforms. Lately, he has been dedicating his time to the next generation of messaging services using machine learning and generative AI. He holds a Master’s degree in Computer Information Systems. Outside of work, he spends time with family, engages in religious study and practice, and travels to religious sites above 5,000 meters in the mountains of Tibet.
Nicholas Zellerhoff is an Associate Systems Architect for T-Mobile as part of the Service Technology Development team functioning as lead development engineer for Native Visual Voicemail services. When not in office, Nick enjoys everything outdoors, from backyard BBQs with friends to backcountry hiking in the North Cascades.
Alex Bulatkin is a solutions architect at AWS. He enjoys helping communication service providers build innovative solutions in AWS that are redefining the telecom industry. He is passionate about working with customers on bringing the power of AWS AI services into their applications. Alex is based in the Denver metropolitan area and likes to hike, ski, and snowboard.
Prabhakaran Balasubramaniam is a Principal Customer Solutions Manager at AWS. He loves to help telecom customers leverage new technologies to solve their problems. Prabhakaran is based in the Dallas-Fort Worth area and likes sports.
From text to dream job: Building an NLP-based job recommender at Talent.com with Amazon SageMaker
This post is co-authored by Anatoly Khomenko, Machine Learning Engineer, and Abdenour Bezzouh, Chief Technology Officer at Talent.com.
Founded in 2011, Talent.com is one of the world’s largest sources of employment. The company combines paid job listings from their clients with public job listings into a single searchable platform. With over 30 million jobs listed in more than 75 countries, Talent.com serves jobs across many languages, industries, and distribution channels. The result is a platform that matches millions of job seekers with available jobs.
Talent.com’s mission is to centralize all jobs available on the web to help job seekers find their best match while providing them with the best search experience. Its focus is on relevancy, because the order of the recommended jobs is vitally important for showing the jobs most pertinent to users’ interests. The performance of Talent.com’s matching algorithm is paramount to the success of the business and a key contributor to the user experience. It’s challenging to predict which jobs are pertinent to a job seeker based on the limited amount of information provided, usually limited to a few keywords and a location.
Given this mission, Talent.com and AWS joined forces to create a job recommendation engine using state-of-the-art natural language processing (NLP) and deep learning model training techniques with Amazon SageMaker to provide an unrivaled experience for job seekers. This post shows our joint approach to designing a job recommendation system, including feature engineering, deep learning model architecture design, hyperparameter optimization, and model evaluation that ensures the reliability and effectiveness of our solution for both job seekers and employers. The system is developed by a team of dedicated applied machine learning (ML) scientists, ML engineers, and subject matter experts in collaboration between AWS and Talent.com.
The recommendation system has driven an 8.6% increase in clickthrough rate (CTR) in online A/B testing against a previous XGBoost-based solution, helping connect millions of Talent.com’s users to better jobs.
Overview of solution
An overview of the system is illustrated in the following figure. The system takes a user’s search query as input and outputs a ranked list of jobs in order of pertinence. Job pertinence is measured by the click probability (the probability of a job seeker clicking on a job for more information).
The system includes four main components:
- Model architecture – The core of this job recommendation engine is a deep learning-based Triple Tower Pointwise model, which includes a query encoder that encodes user search queries, a document encoder that encodes the job descriptions, and an interaction encoder that processes the past user-job interaction features. The outputs of the three towers are concatenated and passed through a classification head to predict the job’s click probabilities. By training this model on search queries, job specifics, and historical user interaction data from Talent.com, this system provides personalized and highly relevant job recommendations to job seekers.
- Feature engineering – We perform two sets of feature engineering to extract valuable information from input data and feed it into the corresponding towers in the model. The two sets are standard feature engineering and fine-tuned Sentence-BERT (SBERT) embeddings. We use the standard engineered features as input to the interaction encoder and feed the SBERT-derived embeddings into the query encoder and document encoder.
- Model optimization and tuning – We utilize advanced training methodologies to train, test, and deploy the system with SageMaker. This includes SageMaker Distributed Data Parallel (DDP) training, SageMaker Automatic Model Tuning (AMT), learning rate scheduling, and early stopping to improve model performance and training speed. Using the DDP training framework made our model training approximately eight times faster.
- Model evaluation – We conduct both offline and online evaluation. We evaluate the model performance with Area Under the Curve (AUC) and Mean Average Precision at K (mAP@K) in offline evaluation. During online A/B testing, we evaluate the CTR improvements.
In the following sections, we present the details of these four components.
Deep learning model architecture design
We design a Triple Tower Deep Pointwise (TTDP) model using a triple-tower deep learning architecture and the pointwise pair modeling approach. The triple-tower architecture provides three parallel deep neural networks, with each tower processing a set of features independently. This design pattern allows the model to learn distinct representations from different sources of information. After the representations from all three towers are obtained, they are concatenated and passed through a classification head to make the final prediction (0–1) on the click probability (a pointwise modeling setup).
The three towers are named based on the information they process: the query encoder processes the user search query, the document encoder processes the candidate job’s textual content (including the job title and company name), and the interaction encoder uses relevant features extracted from past user interactions and history (discussed more in the next section).
Each of these towers plays a crucial role in learning how to recommend jobs:
- Query encoder – The query encoder takes in the SBERT embeddings derived from the user’s job search query. We enhance the embeddings through an SBERT model we fine-tuned. This encoder processes and understands the user’s job search intent, including details and nuances captured by our domain-specific embeddings.
- Document encoder – The document encoder processes the information of each job listing. Specifically, it takes the SBERT embeddings of the concatenated text from the job title and company. The intuition is that users will be more interested in candidate jobs that are more relevant to the search query. By mapping the jobs and the search queries to the same vector space (defined by SBERT), the model can learn to predict the probability of the potential jobs a job seeker will click.
- Interaction encoder – The interaction encoder deals with the user’s past interactions with job listings. The features are produced via a standard feature engineering step, which includes calculating popularity metrics for job roles and companies, establishing context similarity scores, and extracting interaction parameters from previous user engagements. It also processes the named entities identified in the job title and search queries with a pre-trained named entity recognition (NER) model.
Each tower generates an independent output in parallel, all of which are then concatenated together. This combined feature vector is then passed to predict the click probability of a job listing for a user query. The triple-tower architecture provides flexibility in capturing complex relationships between different inputs or features, allowing the model to take advantage of the strengths of each tower while learning more expressive representations for the given task.
Candidate jobs’ predicted click probabilities are ranked from high to low, generating personalized job recommendations. Through this process, we ensure that each piece of information—whether it’s the user’s search intent, job listing details, or past interactions—is fully captured by a specific tower dedicated to it. The complex relationships between them are also captured through the combination of the tower outputs.
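To make the pattern concrete, here is a minimal PyTorch sketch of a triple-tower pointwise model. The layer sizes, embedding dimensions, and tower depths are illustrative assumptions, not Talent.com’s actual configuration.

```python
import torch
import torch.nn as nn

class TripleTowerPointwise(nn.Module):
    """Illustrative triple-tower pointwise click model (dimensions are made up)."""

    def __init__(self, sbert_dim=768, interaction_dim=64, hidden_dim=128):
        super().__init__()
        # Query tower: consumes the SBERT embedding of the search query.
        self.query_encoder = nn.Sequential(nn.Linear(sbert_dim, hidden_dim), nn.ReLU())
        # Document tower: consumes the SBERT embedding of the job title + company name.
        self.document_encoder = nn.Sequential(nn.Linear(sbert_dim, hidden_dim), nn.ReLU())
        # Interaction tower: consumes the engineered user-job interaction features.
        self.interaction_encoder = nn.Sequential(nn.Linear(interaction_dim, hidden_dim), nn.ReLU())
        # Classification head over the concatenated tower outputs -> click probability.
        self.head = nn.Sequential(
            nn.Linear(3 * hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
            nn.Sigmoid(),
        )

    def forward(self, query_emb, doc_emb, interaction_feats):
        q = self.query_encoder(query_emb)
        d = self.document_encoder(doc_emb)
        i = self.interaction_encoder(interaction_feats)
        return self.head(torch.cat([q, d, i], dim=-1))  # shape: (batch, 1)

# Toy forward pass with random inputs.
model = TripleTowerPointwise()
click_prob = model(torch.randn(4, 768), torch.randn(4, 768), torch.randn(4, 64))
```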
Feature engineering
We perform two sets of feature engineering processes to extract valuable information from the raw data and feed it into the corresponding towers in the model: standard feature engineering and fine-tuned SBERT embeddings.
Standard feature engineering
Our data preparation process begins with standard feature engineering. Overall, we define four types of features:
- Popularity – We calculate popularity scores at the individual job level, occupation level, and company level. This provides a metric of how attractive a particular job or company might be.
- Textual similarity – To understand the contextual relationship between different textual elements, we compute similarity scores, including string similarity between the search query and the job title. This helps us gauge the relevance of a job opening to a job seeker’s search or application history.
- Interaction – In addition, we extract interaction features from past user engagements with job listings. A prime example of this is the embedding similarity between past clicked job titles and candidate job titles. This measure helps us understand the similarity between previous jobs a user has shown interest in vs. upcoming job opportunities. This enhances the precision of our job recommendation engine.
- Profile – Lastly, we extract user-defined job interest information from the user profile and compare it with new job candidates. This helps us understand if a job candidate matches a user’s interest.
A crucial step in our data preparation is the application of a pre-trained NER model. By implementing an NER model, we can identify and label named entities within job titles and search queries. Consequently, this allows us to compute similarity scores between these identified entities, providing a more focused and context-aware measure of relatedness. This methodology reduces the noise in our data and gives us a more nuanced, context-sensitive method of comparing jobs.
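As an illustration of the entity-aware similarity idea, the sketch below pairs an off-the-shelf NER model with a generic sentence-embedding model. The post does not name the specific models used, so spaCy’s en_core_web_sm and a generic SBERT checkpoint stand in here.

```python
import spacy
from sentence_transformers import SentenceTransformer, util

# Stand-in models for illustration; the production pipeline may differ.
nlp = spacy.load("en_core_web_sm")
embedder = SentenceTransformer("all-MiniLM-L6-v2")

def entity_similarity(query: str, job_title: str) -> float:
    """Cosine similarity between the named entities found in a query and a job title."""
    query_ents = " ".join(ent.text for ent in nlp(query).ents) or query
    title_ents = " ".join(ent.text for ent in nlp(job_title).ents) or job_title
    emb_query, emb_title = embedder.encode([query_ents, title_ents], convert_to_tensor=True)
    return util.cos_sim(emb_query, emb_title).item()

print(entity_similarity("machine learning engineer amazon seattle",
                        "Senior Machine Learning Engineer - Amazon Web Services"))
```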
Fine-tuned SBERT embeddings
To enhance the relevance and accuracy of our job recommendation system, we use SBERT, a transformer-based model known for its proficiency in capturing semantic meaning and context from text. However, generic embeddings like SBERT, although effective, may not fully capture the unique nuances and terminologies inherent in a specific domain such as ours, which centers around employment and job searches. To overcome this, we fine-tune the SBERT embeddings using our domain-specific data. This fine-tuning process optimizes the model to better understand and process industry-specific language, jargon, and context, making the embeddings more reflective of our specific domain. As a result, the refined embeddings offer improved performance in capturing both semantic and contextual information within our sphere, leading to more accurate and meaningful job recommendations for our users.
The following figure illustrates the SBERT fine-tuning step.
We fine-tune SBERT using TripletLoss with a cosine distance metric, which learns text embeddings in which the anchor and positive texts have higher cosine similarity than the anchor and negative texts. We use users’ search queries as anchor texts and combine job titles and employer names as inputs to the positive and negative texts. The positive texts are sampled from job postings that the corresponding user clicked on, whereas the negative texts are sampled from job postings that the user did not click on. The following is a sample implementation of the fine-tuning procedure:
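The sketch below uses the sentence-transformers library; the base checkpoint, batch size, margin, and example triplets are placeholders rather than Talent.com’s actual settings.

```python
from sentence_transformers import SentenceTransformer, InputExample, losses
from torch.utils.data import DataLoader

# Placeholder base checkpoint; the production model and hyperparameters may differ.
model = SentenceTransformer("all-MiniLM-L6-v2")

# Each triplet: anchor = user search query,
# positive = title + employer of a clicked job, negative = title + employer of a non-clicked job.
train_examples = [
    InputExample(texts=[
        "machine learning engineer",                      # anchor (search query)
        "Senior Machine Learning Engineer | Acme Corp",   # positive (clicked)
        "Retail Store Associate | Acme Corp",             # negative (not clicked)
    ]),
    # ... more triplets built from click logs
]

train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=16)
train_loss = losses.TripletLoss(
    model=model,
    distance_metric=losses.TripletDistanceMetric.COSINE,
    triplet_margin=0.5,
)

model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=1, warmup_steps=100)
model.save("sbert-job-search-finetuned")
```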
Model training with SageMaker Distributed Data Parallel
We use SageMaker Distributed Data Parallel (SMDDP), a feature of the SageMaker ML platform that is built on top of PyTorch DDP. It provides an optimized environment for running PyTorch DDP training jobs on the SageMaker platform. It’s designed to significantly speed up deep learning model training. It accomplishes this by splitting a large dataset into smaller chunks and distributing them across multiple GPUs. The model is replicated on every GPU. Each GPU processes its assigned data independently, and the results are collated and synchronized across all GPUs. DDP takes care of gradient communication to keep model replicas synchronized and overlaps them with gradient computations to speed up training. SMDDP utilizes an optimized AllReduce algorithm to minimize communication between GPUs, reducing synchronization time and improving overall training speed. The algorithm adapts to different network conditions, making it highly efficient for both on-premises and cloud-based environments. In the SMDDP architecture (as shown in the following figure), distributed training is also scaled using a cluster of many nodes. This means not just multiple GPUs in a computing instance, but many instances with multiple GPUs, which further speeds up training.
For more information about this architecture, refer to Introduction to SageMaker’s Distributed Data Parallel Library.
With SMDDP, we have been able to substantially reduce the training time for our TTDP model, making it eight times faster. Faster training times mean we can iterate and improve our models more quickly, leading to better job recommendations for our users in a shorter amount of time. This efficiency gain is instrumental in maintaining the competitiveness of our job recommendation engine in a fast-evolving job market.
You can adapt your training script for SMDDP with only three lines of code, as shown in the following code block. Using PyTorch as an example, the only thing you need to do is import the SMDDP library’s PyTorch client (smdistributed.dataparallel.torch.torch_smddp). The client registers smddp as a backend for PyTorch.
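In a PyTorch training script, those three lines look like the following; the rest of the script stays unchanged.

```python
import smdistributed.dataparallel.torch.torch_smddp  # registers "smddp" as a PyTorch backend
import torch.distributed as dist

dist.init_process_group(backend="smddp")
```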
After you have a working PyTorch script that is adapted to use the distributed data parallel library, you can launch a distributed training job using the SageMaker Python SDK.
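A minimal launch sketch with the SageMaker Python SDK follows; the entry point, IAM role, instance type and count, and framework versions are placeholders you would replace with your own.

```python
from sagemaker.pytorch import PyTorch

# Placeholder configuration; adjust entry point, role, instances, and versions for your setup.
estimator = PyTorch(
    entry_point="train.py",
    role="<your-sagemaker-execution-role-arn>",
    framework_version="1.13.1",
    py_version="py39",
    instance_count=2,
    instance_type="ml.p4d.24xlarge",
    # Enable the SageMaker distributed data parallel library.
    distribution={"smdistributed": {"dataparallel": {"enabled": True}}},
)

estimator.fit({"training": "s3://<your-bucket>/path/to/training-data"})
```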
Evaluating model performance
When evaluating the performance of a recommendation system, it’s crucial to choose metrics that align closely with business goals and provide a clear understanding of the model’s effectiveness. In our case, we use the AUC to evaluate our TTDP model’s job click prediction performance and the mAP@K to assess the quality of the final ranked jobs list.
The AUC refers to the area under the receiver operating characteristic (ROC) curve. It represents the probability that a randomly chosen positive example will be ranked higher than a randomly chosen negative example. It ranges from 0–1, where 1 indicates an ideal classifier and 0.5 represents a random guess. mAP@K is a metric commonly used to assess the quality of information retrieval systems, such as our job recommender engine. It measures the average precision of retrieving the top K relevant items for a given query or user. It ranges from 0–1, with 1 indicating optimal ranking and 0 indicating the lowest possible precision at the given K value. We evaluate the AUC, mAP@1, and mAP@3. Collectively, these metrics allow us to gauge the model’s ability to distinguish between positive and negative classes (AUC) and its success at ranking the most relevant items at the top (mAP@K).
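As a quick illustration of how these metrics are computed (not the project’s actual evaluation code), the following sketch scores toy predictions with scikit-learn’s AUC and a simple mAP@K helper.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def average_precision_at_k(relevant, ranked, k):
    """AP@K for one query: relevant = set of clicked job IDs, ranked = predicted order."""
    hits, score = 0, 0.0
    for position, item in enumerate(ranked[:k], start=1):
        if item in relevant:
            hits += 1
            score += hits / position
    return score / min(len(relevant), k) if relevant else 0.0

# Toy pointwise click predictions for AUC.
y_true = [1, 0, 0, 1, 0]
y_score = [0.9, 0.2, 0.4, 0.7, 0.1]
print("AUC:", roc_auc_score(y_true, y_score))

# Toy ranked lists per query for mAP@3.
queries = [
    ({"job_1"}, ["job_1", "job_5", "job_2"]),
    ({"job_4"}, ["job_3", "job_4", "job_9"]),
]
print("mAP@3:", np.mean([average_precision_at_k(rel, ranked, 3) for rel, ranked in queries]))
```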
Based on our offline evaluation, the TTDP model outperformed the baseline model—the existing XGBoost-based production model—by 16.65% for AUC, 20% for mAP@1, and 11.82% for mAP@3.
Furthermore, we designed an online A/B test to evaluate the proposed system and ran the test on a percentage of the US email population for 6 weeks. In total, approximately 22 million emails were sent using the job recommended by the new system. The resulting uplift in clicks compared to the previous production model was 8.6%. Talent.com is gradually increasing the percentage to roll out the new system to its complete population and channels.
Conclusion
Creating a job recommendation system is a complex endeavor. Each job seeker has unique needs, preferences, and professional experiences that can’t be inferred from a short search query. In this post, Talent.com collaborated with AWS to develop an end-to-end deep learning-based job recommender solution that ranks lists of jobs to recommend to users. The Talent.com team truly enjoyed collaborating with the AWS team throughout the process of solving this problem. This marks a significant milestone in Talent.com’s transformative journey, as the team takes advantage of the power of deep learning to empower its business.
This project fine-tuned SBERT to generate text embeddings. At the time of writing, AWS introduced Amazon Titan Embeddings as part of the foundation models (FMs) offered through Amazon Bedrock, a fully managed service providing a selection of high-performing foundation models from leading AI companies. We encourage readers to explore the machine learning techniques presented in this post and to leverage the capabilities provided by AWS, such as SMDDP, while making use of Amazon Bedrock’s foundation models to create their own search functionalities.
References
- SBERT Training Overview
- PyTorch Distributed Overview
- The SageMaker Distributed Data Parallel Library Overview
- Introduction to SageMaker’s Distributed Data Parallel Library
About the authors
Yi Xiang is an Applied Scientist II at the Amazon Machine Learning Solutions Lab, where she helps AWS customers across different industries accelerate their AI and cloud adoption.
Tong Wang is a Senior Applied Scientist at the Amazon Machine Learning Solutions Lab, where he helps AWS customers across different industries accelerate their AI and cloud adoption.
Dmitriy Bespalov is a Senior Applied Scientist at the Amazon Machine Learning Solutions Lab, where he helps AWS customers across different industries accelerate their AI and cloud adoption.
Anatoly Khomenko is a Senior Machine Learning Engineer at Talent.com with a passion for natural language processing and for matching good people to good jobs.
Abdenour Bezzouh is an executive with more than 25 years of experience building and delivering technology solutions that scale to millions of customers. Abdenour held the position of Chief Technology Officer (CTO) at Talent.com when the AWS team designed and executed this particular solution for Talent.com.
Dale Jacques is a Senior AI Strategist within the Generative AI Innovation Center where he helps AWS customers translate business problems into AI solutions.
Yanjun Qi is a Senior Applied Science Manager at the Amazon Machine Learning Solutions Lab. She innovates and applies machine learning to help AWS customers speed up their AI and cloud adoption.