Map Earth’s vegetation in under 20 minutes with Amazon SageMaker

In today’s rapidly changing world, monitoring the health of our planet’s vegetation is more critical than ever. Vegetation plays a crucial role in maintaining ecological balance, providing sustenance, and acting as a carbon sink. Traditionally, monitoring vegetation health has been a daunting task. Methods such as field surveys and manual satellite data analysis are not only time-consuming, but also require significant resources and domain expertise. Their cumbersome nature often leads to delays in data collection and analysis, making it difficult to track and respond swiftly to environmental changes. Furthermore, the high costs associated with these methods limit their accessibility and frequency, hindering comprehensive and ongoing vegetation monitoring at a planetary scale. In light of these challenges, we have developed an innovative solution to streamline and enhance the efficiency of vegetation monitoring on a global scale.

In contrast to the traditional, labor-intensive methods of monitoring vegetation health, Amazon SageMaker geospatial capabilities offer a streamlined, cost-effective solution. Amazon SageMaker supports geospatial machine learning (ML) capabilities, allowing data scientists and ML engineers to build, train, and deploy ML models using geospatial data. These geospatial capabilities open up a new world of possibilities for environmental monitoring. With SageMaker, users can access a wide array of geospatial datasets, efficiently process and enrich this data, and accelerate their development timelines. Tasks that previously took days or even weeks can now be done in a fraction of the time.

In this post, we demonstrate the power of SageMaker geospatial capabilities by mapping the world’s vegetation in under 20 minutes. This example highlights not only the efficiency of SageMaker, but also how geospatial ML can be used to monitor the environment for sustainability and conservation purposes.

Identify areas of interest

We begin by illustrating how SageMaker can be applied to analyze geospatial data at a global scale. To get started, we follow the steps outlined in Getting Started with Amazon SageMaker geospatial capabilities. We start by specifying the geographical coordinates of a bounding box that covers the areas of interest. This bounding box acts as a filter to select only the relevant satellite images that cover the Earth’s land masses.

import os
import json
import time
import boto3
import geopandas
from shapely.geometry import Polygon
import leafmap.foliumap as leafmap
import sagemaker
import sagemaker_geospatial_map

session = boto3.Session()
execution_role = sagemaker.get_execution_role()
sg_client = session.client(service_name="sagemaker-geospatial")
coordinates = [
    [-179.034845, -55.973798],
    [179.371094, -55.973798],
    [179.371094, 83.780085],
    [-179.034845, 83.780085],
    [-179.034845, -55.973798]
]
polygon = Polygon(coordinates)
world_gdf = geopandas.GeoDataFrame(index=[0], crs='epsg:4326', geometry=[polygon])
m = leafmap.Map(center=[37, -119], zoom=4)
m.add_basemap('Esri.WorldImagery')
m.add_gdf(world_gdf, layer_name="AOI", style={"color": "red"})
m

Sentinel 2 coverage of Earth's land mass

Data acquisition

SageMaker geospatial capabilities provide access to a wide range of public geospatial datasets, including Sentinel-2, Landsat 8, Copernicus DEM, and NAIP. For our vegetation mapping project, we’ve selected Sentinel-2 for its global coverage and update frequency. Sentinel-2 captures images of Earth’s land surface at a resolution of 10 meters every 5 days. For this example, we pick the first week of December 2023. To keep the analysis based on clear and reliable imagery while still covering most of the visible Earth surface, we filter for images with less than 10% cloud coverage.

search_rdc_args = {
    "Arn": "arn:aws:sagemaker-geospatial:us-west-2:378778860802:raster-data-collection/public/nmqj48dcu3g7ayw8", # sentinel-2 L2A
    "RasterDataCollectionQuery": {
        "AreaOfInterest": {
            "AreaOfInterestGeometry": {
                "PolygonGeometry": {
                    "Coordinates": [
                        [
                            [-179.034845, -55.973798],
                            [179.371094, -55.973798],
                            [179.371094, 83.780085],
                            [-179.034845, 83.780085],
                            [-179.034845, -55.973798]
                        ]
                    ]
                }
            }
        },
        "TimeRangeFilter": {
            "StartTime": "2023-12-01T00:00:00Z",
            "EndTime": "2023-12-07T23:59:59Z",
        },
        "PropertyFilters": {
            "Properties": [{"Property": {"EoCloudCover": {"LowerBound": 0, "UpperBound": 10}}}],
            "LogicalOperator": "AND",
        },
    }
}

s2_items = []
s2_tile_ids = []
s2_geometries = {
    'id': [],
    'geometry': [],
}
while search_rdc_args.get("NextToken", True):
    search_result = sg_client.search_raster_data_collection(**search_rdc_args)
    for item in search_result["Items"]:
        s2_id = item['Id']
        s2_tile_id = s2_id.split('_')[1]
        # filter out tiles that cover the same area
        if s2_tile_id not in s2_tile_ids:
            s2_tile_ids.append(s2_tile_id)
            s2_geometries['id'].append(s2_id)
            s2_geometries['geometry'].append(Polygon(item['Geometry']['Coordinates'][0]))
            del item['DateTime']
            s2_items.append(item)  

    search_rdc_args["NextToken"] = search_result.get("NextToken")

print(f"{len(s2_items)} unique Sentinel-2 images found.")

By using the search_raster_data_collection function from SageMaker geospatial capabilities, we identified 8,581 unique Sentinel-2 images taken in the first week of December 2023. To validate our selection, we plotted the footprints of these images on a map, confirming that we had the correct images for our analysis.

s2_gdf = geopandas.GeoDataFrame(s2_geometries)
m = leafmap.Map(center=[37, -119], zoom=4)
m.add_basemap('OpenStreetMap')
m.add_gdf(s2_gdf, layer_name="Sentinel-2 Tiles", style={"color": "blue"})
m

Sentinel 2 image footprints

SageMaker geospatial processing jobs

When querying data with SageMaker geospatial capabilities, we received comprehensive details about our target images, including the data footprint, properties of the spectral bands, and hyperlinks for direct access. With these hyperlinks, we can bypass the memory- and storage-intensive approach of first downloading and then processing images locally, a task made even more daunting by the size and scale of our dataset, which spans over 4 TB. Each of the more than 8,000 images has multiple channels and is approximately 500 MB in size, so processing the full set on a single machine would be time-prohibitive. Although setting up a processing cluster is an alternative, it introduces its own set of complexities, from data distribution to infrastructure management. SageMaker geospatial streamlines this with Amazon SageMaker Processing. We use the purpose-built geospatial container with SageMaker Processing jobs for a simplified, managed experience to create and run a cluster. With just a few lines of code, you can scale out your geospatial workloads: you simply specify a script that defines your workload, the location of your geospatial data on Amazon Simple Storage Service (Amazon S3), and the geospatial container. SageMaker Processing provisions cluster resources for you to run city-, country-, or continent-scale geospatial ML workloads.

For our project, we’re using 25 clusters, with each cluster comprising 20 instances, to scale out our geospatial workload. Next, we divided the 8,581 images into 25 batches for efficient processing. Each batch contains approximately 340 images. These batches are then evenly distributed across the machines in a cluster. All batch manifests are uploaded to Amazon S3, ready for the processing job, so each segment is processed swiftly and efficiently.

def s2_item_to_relative_metadata_url(item):
    parts = item["Assets"]["visual"]["Href"].split("/")
    tile_prefix = parts[4:-1]
    return "{}/{}.json".format("/".join(tile_prefix), item["Id"])


num_jobs = 25
num_instances_per_job = 20 # maximum 20

manifest_list = {}
for idx in range(num_jobs):
    manifest = [{"prefix": "s3://sentinel-cogs/sentinel-s2-l2a-cogs/"}]
    manifest_list[idx] = manifest
# split the manifest for N processing jobs
for idx, item in enumerate(s2_items):
    job_idx = idx%num_jobs
    manifest_list[job_idx].append(s2_item_to_relative_metadata_url(item))
    
# upload the manifest to S3
sagemaker_session = sagemaker.Session()
s3_bucket_name = sagemaker_session.default_bucket()
s3_prefix = 'processing_job_demo'
s3_client = boto3.client("s3")
s3 = boto3.resource("s3")

manifest_dir = "manifests"
os.makedirs(manifest_dir, exist_ok=True)

for job_idx, manifest in manifest_list.items():
    manifest_file = f"{manifest_dir}/manifest{job_idx}.json"
    s3_manifest_key = s3_prefix + "/" + manifest_file
    with open(manifest_file, "w") as f:
        json.dump(manifest, f)

    s3_client.upload_file(manifest_file, s3_bucket_name, s3_manifest_key)
    print("Uploaded {} to {}".format(manifest_file, s3_manifest_key))

With our input data ready, we now turn to the core analysis that will reveal insights into vegetation health through the Normalized Difference Vegetation Index (NDVI). NDVI is calculated from the difference between Near-infrared (NIR) and Red reflectances, normalized by their sum, yielding values that range from -1 to 1. Higher NDVI values signal dense, healthy vegetation, a value of zero indicates no vegetation, and negative values usually point to water bodies. This index serves as a critical tool for assessing vegetation health and distribution. The following is an example of what NDVI looks like.
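
Written as an equation, the index described in the previous paragraph is simply:

\mathrm{NDVI} = \frac{\mathrm{NIR} - \mathrm{Red}}{\mathrm{NIR} + \mathrm{Red}}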

Sentinel 2 true color image and NDVI

%%writefile scripts/compute_vi.py

import os
import rioxarray
import json
import gc
import warnings

warnings.filterwarnings("ignore")

if __name__ == "__main__":
    print("Starting processing")

    input_path = "/opt/ml/processing/input"
    output_path = "/opt/ml/processing/output"
    input_files = []
    items = []
    for current_path, sub_dirs, files in os.walk(input_path):
        for file in files:
            if file.endswith(".json"):
                full_file_path = os.path.join(input_path, current_path, file)
                input_files.append(full_file_path)
                with open(full_file_path, "r") as f:
                    items.append(json.load(f))

    print("Received {} input files".format(len(input_files)))

    for item in items:
        print("Computing NDVI for {}".format(item["id"]))
        red_band_url = item["assets"]["red"]["href"]
        nir_band_url = item["assets"]["nir"]["href"]
        scl_mask_url = item["assets"]["scl"]["href"]
        red = rioxarray.open_rasterio(red_band_url, masked=True)
        nir = rioxarray.open_rasterio(nir_band_url, masked=True)
        scl = rioxarray.open_rasterio(scl_mask_url, masked=True)
        scl_interp = scl.interp(
            x=red["x"], y=red["y"]
        )  # interpolate SCL to the same resolution as Red and NIR bands

        # mask out cloudy pixels using SCL (https://sentinels.copernicus.eu/web/sentinel/technical-guides/sentinel-2-msi/level-2a/algorithm-overview)
        # class 8: cloud medium probability
        # class 9: cloud high probability
        # class 10: thin cirrus
        red_cloud_masked = red.where((scl_interp != 8) & (scl_interp != 9) & (scl_interp != 10))
        nir_cloud_masked = nir.where((scl_interp != 8) & (scl_interp != 9) & (scl_interp != 10))

        ndvi = (nir_cloud_masked - red_cloud_masked) / (nir_cloud_masked + red_cloud_masked)
        # save the ndvi as geotiff
        s2_tile_id = red_band_url.split("/")[-2]
        file_name = f"{s2_tile_id}_ndvi.tif"
        output_file_path = f"{output_path}/{file_name}"
        ndvi.rio.to_raster(output_file_path)
        print("Written output: {}".format(output_file_path))

        # keep memory usage low
        del red
        del nir
        del scl
        del scl_interp
        del red_cloud_masked
        del nir_cloud_masked
        del ndvi

        gc.collect()

Now that we have the compute logic defined, we’re ready to start the geospatial SageMaker Processing job. This involves a straightforward three-step process: setting up the compute cluster, defining the computation specifics, and organizing the input and output details.

First, to set up the cluster, we decide on the number and type of instances required for the job, making sure they’re well-suited for geospatial data processing. The compute environment itself is prepared by selecting a geospatial image that comes with all commonly used packages for processing geospatial data.

Next, for the input, we use the previously created manifest that lists all image hyperlinks. We also designate an S3 location to save our results.

With these elements configured, we’re able to initiate multiple processing jobs at once, allowing them to operate concurrently for efficiency.

from multiprocessing import Process
import sagemaker
import boto3 
from botocore.config import Config
from sagemaker import get_execution_role
from sagemaker.sklearn.processing import ScriptProcessor
from sagemaker.processing import ProcessingInput, ProcessingOutput

role = get_execution_role()
geospatial_image_uri = '081189585635.dkr.ecr.us-west-2.amazonaws.com/sagemaker-geospatial-v1-0:latest'
# use the retry behaviour of boto3 to avoid throttling issue
sm_boto = boto3.client('sagemaker', config=Config(connect_timeout=5, read_timeout=60, retries={'max_attempts': 20}))
sagemaker_session = sagemaker.Session(sagemaker_client = sm_boto)

def run_job(job_idx):
    s3_manifest = f"s3://{s3_bucket_name}/{s3_prefix}/{manifest_dir}/manifest{job_idx}.json"
    s3_output = f"s3://{s3_bucket_name}/{s3_prefix}/output"
    script_processor = ScriptProcessor(
        command=['python3'],
        image_uri=geospatial_image_uri,
        role=role,
        instance_count=num_instances_per_job,
        instance_type='ml.m5.xlarge',
        base_job_name=f'ca-s2-nvdi-{job_idx}',
        sagemaker_session=sagemaker_session,
    )

    script_processor.run(
        code='scripts/compute_vi.py',
        inputs=[
            ProcessingInput(
                source=s3_manifest,
                destination='/opt/ml/processing/input/',
                s3_data_type='ManifestFile',
                s3_data_distribution_type="ShardedByS3Key"
            ),
        ],
        outputs=[
            ProcessingOutput(
                source='/opt/ml/processing/output/',
                destination=s3_output,
                s3_upload_mode='Continuous'
            )
        ],
    )
    time.sleep(2)

processes = []
for idx in range(num_jobs):
    p = Process(target=run_job, args=(idx,))
    processes.append(p)
    p.start()
    
for p in processes:
    p.join()

After you launch the job, SageMaker automatically spins up the required instances and configures the cluster to process the images listed in your input manifest. This entire setup operates seamlessly, without needing your hands-on management. To monitor and manage the processing jobs, you can use the SageMaker console. It offers real-time updates on the status and completion of your processing tasks. In our example, it took under 20 minutes to process all 8,581 images with 500 instances. The scalability of SageMaker allows for faster processing times if needed, simply by increasing the number of instances.
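
If you prefer to track progress programmatically instead of in the console, the same status information is available through the SageMaker API. The following is a minimal sketch that polls the jobs created above using boto3; it assumes the base_job_name prefix ca-s2-nvdi used in the earlier code.

import boto3

sm_client = boto3.client("sagemaker")

# list all processing jobs whose names contain the base_job_name prefix used above
paginator = sm_client.get_paginator("list_processing_jobs")
job_summaries = []
for page in paginator.paginate(NameContains="ca-s2-nvdi"):
    job_summaries.extend(page["ProcessingJobSummaries"])

# print the current status of each job (InProgress, Completed, Failed, and so on)
for summary in job_summaries:
    job = sm_client.describe_processing_job(ProcessingJobName=summary["ProcessingJobName"])
    print(f"{summary['ProcessingJobName']}: {job['ProcessingJobStatus']}")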

Sagemaker processing job portal

Conclusion

The power and efficiency of SageMaker geospatial capabilities have opened new doors for environmental monitoring, particularly in the realm of vegetation mapping. Through this example, we showed how to process over 8,500 satellite images in less than 20 minutes, demonstrating not only the technical feasibility, but also the efficiency gains of using the cloud for environmental analysis. This illustrates a significant leap from traditional, resource-intensive methods to a more agile, scalable, and cost-effective approach. The flexibility to scale processing resources up or down as needed, combined with the ease of accessing and analyzing vast datasets, positions SageMaker as a transformative tool in the field of geospatial analysis. By simplifying the complexities associated with large-scale data processing, SageMaker enables scientists, researchers, and business stakeholders to focus more on deriving insights and less on infrastructure and data management.

As we look to the future, the integration of ML and geospatial analytics promises to further enhance our understanding of the planet’s ecological systems. The potential to monitor changes in real time, predict future trends, and respond with more informed decisions can significantly contribute to global conservation efforts. This example of vegetation mapping is just the beginning for running planetary-scale ML. See Amazon SageMaker geospatial capabilities to learn more.


About the Author

Xiong Zhou is a Senior Applied Scientist at AWS. He leads the science team for Amazon SageMaker geospatial capabilities. His current area of research includes LLM evaluation and data generation. In his spare time, he enjoys running, playing basketball and spending time with his family.

Anirudh Viswanathan is a Sr Product Manager, Technical – External Services with the SageMaker geospatial ML team. He holds a Masters in Robotics from Carnegie Mellon University, an MBA from the Wharton School of Business, and is named inventor on over 40 patents. He enjoys long-distance running, visiting art galleries and Broadway shows.

Janosch Woschitz is a Senior Solutions Architect at AWS, specializing in AI/ML. With over 15 years of experience, he supports customers globally in leveraging AI and ML for innovative solutions and building ML platforms on AWS. His expertise spans machine learning, data engineering, and scalable distributed systems, augmented by a strong background in software engineering and industry expertise in domains such as autonomous driving.

Li Erran Li is the applied science manager at human-in-the-loop services, AWS AI, Amazon. His research interests are 3D deep learning, and vision and language representation learning. Previously, he was a senior scientist at Alexa AI, the head of machine learning at Scale AI, and the chief scientist at Pony.ai. Before that, he was with the perception team at Uber ATG and the machine learning platform team at Uber, working on machine learning for autonomous driving, machine learning systems, and strategic initiatives of AI. He started his career at Bell Labs and was an adjunct professor at Columbia University. He co-taught tutorials at ICML’17 and ICCV’19, and co-organized several workshops at NeurIPS, ICML, CVPR, and ICCV on machine learning for autonomous driving, 3D vision and robotics, machine learning systems, and adversarial machine learning. He holds a PhD in computer science from Cornell University. He is an ACM Fellow and IEEE Fellow.

Amit Modi is the product leader for SageMaker MLOps, ML Governance, and Responsible AI at AWS. With over a decade of B2B experience, he builds scalable products and teams that drive innovation and deliver value to customers globally.

Kris Efland is a visionary technology leader with a successful track record in driving product innovation and growth for over 20 years. Kris has helped create new products including consumer electronics and enterprise software across many industries, at both startups and large companies. In his current role at Amazon Web Services (AWS), Kris leads the Geospatial AI/ML category. He works at the forefront of Amazon’s fastest-growing ML service, Amazon SageMaker, which serves over 100,000 customers worldwide. He recently led the launch of Amazon SageMaker’s new geospatial capabilities, a powerful set of tools that allow data scientists and machine learning engineers to build, train, and deploy ML models using satellite imagery, maps, and location data. Before joining AWS, Kris was the Head of Autonomous Vehicle (AV) Tools and AV Maps for Lyft, where he led the company’s autonomous mapping efforts and toolchain used to build and operate Lyft’s fleet of autonomous vehicles. He also served as the Director of Engineering at HERE Technologies and Nokia and has co-founded several startups.

Unlocking insights and enhancing customer service: Intact’s transformative AI journey with AWS

Intact Financial Corporation is the leading provider of property and casualty insurance in Canada, a leading provider of global specialty insurance, and a leader in commercial lines in the UK and Ireland. Intact faced a challenge in managing its vast network of customer support call centers and required a workable solution within 6 months and a long-term solution within 1 year. With up to 20,000 calls per day, the manual auditing process was inefficient and struggled to keep up with increasing call traffic and rising customer service expectations. Quality control agents had to manually pick calls to audit, which was not a scalable solution. To address this, Intact turned to AI and speech-to-text technology to unlock insights from calls and improve customer service. The company developed an automated solution called Call Quality (CQ) using AI services from Amazon Web Services (AWS). The implementation of CQ allowed Intact to handle 1,500% more calls (15 times more calls per auditor), reduce agent handling time by 10%, and generate valuable insights about agent behavior, leading to improved customer service.

Amazon Transcribe is a fully managed automatic speech recognition (ASR) service that helps developers add speech-to-text capabilities to applications. It uses deep learning to convert audio to text quickly and accurately. In this post, we demonstrate how the CQ solution used Amazon Transcribe and other AWS services to improve critical KPIs with AI-powered contact center call auditing and analytics.

This allowed Intact to transcribe customer calls accurately, train custom language models, simplify the call auditing process, and extract valuable customer insights more efficiently.

Solution overview

Intact aimed to develop a cost-effective and efficient call analytics platform for their contact centers by using speech-to-text and machine learning technologies. The goal was to refine customer service scripts, provide coaching opportunities for agents, and improve call handling processes. By doing so, Intact hoped to improve agent efficiency, identify business opportunities, and analyze customer satisfaction, potential product issues, and training gaps. The following figure shows the architecture of the solution, which is described in the following sections.

Intact selected Amazon Transcribe as their speech-to-text AI solution for its accuracy in handling both English and Canadian French. This was a key factor in Intact’s decision, because the company sought a versatile platform capable of adapting to their diverse business needs. Amazon Transcribe’s deep learning capabilities, which can handle a wide range of speech and acoustic characteristics, and its scalability to process anywhere from a few hundred to tens of thousands of calls daily also played a pivotal role. Additionally, Intact was impressed that Amazon Transcribe could adapt to various post-call analytics use cases across their organization.

Call processing and model serving

Intact has on-premises contact centers and cloud contact centers, so they built a call acquisition process to ingest calls from both sources. The architecture incorporates a fully automated workflow, powered by Amazon EventBridge, which triggers an AWS Step Functions workflow when an audio file is uploaded to a designated Amazon Simple Storage Service (Amazon S3) bucket. This serverless processing pipeline is built around Amazon Transcribe, which processes the call recordings and converts them from speech to text. Notifications of processed transcriptions are sent to an Amazon Simple Queue Service (Amazon SQS) queue, which aids in decoupling the architecture and resuming the Step Functions state machine workflow. AWS Lambda is used in this architecture as a transcription processor to store the processed transcriptions into an Amazon OpenSearch Service table.
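
Intact’s exact implementation isn’t published here, but the step that starts a transcription, whether expressed as a Step Functions SDK integration or as a Lambda task, reduces to a single Amazon Transcribe API call. The following hypothetical handler reacts to the S3 upload event delivered by EventBridge; the bucket name, job naming scheme, and language options are illustrative assumptions.

import uuid
import boto3

transcribe = boto3.client("transcribe")

def handler(event, context):
    # EventBridge delivers the S3 bucket and object key for the uploaded call recording
    bucket = event["detail"]["bucket"]["name"]
    key = event["detail"]["object"]["key"]

    transcribe.start_transcription_job(
        TranscriptionJobName=f"call-{uuid.uuid4()}",
        Media={"MediaFileUri": f"s3://{bucket}/{key}"},
        IdentifyLanguage=True,                           # calls may be in English or Canadian French
        LanguageOptions=["en-US", "fr-CA"],
        OutputBucketName="example-transcripts-bucket",   # illustrative output location
    )
    return {"status": "transcription started", "key": key}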

The call processing workflow uses custom machine learning (ML) models built by Intact that run on Amazon Fargate and Amazon Elastic Compute Cloud (Amazon EC2). The transcriptions in OpenSearch are then further enriched with these custom ML models to perform component identification and provide valuable insights such as named entity recognition, speaker role identification, sentiment analysis, and personally identifiable information (PII) redaction. Regular improvements to existing models and the addition of new ones allowed further insights to be extracted, such as reason for call, script adherence, call outcome, and sentiment analysis across various business departments, from claims to personal lines. Amazon DynamoDB is used in this architecture to control the limits of the queues. The call transcriptions are then compressed from WAV to MP3 format to optimize storage costs on Amazon S3.

Machine learning operations (MLOps)

Intact also built an automated MLOps pipeline that uses Step Functions, Lambda, and Amazon S3. This pipeline provides self-service capabilities for data scientists to track ML experiments and push new models to an S3 bucket. It offers flexibility for data scientists to conduct shadow deployments and capacity planning, enabling them to seamlessly switch between models for both production and experimentation purposes. Additionally, the application offers backend dashboards tailored to MLOps functionalities, ensuring smooth monitoring and optimization of machine learning models.

Frontend and API

The CQ application offers a robust search interface specially crafted for call quality agents, equipping them with powerful auditing capabilities for call analysis. The application’s backend is powered by Amazon OpenSearch Service for the search functionality. The application also uses Amazon Cognito to provide single sign-on for secure access. Lastly, Lambda functions are used for orchestration to fetch dynamic content from OpenSearch.
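
The post doesn’t include Intact’s code, but as an illustration of this orchestration layer, a Lambda function that fetches dynamic content from OpenSearch might look like the following sketch; the domain endpoint, index name, and field names are hypothetical.

import boto3
from opensearchpy import OpenSearch, RequestsHttpConnection
from requests_aws4auth import AWS4Auth

# hypothetical OpenSearch Service domain endpoint and Region
HOST = "search-call-quality-xxxxxxxx.us-east-1.es.amazonaws.com"
REGION = "us-east-1"

credentials = boto3.Session().get_credentials()
auth = AWS4Auth(credentials.access_key, credentials.secret_key, REGION, "es",
                session_token=credentials.token)

client = OpenSearch(
    hosts=[{"host": HOST, "port": 443}],
    http_auth=auth,
    use_ssl=True,
    verify_certs=True,
    connection_class=RequestsHttpConnection,
)

def handler(event, context):
    # return recent transcriptions for one agent within a date range (field names are illustrative)
    query = {
        "size": 50,
        "query": {
            "bool": {
                "must": [
                    {"match": {"agent_id": event["agent_id"]}},
                    {"range": {"call_date": {"gte": event["start_date"], "lte": event["end_date"]}}},
                ]
            }
        },
    }
    return client.search(index="call-transcriptions", body=query)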

The application offers trend dashboards customized to deliver actionable business insights, aiding in identifying key areas where agents allocate their time. Using data from sources like Amazon S3 and Snowflake, Intact builds comprehensive business intelligence dashboards showcasing key performance metrics such as periods of silence and call handle time. This capability enables call quality agents to delve deeper into call components, facilitating targeted agent coaching opportunities.

Call Quality Trend Dashboard

The following figure is an example of the Call Quality Trend Dashboard, showing the information available to agents. This includes the ability to filter on multiple criteria including Dates and Languages, Average Handle Time per Components and Unit Managers, and Speech time vs. Silence Time.

Results

The implementation of the new system has led to a significant increase in efficiency and productivity. There has been a 1,500% increase in auditing speed and a 1,500% increase in the number of calls reviewed. Additionally, by building the MLOps on AWS alongside the CQ solution, the team has reduced the delivery of new ML models for providing analytics from days to mere hours, making auditors 65% more efficient. This has also resulted in a 10% reduction in agents’ time per call and a 10% reduction of average hold time as they receive targeted coaching to improve their customer conversations. This efficiency has allowed for more effective use of auditors’ time in devising coaching strategies, improving scripts, and agent training.

Additionally, the solution has provided intangible benefits such as extremely high availability, with no major downtime since 2020, and high cost predictability. The solution’s modular design has also led to robust deployments, which significantly reduced the time for new releases to less than an hour. This has also contributed to a near-zero failure rate during deployment.

Conclusion

In conclusion, Intact Financial Corporation’s implementation of the CQ solution, powered by AWS AI services, has revolutionized their customer service approach. This case study serves as a testament to the transformative power of AI and speech-to-text technology in enhancing customer service efficiency and effectiveness. The solution’s design and capabilities position Intact well to use generative AI for future transcription projects. As next steps, Intact plans to further use this technology by processing calls using Amazon Transcribe streaming for real-time transcription and deploying a virtual agent to provide human agents with relevant information and recommended responses.

The journey of Intact Financial Corporation is one example of how embracing AI can lead to significant improvements in service delivery and customer satisfaction. For customers looking to quickly get started on their call analytics journey, explore Amazon Transcribe Call Analytics for live call analytics and agent assist and post call analytics.


About the Authors

Étienne Brouillard is an AWS AI Principal Architect at Intact Financial Corporation, Canada’s largest provider of property and casualty insurance.

Ami Dani is a Senior Technical Program Manager at AWS focusing on AI/ML services. During her career, she has focused on delivering transformative software development projects for the federal government and large companies in industries as diverse as advertising, entertainment, and finance. Ami has experience driving business growth, implementing innovative training programs and successfully managing complex, high-impact projects.

Prabir Sekhri is a Senior Solutions Architect at AWS in the enterprise financial services sector. During his career, he has focused on digital transformation projects within large companies in industries as diverse as finance, multimedia, telecommunications as well as the energy and gas sectors. His background includes DevOps, security, and designing and architecting enterprise storage solutions. Besides technology, Prabir has always been passionate about playing music. He leads a jazz ensemble in Montreal as a pianist, composer and arranger.

Accelerate migration portfolio assessment using Amazon Bedrock

Conducting assessments on application portfolios that need to be migrated to the cloud can be a lengthy endeavor. Despite the existence of AWS Application Discovery Service or the presence of some form of configuration management database (CMDB), customers still face many challenges. These include time taken for follow-up discussions with application teams to review outputs and understand dependencies (approximately 2 hours per application), cycles needed to generate a cloud architecture design that meets security and compliance requirements, and the effort needed to provide cost estimates by selecting the right AWS services and configurations for optimal application performance in the cloud. Typically, it takes 6–8 weeks to carry out these tasks before actual application migrations begin.

In this blog post, we will harness the power of generative AI and Amazon Bedrock to help organizations simplify, accelerate, and scale migration assessments. Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models (FMs) from leading AI companies like AI21 Labs, Anthropic, Cohere, Meta, Stability AI, and Amazon through a single API, along with a broad set of capabilities you need to build generative AI applications with security, privacy, and responsible AI. By using Amazon Bedrock Agents, action groups, and Amazon Bedrock Knowledge Bases, we demonstrate how to build a migration assistant application that rapidly generates migration plans, R-dispositions, and cost estimates for applications migrating to AWS. This approach enables you to scale your application portfolio discovery and significantly accelerate your planning phase.
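
To make the interaction pattern concrete, the following minimal sketch shows how a client could send a request such as “Generate R-disposition with cost estimates” to a deployed Amazon Bedrock agent with the AWS SDK for Python; the agent ID and alias ID are placeholders, not values from this post.

import uuid
import boto3

bedrock_agent_runtime = boto3.client("bedrock-agent-runtime")

def ask_migration_assistant(prompt: str) -> str:
    # agentId and agentAliasId come from your deployed agent; placeholders shown here
    response = bedrock_agent_runtime.invoke_agent(
        agentId="AGENT_ID_PLACEHOLDER",
        agentAliasId="AGENT_ALIAS_ID_PLACEHOLDER",
        sessionId=str(uuid.uuid4()),
        inputText=prompt,
    )
    # the agent's answer is streamed back as a sequence of text chunks
    chunks = []
    for event in response["completion"]:
        if "chunk" in event:
            chunks.append(event["chunk"]["bytes"].decode("utf-8"))
    return "".join(chunks)

print(ask_migration_assistant("Generate R-disposition with cost estimates for application A1-CRM"))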

General requirements for a migration assistant

The following are some key requirements that you should consider when building a migration assistant.

Accuracy and consistency

Is your migration assistant application able to render accurate and consistent responses?

Guidance: To ensure accurate and consistent responses from your migration assistant, implement Amazon Bedrock Knowledge Bases. The knowledge base should contain contextual information based on your company’s private data sources. This enables the migration assistant to use Retrieval-Augmented Generation (RAG), which enhances the accuracy and consistency of responses. Your knowledge base should comprise multiple data sources, including:

Handle hallucinations

How are you reducing the hallucinations from the large language model (LLM) for your migration assistant application?

Guidance: Reducing hallucinations in LLMs involves implementation of several key strategies. Implement customized prompts based on your requirements and incorporate advanced prompting techniques to guide the model’s reasoning and provide examples for more accurate responses. These techniques include chain-of-thought prompting, zero-shot prompting, multishot prompting, few-shot prompting, and model-specific prompt engineering guidelines (see Anthropic Claude on Amazon Bedrock prompt engineering guidelines). RAG combines information retrieval with generative capabilities to enhance contextual relevance and reduce hallucinations. Finally, a feedback loop or human-in-the-loop when fine-tuning LLMs on specific datasets will help align the responses with accurate and relevant information, mitigating errors and outdated content.
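
As an illustration of these techniques, a system prompt for the migration assistant could combine explicit instructions, a grounding constraint, and a short example. The wording below is only a sketch to show the structure, not the prompt used in this solution.

MIGRATION_ASSISTANT_SYSTEM_PROMPT = """You are a migration assessment assistant.
Use ONLY facts retrieved from the knowledge base; if information is missing,
respond with "insufficient data" instead of guessing.

Think step by step:
1. Summarize the application's current architecture.
2. Identify dependencies, constraints, and compliance requirements.
3. Recommend an R-disposition (rehost, replatform, refactor, repurchase,
   retire, or retain) and justify it using only the retrieved facts.

Example:
Input: "Application A2-CMDB: Java monolith on self-managed VMs with an Oracle database."
Output: "Recommended disposition: replatform. Rationale: ..."
"""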

Modular design

Is the design of your migration assistant modular?

Guidance: Building a migration assistant application using Amazon Bedrock action groups, which have a modular design, offers three key benefits.

  • Customization and adaptability: Action groups allow users to customize migration workflows to suit specific AWS environments and requirements. For instance, if a user is migrating a web application to AWS, they can customize the migration workflow to include specific actions tailored to web server setup, database migration, and network configuration. This customization ensures that the migration process aligns with the unique needs of the application being migrated.
  • Maintenance and troubleshooting: Simplifies maintenance and troubleshooting tasks by isolating issues to individual components. For example, if there’s an issue with the database migration action within the migration workflow, it can be addressed independently without affecting other components. This isolation streamlines the troubleshooting process and minimizes the impact on the overall migration operation, ensuring a smoother migration and faster resolution of issues.
  • Scalability and reusability: Promote scalability and reusability across different AWS migration projects. For instance, if a user successfully migrates an application to AWS using a set of modular action groups, they can reuse those same action groups to migrate other applications with similar requirements. This reusability saves time and effort when developing new migration workflows and ensures consistency across multiple migration projects. Additionally, modular design facilitates scalability by allowing users to scale the migration operation up or down based on workload demands. For example, if they need to migrate a larger application with higher resource requirements, they can easily scale up the migration workflow by adding more instances of relevant action groups, without needing to redesign the entire workflow from scratch.
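
To illustrate this modular structure, each action group is typically backed by a Lambda function. The sketch below follows the function-details style of action group definitions; the event and response field names reflect that contract as we understand it, and the function name and business logic are placeholders.

import json

def lambda_handler(event, context):
    # Amazon Bedrock Agents pass the invoked action group, function, and parameters in the event
    function_name = event.get("function")
    parameters = {p["name"]: p["value"] for p in event.get("parameters", [])}

    if function_name == "create_migration_plan":      # hypothetical function name
        result = build_migration_plan(parameters.get("application_id"))
    else:
        result = {"error": f"Unknown function: {function_name}"}

    # return the result in the shape expected for function-details action groups
    return {
        "messageVersion": "1.0",
        "response": {
            "actionGroup": event.get("actionGroup"),
            "function": function_name,
            "functionResponse": {
                "responseBody": {"TEXT": {"body": json.dumps(result)}}
            },
        },
    }

def build_migration_plan(application_id):
    # placeholder for the RAG-backed logic described in this post
    return {"application_id": application_id, "plan": "..."}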

Overview of solution

Before we dive deep into the deployment, let’s walk through the key steps of the architecture that will be established, as shown in Figure 1.

  1. Users interact with the migration assistant through the Amazon Bedrock chat console to input their requests. For example, a user might request to Generate R-disposition with cost estimates or Generate Migration plan for specific application IDs (for example, A1-CRM or A2-CMDB).
  2. The migration assistant, which uses Amazon Bedrock agents, is configured with instructions, action groups, and knowledge bases. When processing the user’s request, the migration assistant invokes relevant action groups such as R Dispositions and Migration Plan, which in turn invoke specific AWS Lambda functions.
  3. The Lambda functions process the request using RAG to produce the required output.
  4. The resulting output documents (R-Dispositions with cost estimates and Migration Plan) are then uploaded to a designated Amazon Simple Storage Service (Amazon S3) bucket.

The following image is a screenshot of a sample user interaction with the migration assistant.

Prerequisites

You should have the following:

Deployment steps

  1. Configure a knowledge base:
    • Open the AWS Management Console for Amazon Bedrock and navigate to Amazon Bedrock Knowledge Bases.
    • Choose Create knowledge base and enter a name and optional description.
    • Select the vector database (for example, Amazon OpenSearch Serverless).
    • Select the embedding model (for example, Amazon Titan Embedding G1 – Text).
    • Add data sources:
      • For Amazon S3: Specify the S3 bucket and prefix, file types, and chunking configuration.
      • For custom data: Use the API to ingest data programmatically.
    • Review and create the knowledge base.
  2. Set up Amazon Bedrock Agents:
    • In the Amazon Bedrock console, go to the Agents section and choose Create agent.
    • Enter a name and optional description for the agent.
    • Select the foundation model (for example, Anthropic Claude V3).
    • Configure the agent’s AWS Identity and Access Management (IAM) role to grant necessary permissions.
    • Add instructions to guide the agent’s behavior.
    • Optionally, add the previously created Amazon Bedrock Knowledge Base to enhance the agent’s responses.
    • Configure additional settings such as maximum tokens and temperature.
    • Review and create the agent.
  3. Configure action groups for the agent:
    • On the agent’s configuration page, navigate to the Action groups section.
    • Choose Add action group for each required group (for example, Create R-disposition Assessment and Create Migration Plan).
    • For each action group:
    • After adding all action groups, review the entire agent configuration and deploy the agent.

Clean up

To avoid unnecessary charges, delete the resources created during testing. Use the following steps to clean up the resources:

  1. Delete the Amazon Bedrock knowledge base: Open the Amazon Bedrock console.
    Delete the knowledge base from any agents that it’s associated with.

    • From the left navigation pane, choose Agents.
    • Select the Name of the agent that you want to delete the knowledge base from.
    • A red banner appears to warn you to delete the reference to the knowledge base, which no longer exists, from the agent.
    • Select the radio button next to the knowledge base that you want to remove. Choose More and then choose Delete.
    • From the left navigation pane, choose Knowledge base.
    • To delete a source, either choose the radio button next to the source and select Delete or select the Name of the source and then choose Delete in the top right corner of the details page.
    • Review the warnings for deleting a knowledge base. If you accept these conditions, enter delete in the input box and choose Delete to confirm.
  2. Delete the Agent
    • In the Amazon Bedrock console, choose Agents from the left navigation pane.
    • Select the radio button next to the agent to delete.
    • A modal appears warning you about the consequences of deletion. Enter delete in the input box and choose Delete to confirm.
    • A blue banner appears to inform you that the agent is being deleted. When deletion is complete, a green success banner appears.
  3. Delete all the other resources including the Lambda functions and any AWS services used for account customization.

Conclusion

Conducting assessments on application portfolios for AWS Cloud migration can be a time-consuming process. It involves analyzing data from various sources, holding discovery and design discussions to develop an AWS Cloud architecture design, and producing cost estimates.

In this blog post, we demonstrated how you can simplify, accelerate, and scale migration assessments by using generative AI and Amazon Bedrock. We showcased using Amazon Bedrock Agents, action groups, and Amazon Bedrock Knowledge Bases for a migration assistant application that renders migration plans, R-dispositions, and cost estimates. This approach significantly reduces the time and effort required for portfolio assessments, helping organizations to scale and expedite their journey to the AWS Cloud.

Ready to improve your cloud migration process with generative AI in Amazon Bedrock? Begin by exploring the Amazon Bedrock User Guide to understand how it can streamline your organization’s cloud journey. For further assistance and expertise, consider using AWS Professional Services (contact sales) to help you streamline your cloud migration journey and maximize the benefits of Amazon Bedrock.


About the Authors

Ebbey Thomas is a Senior Cloud Architect at AWS, with a strong focus on leveraging generative AI to enhance cloud infrastructure automation and accelerate migrations. In his role at AWS Professional Services, Ebbey designs and implements solutions that improve cloud adoption speed and efficiency while ensuring secure and scalable operations for AWS users. He is known for solving complex cloud challenges and driving tangible results for clients. Ebbey holds a BS in Computer Engineering and an MS in Information Systems from Syracuse University.

Shiva Vaidyanathan is a Principal Cloud Architect at AWS. He provides technical guidance and designs and leads implementation projects for customers, ensuring their success on AWS. He works towards making cloud networking simpler for everyone. Prior to joining AWS, he worked on several NSF-funded research initiatives on performing secure computing in public cloud infrastructures. He holds an MS in Computer Science from Rutgers University and an MS in Electrical Engineering from New York University.

Improve public speaking skills using a generative AI-based virtual assistant with Amazon Bedrock

Public speaking is a critical skill in today’s world, whether it’s for professional presentations, academic settings, or personal growth. By practicing it regularly, individuals can build confidence, manage anxiety in a healthy way, and develop effective communication skills leading to successful public speaking engagements. Now, with the advent of large language models (LLMs), you can use generative AI-powered virtual assistants to provide real-time analysis of speech, identification of areas for improvement, and suggestions for enhancing speech delivery.

In this post, we present an Amazon Bedrock powered virtual assistant that can transcribe presentation audio and examine it for language use, grammatical errors, filler words, and repetition of words and sentences to provide recommendations as well as suggest a curated version of the speech to elevate the presentation. This solution helps refine communication skills and empower individuals to become more effective and impactful public speakers. Organizations across various sectors, including corporations, educational institutions, government entities, and social media personalities, can use this solution to provide automated coaching for their employees, students, and public speaking engagements.

In the following sections, we walk you through constructing a scalable, serverless, end-to-end Public Speaking Mentor AI Assistant with Amazon Bedrock, Amazon Transcribe, and AWS Step Functions using provided sample code. Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models (FMs) from leading AI companies like AI21 Labs, Anthropic, Cohere, Meta, Mistral AI, Stability AI, and Amazon through a single API, along with a broad set of capabilities to build generative AI applications with security, privacy, and responsible AI.

Overview of solution

The solution consists of four main components:

  • An Amazon Cognito user pool for user authentication. Authenticated users are granted access to the Public Speaking Mentor AI Assistant web portal to upload audio and video recordings.
  • A simple web portal created using Streamlit to upload audio and video recordings. The uploaded files are stored in an Amazon Simple Storage Service (Amazon S3) bucket for later processing, retrieval, and analysis.
  • A Step Functions standard workflow to orchestrate converting the audio to text using Amazon Transcribe and then invoking Amazon Bedrock with AI prompt chaining to generate speech recommendations and rewrite suggestions.
  • Amazon Simple Notification Service (Amazon SNS) to send an email notification to the user with Amazon Bedrock generated recommendations.

This solution uses Amazon Transcribe for speech-to-text conversion. When an audio or video file is uploaded, Amazon Transcribe transcribes the speech into text. This text is passed as an input to Anthropic’s Claude 3.5 Sonnet on Amazon Bedrock. The solution sends two prompts to Amazon Bedrock: one to generate feedback and recommendations on language usage, grammar, filler words, repetition, and more, and another to obtain a curated version of the original speech. Prompt chaining is performed with Amazon Bedrock for these prompts. The solution then consolidates the outputs, displays recommendations on the user’s webpage, and emails the results.

The generative AI capabilities of Amazon Bedrock efficiently process user speech inputs. It uses natural language processing to analyze the speech and provides tailored recommendations. Using LLMs trained on extensive data, Amazon Bedrock generates curated speech outputs to enhance the presentation delivery.
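
For illustration, the two chained prompts can be reduced to a pair of InvokeModel calls. The sketch below is representative rather than a copy of the solution’s state machine: the model ID is the public identifier for Anthropic’s Claude 3.5 Sonnet on Amazon Bedrock, and the prompt wording is an assumption.

import json
import boto3

bedrock_runtime = boto3.client("bedrock-runtime")
MODEL_ID = "anthropic.claude-3-5-sonnet-20240620-v1:0"

def invoke_claude(system_prompt: str, user_text: str, max_tokens: int = 2000) -> str:
    body = {
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "system": system_prompt,
        "messages": [{"role": "user", "content": [{"type": "text", "text": user_text}]}],
    }
    response = bedrock_runtime.invoke_model(modelId=MODEL_ID, body=json.dumps(body))
    return json.loads(response["body"].read())["content"][0]["text"]

transcript = "..."  # output of the Amazon Transcribe job

# prompt 1: feedback on grammar, filler words, repetition, and overall delivery
feedback = invoke_claude(
    "You are a public speaking coach. Identify grammatical errors, filler words, "
    "repetition of words or content, and other issues, and give actionable recommendations.",
    transcript,
)

# prompt 2 (chained): rewrite the speech, incorporating the previous response
rewrite = invoke_claude(
    "Rewrite the following speech so that it addresses the recommendations while "
    "preserving the speaker's intent and tone.",
    f"Speech:\n{transcript}\n\nRecommendations:\n{feedback}",
)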

The following diagram shows our solution architecture.

Scope of solution

Let’s explore the architecture step by step:

  1. The user authenticates to the Public Speaking Mentor AI Assistant web portal (a Streamlit application hosted on the user’s local desktop) using the Amazon Cognito user pool authentication mechanism.
  2. The user uploads an audio or video file to the web portal, which is stored in an S3 bucket encrypted using server-side encryption with Amazon S3 managed keys (SSE-S3).
  3. The S3 service triggers an s3:ObjectCreated event for each file that is saved to the bucket.
  4. Amazon EventBridge invokes the Step Functions state machine based on this event. Because the state machine execution could exceed 5 minutes, we use a standard workflow. Step Functions state machine logs are sent to Amazon CloudWatch for logging and troubleshooting purposes.
  5. The Step Functions workflow uses AWS SDK integrations to invoke Amazon Transcribe and initiates a StartTranscriptionJob, passing the S3 bucket, prefix path, and object name in the MediaFileUri parameter. The workflow waits for the transcription job to complete and saves the transcript in another S3 bucket prefix path.
  6. The Step Functions workflow uses the optimized integrations to invoke the Amazon Bedrock InvokeModel API, which specifies the Anthropic Claude 3.5 Sonnet model, the system prompt, maximum tokens, and the transcribed speech text as inputs to the API. The system prompt instructs the Anthropic Claude 3.5 Sonnet model to provide suggestions on how to improve the speech by identifying incorrect grammar, repetitions of words or content, use of filler words, and other recommendations.
  7. After receiving a response from Amazon Bedrock, the Step Functions workflow uses prompt chaining to craft another input for Amazon Bedrock, incorporating the previous transcribed speech and the model’s previous response, and requesting the model to provide suggestions for rewriting the speech.
  8. The workflow combines these outputs from Amazon Bedrock and crafts a message that is displayed on the logged-in user’s webpage.
  9. The Step Functions workflow invokes the Amazon SNS Publish optimized integration to send an email to the user with the Amazon Bedrock generated message.
  10. The Streamlit application queries Step Functions to display output results on the Amazon Cognito user’s webpage.

Prerequisites

For implementing the Public Speaking Mentor AI Assistant solution, you should have the following prerequisites:

  1. An AWS account with sufficient AWS Identity and Access Management (IAM) permissions for the following AWS services to deploy the solution and run the Streamlit application web portal:
    • Amazon Bedrock
    • AWS CloudFormation
    • Amazon CloudWatch
    • Amazon Cognito
    • Amazon EventBridge
    • Amazon Transcribe
    • Amazon SNS
    • Amazon S3
    • AWS Step Functions
  2. Model access enabled for Anthropic’s Claude 3.5 Sonnet on Amazon Bedrock in your desired AWS Region.
  3. A local desktop environment with the AWS Command Line Interface (AWS CLI), Python 3.8 or above, the AWS Cloud Development Kit (AWS CDK) for Python, and Git installed.
  4. The AWS CLI set up with the necessary AWS credentials and desired Region.

Deploy the Public Speaking Mentor AI Assistant solution

Complete the following steps to deploy the Public Speaking Mentor AI Assistant AWS infrastructure:

  1. Clone the repository to your local desktop environment with the following command:
    git clone https://github.com/aws-samples/improve_public_speaking_skills_using_a_genai_based_virtual_assistant_with_amazon_bedrock.git

  2. Change to the app directory in the cloned repository:
    cd improve_public_speaking_skills_using_a_genai_based_virtual_assistant_with_amazon_bedrock/app

  3. Create a Python virtual environment:
    python3 -m venv .venv

  4. Activate your virtual environment:
    source .venv/bin/activate

  5. Install the required dependencies:
    pip install -r requirements.txt

  6. Optionally, synthesize the CloudFormation template using the AWS CDK:
    cdk synth

You may need to perform a one-time AWS CDK bootstrapping using the following command. See AWS CDK bootstrapping for more details.

cdk bootstrap aws://<ACCOUNT-NUMBER-1>/<REGION-1>

  7. Deploy the CloudFormation template in your AWS account and selected Region:
    cdk deploy

After the AWS CDK is deployed successfully, you can follow the steps in the next section to create an Amazon Cognito user.

Create an Amazon Cognito user for authentication

Complete the following steps to create a user in the Amazon Cognito user pool to access the web portal. The user created doesn’t need AWS permissions.

  1. Sign in to the AWS Management Console of your account and select the Region for your deployment.
  2. On the Amazon Cognito console, choose User pools in the navigation pane.
  3. Choose the user pool created by the CloudFormation template. (The user pool name should have the prefix PSMBUserPool followed by a string of random characters as one word.)
  4. Choose Create user.

Cognito Create User

  5. Enter a user name and password, then choose Create user.

Cognito User Information

Subscribe to an SNS topic for email notifications

Complete the following steps to subscribe to an SNS topic to receive speech recommendation email notifications:

  1. Sign in to the console of your account and select the Region for your deployment.
  2. On the Amazon SNS console, choose Topics in the navigation pane.
  3. Choose the topic created by the CloudFormation template. (The name of the topic should look like InfraStack-PublicSpeakingMentorAIAssistantTopic followed by a string of random characters as one word.)
  4. Choose Create subscription.

SNS Create Subscription

  5. For Protocol, choose Email.
  6. For Endpoint, enter your email address.
  7. Choose Create subscription.

SNS Subscription Information

Run the Streamlit application to access the web portal

Complete the following steps to run the Streamlit application to access the Public Speaking Mentor AI Assistant web portal:

  1. Change the directory to webapp inside the app directory:
    cd webapp

  2. Launch the Streamlit server on port 8080:
    streamlit run webapp.py --server.port 8080

  3. Make note of the Streamlit application URL for further use. Depending on your environment setup, you can choose one of the three URLs (Local, Network, or External) provided by the running Streamlit server process.
  4. Make sure incoming traffic on port 8080 is allowed on your local machine to access the Streamlit application URL.

Use the Public Speaking Mentor AI Assistant

Complete the following steps to use the Public Speaking Mentor AI Assistant to improve your speech:

  1. Open the Streamlit application URL in your browser (Google Chrome, preferably) that you noted in the previous steps.
  2. Log in to the web portal using the Amazon Cognito user name and password created earlier for authentication.

Public Speaking Mentor AI Assistant Login Page

  3. Choose Browse files to locate and choose your recording.
  4. Choose Upload File to upload your file to an S3 bucket.

Public Speaking Mentor AI Assistant Upload File

As soon as the file upload finishes, the Public Speaking Mentor AI Assistant processes the audio transcription and prompt engineering steps to generate speech recommendations and rewrite results.

Public Speaking Mentor AI Assistant Processing

When the processing is complete, you can see the Speech Recommendations and Speech Rewrite sections on the webpage as well as in your email through Amazon SNS notifications.

On the right pane of the webpage, you can review the processing steps performed by the Public Speaking Mentor AI Assistant solution to get your speech results.

Public Speaking Mentor AI Assistant Results Page

Clean up

Complete the following steps to clean up your resources:

  1. Shut down your Streamlit application server process running in your environment using Ctrl+C.
  2. Change to the app directory in your repository.
  3. Destroy the resources created with AWS CloudFormation using the AWS CDK:
    cdk destroy

Optimize for functionality, accuracy, and cost

Let’s conduct an analysis of this proposed solution architecture to identify opportunities for functionality enhancements, accuracy improvements, and cost optimization.

Starting with prompt engineering, our approach involves analyzing users’ speech based on several criteria, such as language usage, grammatical errors, filler words, and repetition of words and sentences. Individuals and organizations have the flexibility to customize the prompt by including additional analysis parameters or adjusting existing ones to align with their requirements and company policies. Furthermore, you can set the inference parameters to control the response from the LLM deployed on Amazon Bedrock.

To create a lean architecture, we have primarily chosen serverless technologies, such as Amazon Bedrock for prompt engineering and natural language generation, Amazon Transcribe for speech-to-text conversion, Amazon S3 for storage, Step Functions for orchestration, EventBridge for scalable event handling to process audio files, and Amazon SNS for email notifications. Serverless technologies enable you to run the solution without provisioning or managing servers, allowing for automatic scaling and pay-per-use billing, which can lead to cost savings and increased agility.

For the web portal component, we are currently deploying the Streamlit application in a local desktop environment. Alternatively, you have the option to use Amazon S3 Website Hosting, which would further contribute to a serverless architecture.

To enhance the accuracy of audio-to-text translation, it’s recommended to record your presentation audio in a quiet environment, away from noise and distractions.

In cases where your media contains domain-specific or non-standard terms, such as brand names, acronyms, and technical words, Amazon Transcribe might not accurately capture these terms in your transcription output. To address transcription inaccuracies and customize your output for your specific use case, you can create custom vocabularies and custom language models.
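
The following is a minimal sketch of creating a custom vocabulary and referencing it in a transcription job with boto3; the vocabulary name, phrases, media location, and job settings are placeholders for illustration.

import boto3

transcribe = boto3.client("transcribe")

# Create a custom vocabulary with domain-specific terms (illustrative names and phrases)
transcribe.create_vocabulary(
    VocabularyName="public-speaking-terms",
    LanguageCode="en-US",
    Phrases=["SageMaker", "EventBridge", "Anthropic"],
)

# Reference the custom vocabulary when starting a transcription job
transcribe.start_transcription_job(
    TranscriptionJobName="speech-analysis-job",
    Media={"MediaFileUri": "s3://your-bucket/your-audio-file.mp3"},
    MediaFormat="mp3",
    LanguageCode="en-US",
    Settings={"VocabularyName": "public-speaking-terms"},
)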

At the time of writing, our solution analyzes only the audio component. Uploading audio files alone can optimize storage costs. You may consider converting your video files into audio using third-party tools prior to uploading them to the Public Speaking Mentor AI Assistant web portal.

Our solution currently uses the standard tier of Amazon S3. However, you have the option to choose the S3 One Zone-IA storage class for storing files that don’t require high availability. Additionally, configuring an Amazon S3 lifecycle policy can further help reduce costs.
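
A minimal sketch of such a lifecycle configuration with boto3 follows; the bucket name, transition window, and expiration period are assumptions you would adjust to your own retention requirements.

import boto3

s3 = boto3.client("s3")

# Transition uploaded recordings to S3 One Zone-IA after 30 days and expire them after 180 days (illustrative values)
s3.put_bucket_lifecycle_configuration(
    Bucket="your-speech-upload-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-and-expire-recordings",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},
                "Transitions": [{"Days": 30, "StorageClass": "ONEZONE_IA"}],
                "Expiration": {"Days": 180},
            }
        ]
    },
)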

You can configure Amazon SNS to send speech recommendations to other destinations, such as email, webhook, and Slack. Refer to Configure Amazon SNS to send messages for alerts to other destinations for more information.
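
For instance, the following sketch subscribes a hypothetical HTTPS webhook endpoint to the solution's SNS topic; the topic ARN and endpoint URL are placeholders.

import boto3

sns = boto3.client("sns")

# Subscribe an HTTPS webhook (for example, a Slack-compatible endpoint) to the notifications topic
sns.subscribe(
    TopicArn="arn:aws:sns:us-east-1:123456789012:speech-recommendations",  # placeholder topic ARN
    Protocol="https",
    Endpoint="https://example.com/webhook",  # placeholder webhook URL
)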

To estimate the cost of implementing the solution, you can use the AWS Pricing Calculator. For larger workloads, additional volume discounts may be available. We recommend contacting AWS pricing specialists or your account manager for more detailed pricing information.

Security best practices

Security and compliance is a shared responsibility between AWS and the customer, as outlined in the Shared Responsibility Model. We encourage you to review this model for a comprehensive understanding of the respective responsibilities. Refer to Security in Amazon Bedrock and Build generative AI applications on Amazon Bedrock to learn more about building secure, compliant, and responsible generative AI applications on Amazon Bedrock. OWASP Top 10 For LLMs outlines the most common vulnerabilities. We encourage you to enable Amazon Bedrock Guardrails to implement safeguards for your generative AI applications based on your use cases and responsible AI policies.

With AWS, you manage the privacy controls of your data, control how your data is used, who has access to it, and how it is encrypted. Refer to Data Protection in Amazon Bedrock and Data Protection in Amazon Transcribe for more information. Similarly, we strongly recommend referring to the data protection guidelines for each AWS service used in our solution architecture. Furthermore, we advise applying the principle of least privilege when granting permissions, because this practice enhances the overall security of your implementation.

Conclusion

By harnessing the capabilities of LLMs in Amazon Bedrock, our Public Speaking Mentor AI Assistant offers a revolutionary approach to enhancing public speaking abilities. With its personalized feedback and constructive recommendations, individuals can develop effective communication skills in a supportive and non-judgmental environment.

Unlock your potential as a captivating public speaker. Embrace the power of our Public Speaking Mentor AI Assistant and embark on a transformative journey towards mastering the art of public speaking. Try out our solution today by cloning the GitHub repository and experience the difference our cutting-edge technology can make in your personal and professional growth.


About the Authors

Nehal Sangoi is a Sr. Technical Account Manager at Amazon Web Services. She provides strategic technical guidance to help independent software vendors plan and build solutions using AWS best practices. Connect with Nehal on LinkedIn.

Akshay Singhal is a Sr. Technical Account Manager at Amazon Web Services supporting Enterprise Support customers focusing on the Security ISV segment. He provides technical guidance for customers to implement AWS solutions, with expertise spanning serverless architectures and cost optimization. Outside of work, Akshay enjoys traveling, Formula 1, making short movies, and exploring new cuisines. Connect with him on LinkedIn.

Read More

Bria 2.3, Bria 2.2 HD, and Bria 2.3 Fast are now available in Amazon SageMaker JumpStart

This post is co-written with Bar Fingerman from Bria.

We are thrilled to announce that Bria 2.3, 2.2 HD, and 2.3 Fast text-to-image foundation models (FMs) from Bria AI are now available in Amazon SageMaker JumpStart. Bria models are trained exclusively on commercial-grade licensed data, providing high standards of safety and compliance with full legal indemnity.

These advanced models from Bria AI generate high-quality and contextually relevant visual content that is ready to use in marketing, design, and image generation use cases across industries from ecommerce, media and entertainment, and gaming to consumer-packaged goods and retail.

In this post, we discuss Bria’s family of models, explain the Amazon SageMaker platform, and walk through how to discover, deploy, and run inference on a Bria 2.3 model using SageMaker JumpStart.

Overview of Bria 2.3, Bria 2.2 HD, and Bria 2.3 Fast

Bria AI offers a family of high-quality visual content models. These advanced models represent the cutting edge of generative AI technology for image creation:

  • Bria 2.3 – The core model delivers high-quality visual content with exceptional photorealism and detail, capable of generating stunning images with complex concepts in various art styles, including photorealism.
  • Bria 2.2 HD – Optimized for high-definition output, Bria 2.2 HD delivers crisp, detailed visual content that meets the demanding needs of high-resolution applications.
  • Bria 2.3 Fast – Optimized for speed, Bria 2.3 Fast generates high-quality visuals at a faster rate, perfect for applications requiring quick turnaround times without compromising on quality. Running the model on SageMaker g5 instance types gives low latency and high throughput (compared to Bria 2.3 and Bria 2.2 HD), and the p4d instance type delivers roughly twice the performance of the g5 instance.

Overview of SageMaker JumpStart

With SageMaker JumpStart, you can choose from a broad selection of publicly available FMs. ML practitioners can deploy FMs to dedicated SageMaker instances in a network-isolated environment and customize models using SageMaker for model training and deployment. You can now discover and deploy Bria models in Amazon SageMaker Studio or programmatically through the SageMaker Python SDK. Doing so gives you model performance and machine learning operations (MLOps) controls through SageMaker features such as Amazon SageMaker Pipelines, Amazon SageMaker Debugger, and container logs.
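
As a minimal sketch of the programmatic path, the following code deploys a JumpStart model with the SageMaker Python SDK. The model ID shown is a placeholder, and you must first subscribe to the corresponding Bria listing in AWS Marketplace, as described later in this post.

from sagemaker.jumpstart.model import JumpStartModel

# Placeholder model ID; look up the exact Bria model ID in SageMaker JumpStart
model = JumpStartModel(model_id="<bria-model-id>")

# Deploy to a real-time endpoint on one of the recommended GPU instance types
predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.g5.2xlarge",
)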

The model is deployed in an AWS secure environment and under your virtual private cloud (VPC) controls, helping provide data security. Bria models are available today for deployment and inferencing in SageMaker Studio in 22 AWS Regions where SageMaker JumpStart is available. Bria models will require g5 and p4 instances.

Prerequisites

To try out the Bria models using SageMaker JumpStart, you need the following prerequisites:

Discover Bria models in SageMaker JumpStart

You can access the FMs through SageMaker JumpStart in the SageMaker Studio UI and the SageMaker Python SDK. In this section, we show how to discover the models in SageMaker Studio.

SageMaker Studio is an IDE that provides a single web-based visual interface where you can access purpose-built tools to perform all ML development steps, from preparing data to building, training, and deploying your ML models. For more details on how to get started and set up SageMaker Studio, refer to Amazon SageMaker Studio.

In SageMaker Studio, you can access SageMaker JumpStart by choosing JumpStart in the navigation pane or by choosing JumpStart on the Home page.

On the SageMaker JumpStart landing page, you can find pre-trained models from popular model hubs. You can search for Bria, and the search results will list all the Bria model variants available. For this post, we use the Bria 2.3 Commercial Text-to-image model.

You can choose the model card to view details about the model such as license, data used to train, and how to use the model. You also have two options, Deploy and Preview notebooks, to deploy the model and create an endpoint.

Subscribe to Bria models in AWS Marketplace

When you choose Deploy, if the model wasn’t already subscribed, you first have to subscribe before you can deploy the model. We demonstrate the subscription process for the Bria 2.3 Commercial Text-to-image model. You can repeat the same steps for subscribing to other Bria models.

After you choose Subscribe, you’re redirected to the model overview page, where you can read the model details, pricing, usage, and other information. Choose Continue to Subscribe and accept the offer on the following page to complete the subscription.

Configure and deploy Bria models using AWS Marketplace

The configuration page gives three different launch methods to choose from. For this post, we showcase how to use the SageMaker console:

  1. For Available launch method, select SageMaker console.
  2. For Region, choose your preferred Region.
  3. Choose View in Amazon SageMaker.
  4. For Model name, enter a name (for example, Model-Bria-v2-3).
  5. For IAM role, choose an existing IAM role or create a new role that has the SageMaker full access IAM policy attached.
  6. Choose Next. The recommended instance types for this model endpoint are ml.g5.2xlarge, ml.g5.12xlarge, ml.g5.48xlarge, ml.p4d.24xlarge, and ml.p4de.24xlarge. Make sure you have the account-level service limit for one or more of these instance types to deploy this model. For more information, refer to Requesting a quota increase.
  7. In the Variants section, select any of the recommended instance types provided by Bria (for example, ml.g5.2xlarge).
  8. Choose Create endpoint configuration.

    A success message should appear after the endpoint configuration is successfully created.
  9. Choose Next to create an endpoint.
  10. In the Create endpoint section, enter the endpoint name (for example, Endpoint-Bria-v2-3-Model) and choose Submit. After you successfully create the endpoint, it’s displayed on the SageMaker endpoints page on the SageMaker console.

Configure and deploy Bria models using SageMaker JumpStart

If the Bria models are already subscribed in AWS Marketplace, you can choose Deploy in the model card page to configure the endpoint.

On the endpoint configuration page, SageMaker pre-populates the endpoint name, recommended instance type, instance count, and other details for you. You can modify them based on your requirements and then choose Deploy to create an endpoint.

After you successfully create the endpoint, the status will show as In service.

Run inference in SageMaker Studio

You can test the endpoint by passing a sample inference request payload in SageMaker Studio, or you can use a SageMaker notebook. In this section, we demonstrate using SageMaker Studio:

  1. In SageMaker Studio, in the navigation pane, choose Endpoints under Deployments.
  2. Choose the Bria endpoint you just created.
  3. On the Test inference tab, test the endpoint by sending a sample request.
    You can see the response on the same page, as shown in the following screenshot.

Text-to-image generation using a SageMaker notebook

You can also use a SageMaker notebook to run inference against the deployed endpoint using the SageMaker Python SDK.

The following code initializes a predictor for the endpoint you created using SageMaker JumpStart:

from sagemaker.predictor import Predictor
from sagemaker.serializers import JSONSerializer
from sagemaker.deserializers import JSONDeserializer

# Use the existing endpoint name
endpoint_name = "XXXXXXXX"  # Replace with your endpoint name

# Create a SageMaker predictor object
bria_predictor = Predictor(
    endpoint_name=endpoint_name,
    serializer=JSONSerializer(),
    deserializer=JSONDeserializer(),
)

bria_predictor.endpoint_name

The model responses are in base64 encoded format. The following function helps decode the base64 encoded image and displays it as an image:

import base64
from PIL import Image
import io

def display_base64_image(base64_string):
    image_bytes = base64.b64decode(base64_string)
    image_stream = io.BytesIO(image_bytes)
    image = Image.open(image_stream)

    # Display the image
    image.show()

The following is a sample payload with a text prompt to generate an image using the Bria model:

payload = {
  "prompt": "a baby riding a bicycle in a field of flowers",
  "num_results": 1,
  "sync": True
}

response = bria_predictor.predict(payload)
artifacts = response['artifacts'][0]

encoded_image = artifacts['image_base64']

display_base64_image(encoded_image)

Example prompts

You can interact with the Bria 2.3 text-to-image model like any standard image generation model, where the model processes an input prompt and outputs a response. In this section, we provide some example prompts and sample outputs.

We use the following prompts:

  • Photography, dynamic, in the city, professional male skateboarder, sunglasses, teal and orange hue
  • Young woman with flowing curly hair stands on a subway platform, illuminated by the vibrant lights of a speeding train, purple and cyan colors
  • Close up of vibrant blue and green parrot perched on a wooden branch inside a cozy, well-lit room
  • Light speed motion with blue and purple neon colors and building in the background

The model generates the following images.

The following is an example payload for generating an image using the first of the preceding prompts:

payload = {
  "prompt": "Photography, dynamic, in the city, professional male skateboarder, sunglasses, teal and orange hue",
  "num_results": 1,
  "sync": True
}

response = bria_predictor.predict(payload)
artifacts = response['artifacts'][0]

encoded_image = artifacts['image_base64']

display_base64_image(encoded_image)

Clean up

After you’re done running the notebook, delete all resources that you created in the process so your billing is stopped. Use the following code:

bria_predictor.delete_model()
bria_predictor.delete_endpoint()

Conclusion

With the availability of Bria 2.3, 2.2 HD, and 2.3 Fast in SageMaker JumpStart and AWS Marketplace, enterprises can now use advanced generative AI capabilities to enhance their visual content creation processes. These models provide a balance of quality, speed, and compliance, making them an invaluable asset for any organization looking to stay ahead in the competitive landscape.

Bria’s commitment to responsible AI and the robust security framework of SageMaker provide enterprises with the full package for data privacy, regulatory compliance, and responsible AI models for commercial use. In addition, the integrated experience takes advantage of the capabilities of both platforms to simplify MLOps, data storage, and real-time processing.

For more information about using FMs in SageMaker JumpStart, refer to Train, deploy, and evaluate pretrained models with SageMaker JumpStart, JumpStart Foundation Models, and Getting started with Amazon SageMaker JumpStart.

Explore Bria models in SageMaker JumpStart today and revolutionize your visual content creation process!


About the Authors

Bar Fingerman is the Head of AI/ML Engineering at Bria. He leads the development and optimization of core infrastructure, enabling the company to scale cutting-edge generative AI technologies. With a focus on designing high-performance supercomputers for large-scale AI training, Bar leads the engineering group in deploying, managing, and securing scalable AI/ML cloud solutions. He works closely with leadership and cross-functional teams to align business goals while driving innovation and cost-efficiency.

Supriya Puragundla is a Senior Solutions Architect at AWS. She has over 15 years of IT experience in software development, design, and architecture. She helps key customer accounts on their data, generative AI, and AI/ML journeys. She is passionate about data-driven AI and the area of depth in ML and generative AI.

Rodrigo Merino is a Generative AI Solutions Architect Manager at AWS. With over a decade of experience deploying emerging technologies, ranging from generative AI to IoT, Rodrigo guides customers across various industries to accelerate their AI/ML and generative AI journeys. He specializes in helping organizations train and build models on AWS, as well as operationalize end-to-end ML solutions. Rodrigo’s expertise lies in bridging the gap between cutting-edge technology and practical business applications, enabling companies to harness the full potential of AI and drive innovation in their respective fields.

Eliad Maimon is a Senior Startup Solutions Architect at AWS, focusing on generative AI startups. He helps startups accelerate and scale their AI/ML journeys by guiding them through deep-learning model training and deployment on AWS. With a passion for AI and entrepreneurship, Eliad is committed to driving innovation and growth in the startup ecosystem.

Read More

Introducing SageMaker Core: A new object-oriented Python SDK for Amazon SageMaker

We’re excited to announce the release of SageMaker Core, a new Python SDK from Amazon SageMaker designed to offer an object-oriented approach for managing the machine learning (ML) lifecycle. This new SDK streamlines data processing, training, and inference and features resource chaining, intelligent defaults, and enhanced logging capabilities. With SageMaker Core, managing ML workloads on SageMaker becomes simpler and more efficient. The SageMaker Core SDK comes bundled as part of the SageMaker Python SDK version 2.231.0 and above.

In this post, we show how the SageMaker Core SDK simplifies the developer experience while providing APIs for seamlessly executing various steps in a general ML lifecycle. We also discuss the main benefits of using this SDK and share relevant resources to learn more about it.

Traditionally, developers have had two options when working with SageMaker: the AWS SDK for Python, also known as boto3, or the SageMaker Python SDK. Although both provide comprehensive APIs for ML lifecycle management, they often rely on loosely typed constructs such as hard-coded constants and JSON dictionaries, mimicking a REST interface. For instance, to create a training job, Boto3 offers a create_training_job API, but retrieving job details requires the describe_training_job API.

While using boto3, developers face the challenge of remembering and crafting lengthy JSON dictionaries, ensuring that all keys are accurately placed. Let’s take a closer look at the create_training_job method from boto3:

response = client.create_training_job(
    TrainingJobName='string',
    HyperParameters={
        'string': 'string'
    },
    AlgorithmSpecification={
            .
            .
            .
    },
    RoleArn='string',
    InputDataConfig=[
        {
            .
            .
            .
        },
    ],
    OutputDataConfig={
            .
            .
            .
    },
    ResourceConfig={
            .
            .
            .    
    },
    VpcConfig={
            .
            .
            .
    },
    .
    .
    .
    .
# All arguments/fields are not shown for brevity purposes.

)

If we observe carefully, arguments such as AlgorithmSpecification, InputDataConfig, OutputDataConfig, ResourceConfig, and VpcConfig require verbose JSON dictionaries. Because these long dictionary fields contain many string values, it's easy to introduce a typo or miss a key. No type checking is possible; to the interpreter, everything is just a string.

Similarly, the SageMaker Python SDK requires us to create an estimator object and invoke the fit() method on it. Although these constructs work well, they aren't intuitive to the developer experience. It's hard for developers to map the meaning of an estimator to something that can be used to train a model.

Introducing SageMaker Core SDK

The SageMaker Core SDK solves this problem by replacing such long dictionaries with object-oriented interfaces, so developers can work with object-oriented abstractions while SageMaker Core takes care of converting those objects to dictionaries and executing the actions on the developer's behalf.

The following are the key features of SageMaker Core:

  • Object-oriented interface – It provides object-oriented classes for tasks such as processing, training, and deployment. Such an interface enforces strong type checking, makes the code more manageable, and promotes reusability. Developers can benefit from all the features of object-oriented programming.
  • Resource chaining – Developers can seamlessly pass SageMaker resources as objects by supplying them as arguments to different resources. For example, we can create a model object and pass that model object as an argument while setting up the endpoint. In contrast, while using Boto3, we need to supply ModelName as a string argument.
  • Abstraction of low-level details – It automatically handles resource state transitions and polling logic, freeing developers from managing these intricacies and allowing them to focus on higher-value tasks.
  • Support for intelligent defaults – It supports SageMaker intelligent defaults, allowing developers to set default values for parameters such as AWS Identity and Access Management (IAM) roles and virtual private cloud (VPC) configurations. This streamlines the setup process, and the SageMaker Core API picks the default settings automatically from the environment.
  • Auto code completion – It enhances the developer experience by offering real-time suggestions and completions in popular integrated development environments (IDEs), reducing chances of syntax errors and speeding up the coding process.
  • Full parity with SageMaker APIs, including generative AI – It provides access to the SageMaker capabilities, including generative AI, through the core SDK, so developers can seamlessly use SageMaker Core without worrying about feature parity with Boto3.
  • Comprehensive documentation and type hints – It provides robust and comprehensive documentation and type hints so developers can understand the functionalities of the APIs and objects, write code faster, and reduce errors.

For this walkthrough, we use a straightforward generative AI lifecycle involving data preparation, fine-tuning, and a deployment of Meta’s Llama-3-8B LLM. We use the SageMaker Core SDK to execute all the steps.

Prerequisites

To get started with SageMaker Core, make sure Python 3.8 or greater is installed in the environment. There are two ways to get started with SageMaker Core:

  1. If not using SageMaker Python SDK, install the sagemaker-core SDK using the following code example.
    %pip install sagemaker-core

  2. If you're already using the SageMaker Python SDK, upgrade it to version 2.231.0 or later; any such version includes SageMaker Core. The following code example shows the command for upgrading the SageMaker Python SDK.
    %pip install --upgrade "sagemaker>=2.231.0"

Solution walkthrough

To manage your ML workloads on SageMaker using SageMaker Core, use the steps in the following sections.

Data preparation

In this phase, prepare the training and test data for the LLM. Here, you use the publicly available Stanford Question Answering Dataset (SQuAD). The following code creates a ProcessingJob object using the static create method, specifying the script path, instance type, and instance count. Intelligent defaults fetch the SageMaker execution role, which further simplifies the developer experience. You don't need to provide the input data location and output data location, because they are also supplied through intelligent defaults. For information on how to set up intelligent defaults, check out Configuring and using defaults with the SageMaker Python SDK.

from sagemaker_core.resources import ProcessingJob

# Initialize a ProcessingJob resource
processing_job = ProcessingJob.create(
    processing_job_name="llm-data-prep",
    script_path="s3://my-bucket/data-prep-script.py",
    role_arn=<<Execution Role ARN>>, # Intelligent default for execution role
    instance_type="ml.m5.xlarge",
    instance_count=1
)

# Wait for the ProcessingJob to complete
processing_job.wait()

Training

In this step, you use the pre-trained Llama-3-8B model and fine-tune it on the data prepared in the previous step. The following code snippet shows the training API. You create a TrainingJob object using the create method, specifying the training script, source directory, instance type, instance count, output path, and hyperparameters.

from sagemaker_core.resources import TrainingJob
from sagemaker_core.shapes import HyperParameters

# Initialize a TrainingJob resource
training_job = TrainingJob.create(
    training_job_name="llm-fine-tune",
    estimator_entry_point="train.py",
    source_dir="s3://my-bucket/training-code",
    instance_type="ml.g5.12xlarge",
    instance_count=1,
    output_path="s3://my-bucket/training-output",
    hyperparameters=HyperParameters(
        learning_rate=0.00001,
        batch_size=8,
        epochs=3
    ),
    role_arn=<<Execution Role ARN>>, # Intelligent default for execution role
    input_data=processing_job.output # Resource chaining
)

# Wait for the TrainingJob to complete
training_job.wait()

For hyperparameters, you create an object, instead of supplying a dictionary. Use resource chaining by passing the output of the ProcessingJob resource as the input data for the TrainingJob.

You also use the intelligent defaults to get the SageMaker execution role. Wait for the training job to finish, and it will produce a model artifact, wrapped in a tar.gz, and store it in the output_path provided in the preceding training API.

Model creation and deployment

Deploying a model on a SageMaker endpoint consists of three steps:

  1. Create a SageMaker model object
  2. Create the endpoint configuration
  3. Create the endpoint

SageMaker Core provides an object-oriented interface for all three steps.

  1. Create a SageMaker model object

The following code snippet shows the model creation experience in SageMaker Core.

from sagemaker_core.shapes import ContainerDefinition
from sagemaker_core.resources import Model

# Create a Model resource
model = Model.create(
    model_name="llm-model",
    primary_container=ContainerDefinition(
        image="763104351884.dkr.ecr.us-west-2.amazonaws.com/djl-inference:0.29.0-tensorrtllm0.11.0-cu124",
        environment={"HF_MODEL_ID": "meta-llama/Meta-Llama-3-8B"}
    ),
    execution_role_arn=<<Execution Role ARN>>, # Intelligent default for execution role
    input_data=training_job.output # Resource chaining
)

Similar to the processing and training steps, you have a create method on the Model class. The container definition is now an object, specifying the large model inference (LMI) container image and the Hugging Face model ID. You can also observe resource chaining in action, where you pass the output of the TrainingJob as input data to the model.

  2. Create the endpoint configuration

Create the endpoint configuration. The following code snippet shows the experience in SageMaker Core.

from sagemaker_core.shapes import ProductionVariant
from sagemaker_core.resources import Model, EndpointConfig, Endpoint

# Create an EndpointConfig resource
endpoint_config = EndpointConfig.create(
    endpoint_config_name="llm-endpoint-config",
    production_variants=[
        ProductionVariant(
            variant_name="llm-variant",
            initial_instance_count=1,
            instance_type="ml.g5.12xlarge",
            model_name=model
        )
    ]
)

ProductionVariant is an object in itself now.

  3. Create the endpoint

Create the endpoint using the following code snippet.

endpoint = Endpoint.create(
    endpoint_name="llm-endpoint",  # any unique endpoint name
    endpoint_config_name=endpoint_config,  # Pass the `EndpointConfig` object created above
)

This also uses resource chaining. Instead of supplying just the endpoint_config_name (in Boto3), you pass the whole endpoint_config object.

As we have shown in these steps, SageMaker Core simplifies the development experience by providing an object-oriented interface for interacting with SageMaker resources. The use of intelligent defaults and resource chaining reduces the amount of boilerplate code and manual parameter specification, resulting in more readable and maintainable code.

Cleanup

Any endpoint created using the code in this post will incur charges. Shut down any unused endpoints by using the delete() method.
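
A minimal sketch, assuming the endpoint and endpoint_config objects from the preceding steps are still in scope and that the endpoint configuration resource also exposes a delete() method:

# Delete the endpoint to stop incurring charges
endpoint.delete()

# Optionally remove the endpoint configuration as well (assumed to support delete())
endpoint_config.delete()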

A note on the existing SageMaker Python SDK

The SageMaker Python SDK will use SageMaker Core as its foundation and will benefit from the object-oriented interfaces created as part of SageMaker Core. Customers can choose to use the object-oriented approach with the SageMaker Python SDK going forward.

Benefits

The SageMaker Core SDK offers several benefits:

  • Simplified development – By abstracting low-level details and providing intelligent defaults, developers can focus on building and deploying ML models without getting slowed down by repetitive tasks. It also relieves the developers of the cognitive overload of having to remember long and complex multilevel dictionaries. They can instead work on the object-oriented paradigm that developers are most comfortable with.
  • Increased productivity – Features like automatic code completion and type hints help developers write code faster and with fewer errors.
  • Enhanced readability – Dedicated resource classes and resource chaining result in more readable and maintainable code.
  • Lightweight integration with AWS Lambda – Because this SDK is lightweight (about 8 MB when unzipped), it is straightforward to build an AWS Lambda layer for SageMaker Core and use it for executing various steps in the ML lifecycle through Lambda functions.

Conclusion

SageMaker Core is a powerful addition to Amazon SageMaker, providing a streamlined and efficient development experience for ML practitioners. With its object-oriented interface, resource chaining, and intelligent defaults, SageMaker Core empowers developers to focus on building and deploying ML models without getting slowed down by complex orchestration of JSON structures. Check out the following resources to get started today on SageMaker Core:


About the authors

Vikesh Pandey is a Principal GenAI/ML Specialist Solutions Architect at AWS, helping customers from financial industries design, build, and scale their GenAI/ML workloads on AWS. He has more than a decade and a half of experience across the entire ML and software engineering stack. Outside of work, Vikesh enjoys trying out different cuisines and playing outdoor sports.

Shweta Singh is a Senior Product Manager in the Amazon SageMaker Machine Learning (ML) platform team at AWS, leading SageMaker Python SDK. She has worked in several product roles in Amazon for over 5 years. She has a Bachelor of Science degree in Computer Engineering and Masters of Science in Financial Engineering, both from New York University.

Read More

Create a data labeling project with Amazon SageMaker Ground Truth Plus

Amazon SageMaker Ground Truth is a powerful data labeling service offered by AWS that provides a comprehensive and scalable platform for labeling various types of data, including text, images, videos, and 3D point clouds, using a diverse workforce of human annotators. In addition to traditional custom-tailored deep learning models, SageMaker Ground Truth also supports generative AI use cases, enabling the generation of high-quality training data for artificial intelligence and machine learning (AI/ML) models. SageMaker Ground Truth includes a self-serve option and an AWS managed option known as SageMaker Ground Truth Plus. In this post, we focus on getting started with SageMaker Ground Truth Plus by creating a project and sharing your data that requires labeling.

Overview of solution

First, you fill out a consultation form on the Get Started with Amazon SageMaker Ground Truth page or, if you already have an AWS account, you submit a request project form on the SageMaker Ground Truth Plus console. An AWS expert contacts you to review your specific data labeling requirements. You can share any specific requirements such as subject matter expertise, language expertise, or geographic location of labelers. If you submitted a consultation form, you submit a request project form on the SageMaker Ground Truth Plus console and it will be approved without any further discussion. If you submitted a project request, then your project status changes from Review in progress to Request approved.

Next, you create your project team, which includes people that are collaborating with you on the project. Each team member receives an invitation to join your project. Now, you upload the data that requires labeling to an Amazon Simple Storage Service (Amazon S3) bucket. To add that data to your project, go to your project portal and create a batch and include the S3 bucket URL. Every project consists of one or more batches. Each batch is made up of data objects to be labeled.
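
As a minimal sketch, you could upload your unlabeled files with boto3 as follows; the bucket name, object key, and local file path are placeholders.

import boto3

s3 = boto3.client("s3")

# Upload an unlabeled data object to the input bucket (placeholder names)
s3.upload_file(
    Filename="local-data/image-001.jpg",
    Bucket="your-ground-truth-plus-input-bucket",
    Key="batch-1/image-001.jpg",
)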

Now, the SageMaker Ground Truth Plus team takes over and sources annotators based on your specific data labeling needs, trains them on your labeling requirements, and creates a UI for them to label your data. After the labeled data passes internal quality checks, it is delivered back to an S3 bucket for you to use for training your ML models.

The following diagram illustrates the solution architecture.

Using the steps outlined in this post, you'll be able to quickly get set up for your data labeling project. This includes requesting a new project, setting up a project team, and creating a batch that contains the data objects you need labeled.

Prerequisites

For this walkthrough, you should have the following prerequisites:

  • An AWS account.
  • The URI of the S3 bucket where your data is stored. The bucket should be in the US East (N. Virginia) AWS Region.
  • An AWS Identity and Access Management (IAM) user. If you’re the owner of your AWS account, then you have administrator access and can skip this step. If your AWS account is part of an AWS organization, then you can ask your AWS administrator to grant your IAM user the required permissions. The following identity-based policy specifies the minimum permissions required for your IAM user to perform all the steps in this post (provide the name of the S3 bucket where your data is stored):
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "VisualEditor0",
                "Effect": "Allow",
                "Action": [
                    "groundtruthlabeling:ListProjects",
                    "groundtruthlabeling:GetIntakeFormStatus",
                    "groundtruthlabeling:CreateProject",
                    "sagemaker:ListWorkforces",
                    "sagemaker:PutLabelingPortalPolicy",
                    "sagemaker:GetLabelingPortalPolicy",
                    "iam:GetRole",
                    "iam:PassRole",
                    "iam:ListRoles",
                    "iam:CreateRole",
                    "iam:CreatePolicy",
                    "iam:AttachRolePolicy",
                    "s3:ListAllMyBuckets",
                    "cognito-idp:CreateUserPool",
                    "cognito-idp:ListUserPools",
                    "cognito-idp:ListGroups",
                    "cognito-idp:AdminAddUserToGroup",
                    "cognito-idp:AdminCreateUser",
                    "cognito-idp:CreateGroup",
                    "cognito-idp:CreateUserPoolClient",
                    "cognito-idp:CreateUserPoolDomain",
                    "cognito-idp:DescribeUserPool",
                    "cognito-idp:DescribeUserPoolClient",
                    "cognito-idp:ListUsersInGroup",
                    "cognito-idp:UpdateUserPool",
                    "cognito-idp:UpdateUserPoolClient"
                ],
                "Resource": "*"
            },
            {
                "Effect": "Allow",
                "Action": [
                    "s3:PutObject",
                    "s3:GetObject",
                    "s3:ListBucket"
                ],
                "Resource": [
                    "arn:aws:s3:::<your-S3-bucket-name>",
                    "arn:aws:s3:::<your-S3-bucket-name>/*"
                ]
            }
        ]
    }

Request a project 

Complete the following steps to request a project:

  1. On the SageMaker console, under Ground Truth in the navigation pane, choose Plus.
  2. Choose Request project.
  3. For Business email address, enter a valid email.
  4. For Project name, enter a descriptive name with no spaces or special characters.
  5. For Task type, choose the option that best describes the type of data you need labeled.
  6. For Contains PII, turn this on only if your data contains personally identifiable information (PII).
  7. For IAM role, the role you choose grants SageMaker Ground Truth Plus permissions to access your data in Amazon S3 and perform a labeling job. You can use any of the following options to specify the IAM role:
    1. Choose Create an IAM role (recommended), which provides access to the S3 buckets you specify and automatically attaches the required permissions and trust policy to the role.
    2. Enter a custom IAM role ARN.
    3. Choose an existing role.

If you don’t have permissions to create an IAM role, you may ask your AWS administrator to create the role for you. When using an existing role or a custom IAM role ARN, the IAM role should have the following permissions policy and trust policy.

The following code is the permissions policy:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject",
                "s3:GetBucketLocation",
                "s3:ListBucket",
                "s3:PutObject"
            ],
            "Resource": [
                "arn:aws:s3:::<input-bucket-name>",
                "arn:aws:s3:::<input-bucket-name>/*",
                "arn:aws:s3:::<output-bucket-name>",
                "arn:aws:s3:::<output-bucket-name>/*"         
            ]
        }
    ]
}

The following code is the trust policy:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "sagemaker-ground-truth-plus.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
  1. Choose Request project.

Under Ground Truth in the navigation pane, you can choose Plus to see your project listed in the Projects section with the status Review in progress.

An AWS representative will contact you within 72 hours to review your project requirements. When this review is complete, your project status will change from Review in progress to Request approved.

Create project team

SageMaker Ground Truth uses Amazon Cognito to manage the members of your workforce and work teams. Amazon Cognito is a service that you use to create identities for your workers. Complete the following steps to create a project team:

  1. On the SageMaker console, under Ground Truth in the navigation pane, choose Plus.
  2. Choose Create project team.

The remaining steps depend on whether you create a new user group or import an existing group.

Option 1: Create a new Amazon Cognito user group

If you don’t want to import members from an existing Amazon Cognito user group in your account, or you don’t have any Amazon Cognito user groups in your account, you can use this option.

  1. When creating your project team, select Create a new Amazon Cognito user group.
  2. For Amazon Cognito user group name, enter a descriptive name with no spaces.
  3. For Email addresses, enter up to 50 addresses. Use a comma between addresses.
  4. Choose Preview invitation to see the email that is sent to the email addresses you provided.
  5. Choose Create project team.

Under Ground Truth in the navigation pane, choose Plus to see your project team listed in the Project team section. The email addresses you added are included in the Members section.

Option 2: Import existing Amazon Cognito user groups

If you have an existing Amazon Cognito user group in your account from which you want to import members, you can use this option.

  1. When creating your project team, select Import existing Amazon Cognito user groups.
  2. For Select existing Amazon Cognito user groups, choose the user group from which you want to import members.
  3. Choose Create project team.

Under Ground Truth in the navigation pane, choose Plus to see your project team listed in the Project team section. The email addresses you added are included in the Members section.

Access the project portal and create a batch

You can use the project portal to create batches containing your unlabeled input data and track the status of your previously created batches in a project. To access the project portal, make sure that you have created at least one project and at least one project team with one verified member.

  1. On the SageMaker console, under Ground Truth in the navigation pane, choose Plus.
  2. Choose Open project portal.
  3. Log in to the project portal using your project team’s user credentials created in the previous step.

A list of all your projects is displayed on the project portal.

  1. Choose a project to open its details page.
  2. In the Batches section, choose Create batch.
  3. Enter the batch name, batch description, S3 location for input datasets, and S3 location for output datasets.
  4. Choose Submit.

To create a batch successfully, make sure you meet the following criteria:

  • Your S3 bucket is in the US East (N. Virginia) Region.
  • The maximum size for each file is no more than 2 GB.
  • The maximum number of files in a batch is 10,000.
  • The total size of a batch is less than 100 GB.

Your submitted batch is listed in the Batches section with the status Request submitted. When the data transfer is complete, the status changes to Data received.

Next, the SageMaker Ground Truth Plus team sets up data labeling workflows, which changes the batch status to In progress. Annotators label the data, and you complete your data quality check by accepting or rejecting the labeled data. Rejected objects go back to annotators to re-label. Accepted objects are delivered to an S3 bucket for you to use for training your ML models.

Conclusion

SageMaker Ground Truth Plus provides a seamless solution for building high-quality training datasets for your ML models. By using AWS managed expert labelers and automating the data labeling workflow, SageMaker Ground Truth Plus eliminates the overhead of building and managing your own labeling workforce. With its user-friendly interface and integrated tools, you can submit your data, specify labeling requirements, and monitor the progress of your projects with ease. As you receive accurately labeled data, you can confidently train your models, maintaining optimal performance and accuracy. Streamline your ML projects and focus on building innovative solutions with the power of SageMaker Ground Truth Plus.

To learn more, see Use Amazon SageMaker Ground Truth Plus to Label Data.


About the Authors

Joydeep Saha is a System Development Engineer at AWS with expertise in designing and implementing solutions to deliver business outcomes for customers. His current focus revolves around building cloud-native end-to-end data labeling solutions, empowering customers to unlock the full potential of their data and drive success through accurate and reliable machine learning models.

Ami Dani is a Senior Technical Program Manager at AWS focusing on AI/ML services. During her career, she has focused on delivering transformative software development projects for the federal government and large companies in industries as diverse as advertising, entertainment, and finance. Ami has experience driving business growth, implementing innovative training programs, and successfully managing complex, high-impact projects.

Read More

Create a multimodal chatbot tailored to your unique dataset with Amazon Bedrock FMs

With recent advances in large language models (LLMs), a wide array of businesses are building new chatbot applications, either to help their external customers or to support internal teams. For many of these use cases, businesses are building Retrieval Augmented Generation (RAG) style chat-based assistants, where a powerful LLM can reference company-specific documents to answer questions relevant to a particular business or use case.

In the last few months, there has been substantial growth in the availability and capabilities of multimodal foundation models (FMs). These models are designed to understand and generate text about images, bridging the gap between visual information and natural language. Although such multimodal models are broadly useful for answering questions and interpreting imagery, they’re limited to only answering questions based on information from their own training document dataset.

In this post, we show how to create a multimodal chat assistant on Amazon Web Services (AWS) using Amazon Bedrock models, where users can submit images and questions, and text responses will be sourced from a closed set of proprietary documents. Such a multimodal assistant can be useful across industries. For example, retailers can use this system to more effectively sell their products (for example, HDMI_adaptor.jpeg, “How can I connect this adapter to my smart TV?”). Equipment manufacturers can build applications that allow them to work more effectively (for example, broken_machinery.png, “What type of piping do I need to fix this?”). This approach is broadly effective in scenarios where image inputs are important to query a proprietary text dataset. In this post, we demonstrate this concept on a synthetic dataset from a car marketplace, where a user can upload a picture of a car, ask a question, and receive responses based on the car marketplace dataset.

Solution overview

For our custom multimodal chat assistant, we start by creating a vector database of relevant text documents that will be used to answer user queries. Amazon OpenSearch Service is a powerful, highly flexible search engine that allows users to retrieve data based on a variety of lexical and semantic retrieval approaches. This post focuses on text-only documents, but for embedding more complex document types, such as those with images, see Talk to your slide deck using multimodal foundation models hosted on Amazon Bedrock and Amazon SageMaker.

After the documents are ingested in OpenSearch Service (this is a one-time setup step), we deploy the full end-to-end multimodal chat assistant using an AWS CloudFormation template. The following system architecture represents the logic flow when a user uploads an image, asks a question, and receives a text response grounded by the text dataset stored in OpenSearch.

System architecture

The logic flow for generating an answer to a text-image response pair routes as follows:

  • Steps 1 and 2 – To start, a user query and corresponding image are routed through an Amazon API Gateway connection to an AWS Lambda function, which serves as the processing and orchestrating compute for the overall process.
  • Step 3 – The Lambda function stores the query image in Amazon S3 with a specified ID. This may be useful for later chat assistant analytics.
  • Steps 4–8 – The Lambda function orchestrates a series of Amazon Bedrock calls to a multimodal model, an LLM, and a text embedding model (a minimal sketch of the multimodal call appears after this list):
    • Query the Claude V3 Sonnet model with the query and image to produce a text description.
    • Embed a concatenation of the original question and the text description with the Amazon Titan Text Embeddings model.
    • Retrieve relevant text data from OpenSearch Service.
    • Generate a grounded response to the original question based on the retrieved documents.
  • Step 9 – The Lambda function stores the user query and answer in Amazon DynamoDB, linked to the Amazon S3 image ID.
  • Steps 10 and 11 – The grounded text response is sent back to the client.
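
The following is a minimal sketch of the multimodal Bedrock call from steps 4–8, using the Anthropic messages format on the Bedrock Runtime API; the image file name, prompt, and token limit are illustrative assumptions rather than the solution's exact values.

import base64
import json
import boto3

bedrock_runtime = boto3.client("bedrock-runtime")

# Encode the uploaded image (placeholder file name)
with open("car.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

body = {
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 512,
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "image", "source": {"type": "base64", "media_type": "image/jpeg", "data": image_b64}},
                {"type": "text", "text": "Describe the vehicle in this image so it can be matched against car listings."},
            ],
        }
    ],
}

# Query Claude 3 Sonnet with the image and question to produce a text description
response = bedrock_runtime.invoke_model(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",
    body=json.dumps(body),
)
description = json.loads(response["body"].read())["content"][0]["text"]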

There is also an initial setup of the OpenSearch Index, which is done using an Amazon SageMaker notebook.

Prerequisites

To use the multimodal chat assistant solution, you need to have a handful of Amazon Bedrock FMs available.

  1. On the Amazon Bedrock console, choose Model access in the navigation pane.
  2. Choose Manage model access.
  3. Activate all the Anthropic models, including Claude 3 Sonnet, as well as the Amazon Titan Text Embeddings V2 model, as shown in the following screenshot.

For this post, we recommend activating these models in the us-east-1 or us-west-2 AWS Region. These should become immediately active and available.

Bedrock model access

Simple deployment with AWS CloudFormation

To deploy the solution, we provide a simple shell script called deploy.sh, which can be used to deploy the end-to-end solution in different Regions. This script can be acquired directly from Amazon S3 with the command aws s3 cp s3://aws-blogs-artifacts-public/artifacts/ML-16363/deploy.sh . (the trailing dot copies the script into your current directory).

Using the AWS Command Line Interface (AWS CLI), you can deploy this stack in various Regions using one of the following commands:

bash deploy.sh us-east-1

or

bash deploy.sh us-west-2

The stack may take up to 10 minutes to deploy. When the stack is complete, note the assigned physical ID of the Amazon OpenSearch Serverless collection, which you will use in further steps. It should look something like zr1b364emavn65x5lki8. Also, note the physical ID of the API Gateway connection, which should look something like zxpdjtklw2, as shown in the following screenshot.

cloudformation output

Populate the OpenSearch Service index

Although the OpenSearch Serverless collection has been instantiated, you still need to create and populate a vector index with the document dataset of car listings. To do this, you use an Amazon SageMaker notebook.

  1. On the SageMaker console, navigate to the newly created SageMaker notebook named MultimodalChatbotNotebook (as shown in the following image), which will come prepopulated with car-listings.zip and Titan-OS-Index.ipynb.
  1. After you open the Titan-OS-Index.ipynb notebook, change the host_id variable to the collection physical ID you noted earlier.Sagemaker notebook
  1. Run the notebook from top to bottom to create and populate a vector index with a dataset of 10 car listings.

After you run the code to populate the index, it may still take a few minutes before the index shows up as populated on the OpenSearch Service console, as shown in the following screenshot. 

Test the Lambda function

Next, test the Lambda function created by the CloudFormation stack by submitting a test event JSON. In the following JSON, replace the bucket value with the name of the bucket created to deploy the solution, for example, multimodal-chatbot-deployment-ACCOUNT_NO-REGION.

{
"bucket": "multimodal-chatbot-deployment-ACCOUNT_NO-REGION",
"key": "jeep.jpg",
"question_text": "How much would a car like this cost?"
}

You can set up this test by navigating to the Test panel for the created Lambda function and defining a new test event with the preceding JSON. Then, choose Test on the top right of the event definition.

If you are querying the Lambda function from another bucket than those allowlisted in the CloudFormation template, make sure to add the relevant permissions to the Lambda execution role.
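
Alternatively, you could invoke the function programmatically with boto3, as in the following sketch; the function name is a placeholder, and the payload reuses the same fields as the test event above.

import json
import boto3

lambda_client = boto3.client("lambda")

# Placeholder function name; use the Lambda function created by the CloudFormation stack
response = lambda_client.invoke(
    FunctionName="multimodal-chatbot-function",
    Payload=json.dumps({
        "bucket": "multimodal-chatbot-deployment-ACCOUNT_NO-REGION",
        "key": "jeep.jpg",
        "question_text": "How much would a car like this cost?",
    }),
)
print(json.loads(response["Payload"].read()))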

The Lambda function may take between 10–20 seconds to run (mostly dependent on the size of your image). If the function performs properly, you should receive an output JSON similar to the following code block. The following screenshot shows the successful output on the console.

{
  "statusCode": 200,
  "body": "\"Based on the 2013 Jeep Grand Cherokee SRT8 listing, a heavily modified Jeep like the one described could cost around $17,000 even with significant body damage and high mileage. The powerful engine, custom touches, and off-road capabilities likely justify that asking price.\""
}

Note that if you just enabled model access, it may take a few minutes for access to propagate to the Lambda function.

Test the API

For integration into an application, we’ve connected the Lambda function to an API Gateway connection that can be pinged from various devices. We’ve included a notebook within the SageMaker notebook that allows you to query the system with a question and an image and return a response. To use the notebook, replace the API_GW variable with the physical ID of the API Gateway connection that was created using the CloudFormation stack and the REGION variable with the Region your infrastructure was deployed in. Then, making sure your image location and query are set correctly, run the notebook cell. Within 10–20 seconds, you should receive the output of your multimodal query sourced from your own text dataset. This is shown in the following screenshot.

Note that the API Gateway connection is only accessible from this specific notebook, and more comprehensive security and permission elements are required to productionize the system.

Qualitative results

A grounded multimodal chat assistant, where users can submit images with queries, can be useful in many settings. We demonstrate this application with a dataset of cars for sale. For example, a user may have a question about a car they’re looking at, so they snap a picture and submit a question, such as “How much might a car like this cost?” Rather than answering the question with generic information that the LLM was trained on (which may be out of date), responses will be grounded with your local and specific car sales dataset. In this use case, we took images from Unsplash and used a synthetically created dataset of 10 car listings to answer questions. The model and year of the 10 car listings are shown in the following screenshot.

For the examples that follow, you can observe in the answers that not only has the vision language model (VLM) system identified the correct cars in the car listings that are most similar to the input image, but it has also answered the questions with specific numbers, costs, and locations that are only available from our closed car dataset car-listings.zip. Each example pairs a question and its input image (not reproduced here) with the assistant's answer:

  • Question: How much would a car like this cost?
    Answer: The 2013 Jeep Grand Cherokee SRT8 listing is most relevant, with an asking price of $17,000 despite significant body damage from an accident. However, it retains the powerful 470 hp V8 engine and has been well-maintained with service records.
  • Question: What is the engine size of this car?
    Answer: The car listing for the 2013 Volkswagen Beetle mentions it has a fuel-efficient 1.8L turbocharged engine. No other engine details are provided in the listings.
  • Question: Where in the world could I purchase a used car like this?
    Answer: Based on the car listings provided, the 2021 Tesla Model 3 for sale seems most similar to the car you are interested in. It's described as a low mileage, well-maintained Model 3 in pristine condition located in the Seattle area for $48,000.

Latency and quantitative results

Because speed and latency are important for chat assistants and because this solution consists of multiple API calls to FMs and data stores, it’s interesting to measure the speed of each step in the process. We did an internal analysis of the relative speeds of the various API calls, and the following graph visualizes the results.

From slowest to fastest, we have the call to the Claude V3 Vision FM, which takes on average 8.2 seconds. The final output generation step (LLM Gen on the graph in the screenshot) takes on average 4.9 seconds. The Amazon Titan Text Embeddings model and OpenSearch Service retrieval process are much faster, taking 0.28 and 0.27 seconds on average, respectively.

In these experiments, the average time for the full multistage multimodal chatbot is 15.8 seconds. However, the time can be as low as 11.5 seconds overall if you submit a 2.2 MB image, and it could be much lower if you use even lower-resolution images.

Clean up

To clean up the resources and avoid charges, follow these steps:

  1. Make sure all the important data from Amazon DynamoDB and Amazon S3 is saved.
  2. Manually empty and delete the two provisioned S3 buckets.
  3. Delete the deployed resource stack from the AWS CloudFormation console.

Conclusion

From online chat assistants to tools that help sales reps close a deal, AI assistants are a rapidly maturing technology for increasing efficiency across sectors. These assistants often aim to produce answers grounded in custom documentation and datasets that the LLM was not trained on, using RAG. A further step is the development of a multimodal chat assistant that can do the same: answering multimodal questions based on a closed text dataset.

In this post, we demonstrated how to create a multimodal chat assistant that takes images and text as input and produces text answers grounded in your own dataset. This solution will have applications ranging from marketplaces to customer service, where there is a need for domain-specific answers sourced from custom datasets based on multimodal input queries.

We encourage you to deploy the solution for yourself, try different image and text datasets, and explore how you can orchestrate various Amazon Bedrock FMs to produce streamlined, custom, multimodal systems.


About the Authors

Emmett Goodman is an Applied Scientist at the Amazon Generative AI Innovation Center. He specializes in computer vision and language modeling, with applications in healthcare, energy, and education. Emmett holds a PhD in Chemical Engineering from Stanford University, where he also completed a postdoctoral fellowship focused on computer vision and healthcare.

Negin Sokhandan is a Principal Applied Scientist at the AWS Generative AI Innovation Center, where she works on building generative AI solutions for AWS strategic customers. Her research background is in statistical inference, computer vision, and multimodal systems.

Yanxiang Yu is an Applied Scientist at the Amazon Generative AI Innovation Center. With over 9 years of experience building AI and machine learning solutions for industrial applications, he specializes in generative AI, computer vision, and time series modeling.

Read More

Design secure generative AI application workflows with Amazon Verified Permissions and Amazon Bedrock Agents

Design secure generative AI application workflows with Amazon Verified Permissions and Amazon Bedrock Agents

Amazon Bedrock Agents enable generative AI applications to perform multistep tasks across various company systems and data sources. They orchestrate and analyze the tasks and break them down into the correct logical sequences using the reasoning abilities of the foundation model (FM). Agents automatically call the necessary APIs to interact with the company systems and processes to fulfill the request. Throughout this process, agents determine whether they can proceed or if additional information is needed.

Customers can build innovative generative AI applications using Amazon Bedrock Agents’ capabilities to intelligently orchestrate their application workflows. When building such workflows, it can be challenging for customers to apply fine-grained access controls to make sure that the application’s workflow operates only on the authorized data based on the application user’s entitlements. Maintaining access controls based on user context, roles, actions, and resource conditions can be challenging in an application workflow, because it would require hardcoding several rules in your application or building your own authorization system to externalize those rules.

Instead of building your own authorization system for fine-grained access controls in your application workflows, you can integrate Amazon Verified Permissions into the agent’s workflow to apply contextually aware fine-grained access controls. Verified Permissions is a scalable permissions management and authorization service for custom applications built by you. Verified Permissions helps developers build secure applications faster by externalizing the authorization component and centralizing policy management and administration.

In this post, we demonstrate how to design fine-grained access controls using Verified Permissions for a generative AI application that uses Amazon Bedrock Agents to answer questions about insurance claims that exist in a claims review system using textual prompts as inputs and outputs. In our insurance claims system use case, there are two types of users: claims administrators and claims adjusters. Both are capable of listing open claims, but only one is capable of reading claim detail and making changes. We also show how to restrict permissions using custom attributes such as a user’s region for filtering insurance claims. In this post, the term region doesn’t refer to an AWS Region, but rather to a business-defined region.

Solution overview

In this solution design, we assume that the customer has claims records in an Amazon DynamoDB table and would like to build a chat-based application to answer frequently asked questions about their claims. This chat assistant will be used internally by claims administrators and claims adjusters to answer their clients’ questions.

The following is a list of actions that the claims team needs to perform to answer their clients’ questions:

  • Show me a list of my open claims
  • Show me claim detail for an input claim number
  • Update the status to closed for the input claim number

The customer has the following access control requirements for their claims system:

  • A claims administrator can list claims across various geographic areas, but they can’t read individual claim records.
  • A claims adjuster can list claims for their region and can read and update the records of claims assigned to them. However, a claims adjuster can’t access claims from other regions.
  • Each user is placed into a group in Amazon Cognito, where their application-level permissions are set and maintained.
  • The customer would like to use Verified Permissions to externalize entity- and record-level authorization decisions without hardcoding them in the application logic.

To improve the performance of the chat assistant, the customer uses FMs available on Amazon Bedrock. To retrieve the necessary information from the claims table and dynamically orchestrate the requests, the customer uses Amazon Bedrock Agents together with Verified Permissions to provide fine-grained authorization for the agents’ invocation.

The application architecture for building the example chat-based Generative AI Claims application with fine-grained access controls is shown in the following diagram.

Figure 1. Architectural diagram for user flow

The application architecture flow is as follows:

  1. User accesses the Generative AI Claims web application (App).
  2. The App authenticates the user with the Amazon Cognito service, which issues an ID token and an access token. The ID token contains the user’s identity and custom attributes.
  3. Using the App, the user sends a request asking to “list the open claims.” The request is sent along with the user’s ID token and access token. The App calls the Claims API Gateway API to run the claims proxy passing user requests and tokens.
  4. Claims API Gateway runs the Custom Authorizer to validate the access token.
  5. When access token validation is successful, the Claims API Gateway sends the user request to the Claims Proxy.
  6. The Claims Proxy invokes the Amazon Bedrock agent, passing the user request and ID token. The Amazon Bedrock agent is configured to use Anthropic’s Claude model and to invoke actions using the Claims Agent Helper AWS Lambda function.
  7. The Amazon Bedrock agent uses chain-of-thought prompting and builds the list of API actions to run with the help of the Claims Agent Helper.
  8. The Claims Agent Helper retrieves claim records from the Claims DB and constructs a claims list object. For this example, the claim records are hard-coded in the Lambda function and no DynamoDB table is included in the sample solution. However, the component is shown in the architecture to represent real-life use cases where the data is stored outside the Lambda function.
  9. The Claims Agent Helper retrieves the user’s metadata (that is, their name) from the ID token, builds the Verified Permissions data entities, and makes the Verified Permissions authorization request (a minimal sketch of this call follows the list). This request contains the principal (user and role), action (that is, ListClaim), and resource (Claim). Verified Permissions evaluates the request against the Verified Permissions policies and returns an Allow or Deny decision. Subsequently, the Claims Agent Helper filters the claims based on that decision. Verified Permissions has “default deny” functionality, meaning that in the absence of an explicit allow, the service defaults to an implicit deny. If there is an explicit Deny in the policies involved in the request, Verified Permissions denies the request.
  10. The Claims Amazon Bedrock Agent receives the authorized list of claims, augments the prompt and sends it to the Claude model for completion. The agent returns the completion back to the user.
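
To make step 9 concrete, the following is a minimal sketch of how a Lambda function might call Verified Permissions with boto3. The policy store ID, entity identifiers, and attribute values are illustrative placeholders rather than the exact names used by the sample solution.

import boto3

avp = boto3.client("verifiedpermissions")

response = avp.is_authorized(
    policyStoreId="<your-policy-store-id>",
    principal={"entityType": "avp::claim::app::User", "entityId": "alice"},
    action={"actionType": "avp::claim::app::Action", "actionId": "ListClaim"},
    resource={"entityType": "avp::claim::app::Claim", "entityId": "claim-123"},
    entities={
        "entityList": [
            {
                # The user entity, with its role as a parent and region as a custom attribute
                "identifier": {"entityType": "avp::claim::app::User", "entityId": "alice"},
                "attributes": {"custom": {"record": {"region": {"string": "us-west"}}}},
                "parents": [{"entityType": "avp::claim::app::Role", "entityId": "ClaimsAdjuster"}],
            },
            {
                # The claim entity, with its owner and region attributes
                "identifier": {"entityType": "avp::claim::app::Claim", "entityId": "claim-123"},
                "attributes": {
                    "owner": {"entityIdentifier": {"entityType": "avp::claim::app::User", "entityId": "alice"}},
                    "region": {"string": "us-west"},
                },
                "parents": [],
            },
        ]
    },
)

# Verified Permissions defaults to deny; only an explicit allow lets the claim through.
print(response["decision"])  # "ALLOW" or "DENY"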

Fine-grained access control flows

Based on the customer’s access control requirements, there are three fine-grained access control flows as depicted in the following system sequence diagrams.

Use case: Claims administrator can list claims across regions

The following diagram shows how the claims administrator can list claims across regions.

Figure 2: Claims administrator 'list claims' allow

The following diagram depicts how the claims administrator’s fine-grained access to the claim record is run. In this diagram, notice a deny decision from Verified Permissions. This is because the principal’s role isn’t ClaimsAdjuster.

Figure 3: Claims administrator 'list claims' deny

Use case: Claims adjuster can see claims they own

The following diagram depicts how the claims adjuster’s fine-grained access to retrieve claim details is run. In this diagram, notice the allow decision from Verified Permissions. This is because the principal’s role is ClaimsAdjuster and the resource owner (that is, claim owner) matches the user principal (that is, user=alice).

Figure 4: Claims adjuster 'list claims' allow

The following diagram depicts how the claims adjuster’s fine-grained access to list open claims is run. In this diagram, notice the allow decision from Verified Permissions. This is because the principal’s role is ClaimsAdjuster and the region on the resource matches the principal’s region. As a result of this region filter on the authorization policy, only open claims for the user’s region are returned. Verified Permissions acts on principal, action, and individual resource (that is, a claim record) for the authorization decision. Therefore, the Lambda function needs to iterate through the list of open claims and make an isAuthorized request for each claim record. If this results in a performance issue, you can use the BatchIsAuthorized API and send multiple authorization requests in one API call.
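
As a sketch of that batching approach, the per-claim checks can be combined into a single BatchIsAuthorized call with boto3. The identifiers below are illustrative placeholders, and the entities definition (left empty here) would be populated with the user and claim entities in the same way as in the single-request sketch shown earlier.

import boto3

avp = boto3.client("verifiedpermissions")

open_claims = ["claim-101", "claim-102", "claim-103"]  # illustrative claim IDs

response = avp.batch_is_authorized(
    policyStoreId="<your-policy-store-id>",
    requests=[
        {
            "principal": {"entityType": "avp::claim::app::User", "entityId": "alice"},
            "action": {"actionType": "avp::claim::app::Action", "actionId": "ListClaim"},
            "resource": {"entityType": "avp::claim::app::Claim", "entityId": claim_id},
        }
        for claim_id in open_claims
    ],
    entities={"entityList": []},  # populate with the user and claim entities as in the earlier sketch
)

# Keep only the claims that Verified Permissions explicitly allowed.
allowed_claims = [
    result["request"]["resource"]["entityId"]
    for result in response["results"]
    if result["decision"] == "ALLOW"
]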

Figure 5: Claims adjuster 'list claims' allow or deny

Entities design considerations

When designing fine-grained data access controls, it is best practice to start with the entity-relationship diagram (ERD) for the application. For our claims application, the user will operate on claim records to retrieve a list of claims records, get the details for an individual claim record, or update the status of a claim record. The following diagram is the ERD for this application modeled in Verified Permissions. With Verified Permissions, you can apply both role-based access control (RBAC) and attribute-based access control (ABAC).

Figure 6: Entity relationship diagram for the application

Here is a brief description of each entity and the attributes that will be used for RBAC and ABAC against claim records.

  • Application – The application is a chat-based generative AI application using Amazon Bedrock Agents to understand the questions and retrieve the relevant claims data to assist claims administrators and claims adjusters.
  • Claim – The claim represents an insurance claim record that is stored in the DynamoDB table. The claims system stores claim records and the chatbot application allows users to retrieve and update these records.
  • User – The application user, either a claims administrator or a claims adjuster.
  • Role – The role represents a user’s access within the application. Here is a list of available roles:
    • Claims administrators – Can list claims across various geographic regions, but they can’t read individual claim records
    • Claims adjusters – Can list claims for their region and read and update their claim records

The roles are managed through Amazon Cognito and Verified Permissions. Cognito maintains a record of which role a user is assigned to and includes this information in the token. Verified Permissions maintains a record of what that role is permitted to do. Fine-grained access controls exist to make sure that users have appropriate permissions for their roles, restricting access to sensitive claim data based on geographic regions and user groups.

Fine-grained authorization: Policy design

The Actions diagram view lists the types of Principals you have configured in your policy store, the Actions they are eligible to perform, and the Resources they are eligible to perform actions on. The lines between entities indicate your ability to create a policy that allows a principal to take an action on a resource. The following image shows the actions diagram from Verified Permissions for our insurance claims use case. The User principal will have access to the Get, List, and Update actions. The resources are the Application and the Claim entity within the application. This diagram generates the underlying schema that governs the policy definition.

Figure 7: Policy schema from Amazon Verified Permissions

Use case: Claims administrator can list all claim records across regions

A policy is a statement that either permits or forbids a principal to take one or more actions on a resource. Each policy is evaluated independently of other policies. The Verified Permissions policy for this use case is shown in the following code example. In this policy, the principal (that is, user Bob) is assigned the role of claims administrator.

permit (
    principal in avp::claim::app::Role::"ClaimsAdministrator",
    action in [
    avp::claim::app::Action::"ListClaim"
    ],
    resource
) ;

Use case: Claims administrator can’t access claim detail record

The Verified Permissions policy for this use case is shown in the following code example. The use of explicit “forbid” policies is a valid practice.

forbid (
    principal in avp::claim::app::Role::"ClaimsAdministrator",
    action in [
    avp::claim::app::Action::"GetClaim"
    ],
    resource
) ;

Use case: Claims adjuster can list claims they own in their region

The Verified Permissions policy for this use case is shown in the following code example. In this policy, the principal (that is, user Alice) is assigned the role of claims adjuster and their region is passed as a custom attribute in the ID token.

permit (
    principal in avp::claim::app::Role::"ClaimsAdjuster",
    action in [
    avp::claim::app::Action::"ListClaim"
    ],
    resource
) when {
    resource has owner &&
    principal == resource.owner &&
    principal has custom &&
    principal.custom has region &&
    principal.custom.region == resource.region
};

Use case: Claims adjuster can retrieve or update a claim they own

permit (
    principal in avp::claim::app::Role::"ClaimsAdjuster",
    action in [
    avp::claim::app::Action::"GetClaim",
    avp::claim::app::Action::"UpdateClaim"
    ],
    resource
) when {
    principal == resource.owner &&
    principal has custom &&
    principal.custom has region &&
    principal.custom.region == resource.region
};

Authentication design considerations

The configuration of Amazon Cognito for this use case followed the security practices included as part of the standard configuration workflow: a strong password policy, multi-factor authentication (MFA), and a client secret. When using Amazon Cognito with Verified Permissions, your application can pass user pool access or identity tokens to Verified Permissions to make the allow or deny decision. Verified Permissions evaluates the user’s request based on the policies it has stored in the policy store.

For custom attributes, we are using region to restrict which claims a claims adjuster can see, excluding claims made in regions outside the adjuster’s own region. We are also using role as a custom attribute to provide that information in the ID token that is passed to the Amazon Bedrock agent. When the user is registered in the Cognito user pool, these custom attributes will be recorded as part of the sign-up process.
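
For illustration, the following sketch shows how a backend Lambda function might read those custom attributes from the ID token using the PyJWT library. Signature verification is skipped here only for brevity; in production, verify the token against your Cognito user pool’s JSON Web Key Set (JWKS). The claim names follow Cognito’s custom:<attribute> convention.

import jwt  # PyJWT

def get_user_context(id_token: str) -> dict:
    """Extract the user's identity and custom attributes from a Cognito ID token."""
    # Skipping signature verification for brevity only; always verify tokens in production.
    claims = jwt.decode(id_token, options={"verify_signature": False})
    return {
        "username": claims.get("cognito:username"),
        "role": claims.get("custom:role"),      # for example, ClaimsAdjuster or ClaimsAdministrator
        "region": claims.get("custom:region"),  # business-defined region, not an AWS Region
    }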

Amazon Cognito integrates with Verified Permissions through the Identity sources section in the console. The following screenshot shows that we’ve connected our Cognito user pool to the Amazon Verified Permissions policy store.

Figure 8: Amazon Verified Permissions policy stores by ID

Fine-grained authorization: Passing ID token to the Amazon Bedrock agent

When the user is authenticated against the Cognito user pool, Cognito returns an ID token and an access token to the client application. The ID token is passed through API Gateway and a proxy Lambda function to the agent via SessionAttributes on the invoke_agent call.

# Invoke the agent API
response = bedrock_agent_runtime_client.invoke_agent(
    …    
    sessionState={
        'sessionAttributes': {
            'authorization_header': '<AUTHORIZATION_HEADER>'
        }
    },
)

The header is then retrieved from the Lambda event in the Action Group Lambda function, and Verified Permissions is used to verify the user’s access against the desired action.

# Retrieve the session attributes from the event and use them to validate the action
session_attributes = event.get("sessionAttributes")
auth, reason = verifyAccess(session_attributes, action_id)

Fine-grained authorization: Integration with Amazon Bedrock Agents

The ID token issued by Cognito contains the user’s identity and custom attributes. This ID token is passed to the Amazon Bedrock agent, and the Agent Helper Lambda function retrieves that token from the agent’s session attributes. The Agent Helper Lambda function then retrieves open claim records from DynamoDB, constructs the Verified Permissions schema entities, and makes the isAuthorized API call.

Because Verified Permissions resources operate at the individual record level (that is, a single claim record), you need to iterate over the claims list object and make the isAuthorized API call for the authorization decision and then create the filtered claims list. The filtered claims list is then passed back to the caller. As a result, the claims adjuster will only see claims for their region, while a claims administrator can see claims across all regions.

The Amazon Bedrock agent then uses this filtered claim list to complete the user’s request to list claims. The chat application can only access the claims records that the user is authorized to view, providing the fine-grained access control integrated with the Amazon Bedrock agent workflow.

Getting started

Check out our code to get started developing your secure generative AI application using Amazon Verified Permissions. We provide you with an end-to-end implementation of the architecture described in this post and a demo UI you can use to test the permissions of different users. Adapt this example to implement generative AI applications that fit your own use case.

Conclusion

In this post, we discussed the challenges in applying fine-grained access controls for agent workflows in a generative AI application. We shared an application architecture for building an example chat-based generative AI application that uses Amazon Bedrock Agents to orchestrate workflows and applies fine-grained access controls using Amazon Verified Permissions. We discussed how to design fine-grained access permissions by designing persona-based access control workflows. If you are looking for a scalable and secure way to apply fine-grained permissions to your generative AI agent-based workflows, give this solution a try and leave your feedback.


About the authors

Ram Vittal is a Principal ML Solutions Architect at AWS. He has over 3 decades of experience architecting and building distributed, hybrid, and cloud applications. He is passionate about building secure, scalable, reliable AI/ML and big data solutions to help enterprise customers with their cloud adoption and optimization journey to improve their business outcomes. In his spare time, he rides his motorcycle and walks with his three-year-old sheep-a-doodle!

Samantha Wylatowska is a Solutions Architect at AWS. With a background in DevSecOps, her passion lies in guiding organizations towards secure operational efficiency, leveraging the power of automation for a seamless cloud experience. In her free time, she’s usually learning something new through music, literature, or film.

Anil Nadiminti is a Senior Solutions Architect at AWS specializing in empowering organizations to harness cloud computing and AI for digital transformation and innovation. His expertise in architecting scalable solutions and implementing data-driven strategies enables companies to innovate and thrive in today’s rapidly evolving technological landscape.

Michael Daniels is an AI/ML Specialist at AWS. His expertise lies in building and leading AI/ML and generative AI solutions for complex and challenging business problems, which is enhanced by his PhD from the University of Texas and his MSc in computer science with a specialization in machine learning from the Georgia Institute of Technology. He excels in applying cutting-edge cloud technologies to innovate, inspire, and transform industry-leading organizations while also effectively communicating with stakeholders at any level or scale. In his spare time, you can catch Michael skiing or snowboarding.

Maira Ladeira Tanke is a Senior Generative AI Data Scientist at AWS. With a background in machine learning, she has over 10 years of experience architecting and building AI applications with customers across industries. As a technical lead, she helps customers accelerate their achievement of business value through generative AI solutions on Amazon Bedrock. In her free time, Maira enjoys traveling, playing with her cat, and spending time with her family someplace warm.

Read More

Boost productivity by using AI in cloud operational health management

Boost productivity by using AI in cloud operational health management

Modern organizations increasingly depend on robust cloud infrastructure to provide business continuity and operational efficiency. Operational health events – including operational issues, software lifecycle notifications, and more – serve as critical inputs to cloud operations management. Inefficiencies in handling these events can lead to unplanned downtime, unnecessary costs, and revenue loss for organizations.

However, managing cloud operational events presents significant challenges, particularly in complex organizational structures. With a vast array of services and resource footprints spanning hundreds of accounts, organizations can face an overwhelming volume of operational events occurring daily, making manual administration impractical. Although traditional programmatic approaches offer automation capabilities, they often come with significant development and maintenance overhead, in addition to increasingly complex mapping rules and inflexible triage logic.

This post shows you how to create an AI-powered, event-driven operations assistant that automatically responds to operational events. It uses Amazon Bedrock, AWS Health, AWS Step Functions, and other AWS services. The assistant can filter out irrelevant events (based on your organization’s policies), recommend actions, create and manage issue tickets in integrated IT service management (ITSM) tools to track actions, and query knowledge bases for insights related to operational events. By orchestrating a group of AI endpoints, the agentic AI design of this solution enables the automation of complex tasks, streamlining the remediation processes for cloud operational events. This approach helps organizations overcome the challenges of managing the volume of operational events in complex, cloud-driven environments with minimal human supervision, ultimately improving business continuity and operational efficiency.

Event-driven operations management

Operational events refer to occurrences within your organization’s cloud environment that might impact the performance, resilience, security, or cost of your workloads. Some examples of AWS-sourced operational events include:

  1. AWS Health events – Notifications related to AWS service availability, operational issues, or scheduled maintenance that might affect your AWS resources.
  2. AWS Security Hub findings – Alerts about potential security vulnerabilities or misconfigurations identified within your AWS environment.
  3. AWS Cost Anomaly Detection alerts – Notifications about unusual spending patterns or cost spikes.
  4. AWS Trusted Advisor findings – Opportunities for optimizing your AWS resources, improving security, and reducing costs.

However, operational events aren’t limited to AWS-sourced events. They can also originate from your own workloads or on-premises environments. In principle, any event that can integrate with your operations management and is of importance to your workload health qualifies as an operational event.

Operational event management is a comprehensive process that provides efficient handling of events from start to finish. It involves notification, triage, progress tracking, action, and archiving and reporting at a large scale. The following is a breakdown of the typical tasks included in each step:

  1. Notification of events:
    1. Format notifications in a standardized, user-friendly way.
    2. Dispatch notifications through instant messaging tools or emails.
  2. Triage of events:
    1. Filter out irrelevant or noise events based on predefined company policies.
    2. Analyze the events’ impact by examining their metadata and textual description.
    3. Convert events into actionable tasks and assign responsible owners based on roles and responsibilities.
    4. Log tickets or page the appropriate personnel in the chosen ITSM tools.
  3. Status tracking of events and actions:
    1. Group related events into threads for straightforward management.
    2. Update ticket statuses based on the progress of event threads and action owner updates.
  4. Insights and reporting:
    1. Query and consolidate knowledge across various event sources and tickets.
    2. Create business intelligence (BI) dashboards for visual representation and analysis of event data.

A streamlined process should include steps to ensure that events are promptly detected, prioritized, acted upon, and documented for future reference and compliance purposes, enabling efficient operational event management at scale. However, traditional programmatic automation has limitations when handling multiple tasks. For instance, programmatic rules for event attribute-based noise filtering lack flexibility when faced with organizational changes, expansion of the service footprint, or new data source formats, leading to growing complexity.

Automating impact analysis through keyword matching on free-text descriptions is impractical. Converting events to tickets requires manual effort to generate action hints and lacks correlation to the originating events. Extracting event storylines from long, complex threads of event updates is challenging.

Let’s explore an AI-based solution to see how it can help address these challenges and improve productivity.

Solution overview

The solution uses AWS Health and AWS Security Hub findings as sources of operational events to demonstrate the workflow. It can be extended to incorporate additional types of operational events—from AWS or non-AWS sources—by following an event-driven architecture (EDA) approach.

The solution is designed to be fully serverless on AWS and can be deployed as infrastructure as code (IaC) by using the AWS Cloud Development Kit (AWS CDK).

Slack is used as the primary UI, but you can implement the solution using other messaging tools such as Microsoft Teams.
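
For reference, notifications reach the channel through Slack’s standard Web API. The following is a minimal sketch of how a backend microservice might post a message using the bot token and channel ID that you configure later in the .env file; the message text is illustrative.

import os
import requests

SLACK_ACCESS_TOKEN = os.environ["SLACK_ACCESS_TOKEN"]  # Bot User OAuth Token
SLACK_CHANNEL_ID = os.environ["SLACK_CHANNEL_ID"]

def post_notification(text: str) -> None:
    """Post an operational event notification to the configured Slack channel."""
    response = requests.post(
        "https://slack.com/api/chat.postMessage",
        headers={"Authorization": f"Bearer {SLACK_ACCESS_TOKEN}"},
        json={"channel": SLACK_CHANNEL_ID, "text": text},
        timeout=10,
    )
    response.raise_for_status()

post_notification("AWS Health event received: scheduled maintenance affecting selected resources.")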

The cost of running and hosting the solution depends on the actual consumption of queries and the size of the vector store and the Amazon Kendra document libraries. See Amazon Bedrock pricing, Amazon OpenSearch pricing, and Amazon Kendra pricing for details.

The full code repository is available in the accompanying GitHub repo.

The following diagram illustrates the solution architecture.

Solution architecture diagram

Figure – solution architecture diagram

Solution walk-through

The solution consists of three microservice layers, which we discuss in the following sections.

Event processing layer

The event processing layer manages notifications, acknowledgments, and triage of actions. Its main logic is controlled by two key workflows implemented using Step Functions.

  • Event orchestration workflow – This workflow is subscribed to and invoked by operational events delivered to the main Amazon EventBridge hub. It sends HealthEventAdded or SecHubEventAdded control events back to the main event hub, following the workflow shown in the following figure.

Event orchestration workflow

Figure – Event orchestration workflow

  • Event notification workflow – This workflow formats the notifications that are exchanged between Slack chat and the backend microservices. It listens to control events such as HealthEventAdded and SecHubEventAdded; a sketch of how these control events are published follows the next figure.

Event notification workflow

Figure – Event notification workflow
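
The following is a minimal sketch of how a task in these workflows could publish a control event such as HealthEventAdded to the main event hub. In the solution this is wired up as Step Functions tasks through the AWS CDK; the source name and detail fields here are illustrative placeholders.

import json
import boto3

events = boto3.client("events")

events.put_events(
    Entries=[
        {
            "EventBusName": "AiOpsStatefulStackAiOpsEventBus",  # the main event hub referenced in the .env example
            "Source": "aiops.event-orchestration",  # placeholder source name
            "DetailType": "HealthEventAdded",
            "Detail": json.dumps(
                {
                    "eventArn": "arn:aws:health:us-east-1::event/EXAMPLE",  # placeholder
                    "service": "EC2",
                    "statusCode": "upcoming",
                }
            ),
        }
    ]
)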

AI layer

The AI layer handles the interactions between Agents for Amazon Bedrock, Knowledge Bases for Amazon Bedrock, and the UI (Slack chat). It has several key components.

OpsAgent is an operations assistant powered by Anthropic Claude 3 Haiku on Amazon Bedrock. It reacts to operational events based on the event type and text descriptions. OpsAgent is supported by two other AI model endpoints on Amazon Bedrock with different knowledge domains. An action group is defined and attached to OpsAgent, allowing it to solve more complex problems by orchestrating the work of AI endpoints and taking actions such as creating tickets without human supervision.

OpsAgent is pre-prompted with required company policies and guidelines to perform event filtering, triage, and ITSM actions based on your requirements. See the sample escalation policy in the GitHub repo (between escalation_runbook tags).

OpsAgent uses two supporting AI model endpoints:

  1. The events expert endpoint uses the Amazon Titan foundation model (FM) in Amazon Bedrock and Amazon OpenSearch Serverless to answer questions about operational events using Retrieval Augmented Generation (RAG).
  2. The ask-aws endpoint uses the Amazon Titan model and Amazon Kendra as the RAG source. It contains the latest AWS documentation on selected topics. You must synchronize the Amazon Kendra data sources to ensure the underlying AI model is using the latest documentation. You can do this using the AWS Management Console after the solution is deployed.

These dedicated endpoints with specialized RAG data sources help break down complex tasks, improve accuracy, and make sure the correct model is used.
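
As an illustration of what querying a knowledge base-backed endpoint (such as the events expert endpoint) can look like, the following sketch uses the Amazon Bedrock RetrieveAndGenerate API. The knowledge base ID and model ARN are placeholders; in the solution, these calls are orchestrated by the Step Functions workflows described next.

import boto3

bedrock_agent_runtime = boto3.client("bedrock-agent-runtime")

response = bedrock_agent_runtime.retrieve_and_generate(
    input={"text": "Which open AWS Health events affect our EC2 fleet this week?"},
    retrieveAndGenerateConfiguration={
        "type": "KNOWLEDGE_BASE",
        "knowledgeBaseConfiguration": {
            "knowledgeBaseId": "<your-knowledge-base-id>",  # placeholder
            "modelArn": "arn:aws:bedrock:us-east-1::foundation-model/amazon.titan-text-premier-v1:0",  # placeholder
        },
    },
)

print(response["output"]["text"])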

The AI layer also includes two AI orchestration Step Functions workflows. The workflows manage the AI agent, the AI model endpoints, and the interaction with the user (through Slack chat):

  • The AI integration workflow defines how the operations assistant reacts to operational events based on the event type and the text descriptions of those events. The following figure illustrates the workflow.

AI integration workflow

Figure – AI integration workflow

  • The AI chatbot workflow manages the interaction between users and the OpsAgent assistant through a chat interface. The chatbot handles chat sessions and context.

AI chatbot workflow

Figure: AI chatbot workflow

Archiving and reporting layer

The archiving and reporting layer handles the streaming, storage, and extract, transform, and load (ETL) processing of operational event data. It also prepares a data lake for BI dashboards and reporting analysis. However, this solution doesn’t include an actual dashboard implementation; it prepares an operational event data lake for later development.

Use case examples

You can use this solution for automated event notification, autonomous event acknowledgement, and action triage by setting up a virtual supervisor or operator that follows your organization’s policies. The virtual operator is equipped with multiple AI capabilities—each of which is specialized in a specific knowledge domain—such as generating recommended actions or taking actions to issue tickets in ITSM tools, as shown in the following figure.

use case example 1

Figure – use case example 1

The virtual event supervisor filters out noise based on your policies, as illustrated in the following figure.

use case example 2

Figure – use case example 2

AI can use the tickets that are related to a specific AWS Health event to provide the latest status updates on those tickets, as shown in the following figure.

use case example 3

Figure – use case example 3

The following figure shows how the assistant evaluates complex threads of operational events to provide valuable insights.

use case example 4

Figure – use case example 4

The following figure shows a more sophisticated use case.

use case example 5

Figure – use case example 5

Prerequisites

To deploy this solution, you must meet the following prerequisites:

  • Have at least one AWS account with permissions to create and manage the necessary resources and components for the application. If you don’t have an AWS account, see How do I create and activate a new Amazon Web Services account?. The project uses a typical setup of two accounts, where one is the organization’s health administrator account and the other is the worker account hosting backend microservices. The worker account can be the same as the administrator account if you choose to use a single account setup.
  • Make sure you have access to Amazon Bedrock FMs in your preferred AWS Region in the worker account. The FMs used in the post are Anthropic Claude 3 Haiku, and Amazon Titan Text G1 – Premier.
  • Enable the AWS Health Organization view and delegate an administrator account in your AWS management account if you want to manage AWS Health events across your entire organization. Enabling AWS Health Organization view is optional if you only need to source operational events from a single account. Delegation of a separate administrator account for AWS Health is also optional if you want to manage all operational events from your AWS management account.
  • Enable AWS Security Hub in your AWS management account. Optionally, enable Security Hub with Organizations integration if you want to monitor security findings for the entire organization instead of just a single account.
  • Have a Slack workspace with permissions to configure a Slack app and set up a channel.
  • Install the AWS CDK in your local environment and bootstrap it in your AWS accounts; it will be used to deploy the solution into the administration account and worker account.
  • Have AWS Serverless Application Model (AWS SAM) and Docker installed in your development environment to build AWS Lambda packages.

Create a Slack app and set up a channel

Set up Slack:

  1. Create a Slack app from the manifest template, using the content of the slack-app-manifest.json file from the GitHub repository.
  2. Install your app into your workspace, and take note of the Bot User OAuth Token value to be used in later steps.
  3. Take note of the Verification Token value under Basic Information of your app; you will need it in later steps.
  4. In your Slack desktop app, go to your workspace and add the newly created app.
  5. Create a Slack channel and add the newly created app as an integrated app to the channel.
  6. Find and take note of the channel ID by choosing (right-clicking) the channel name, choosing Additional options to access the More menu, and choosing Open details to see the channel details.

Prepare your deployment environment

Use the following commands to ready your deployment environment for the worker account. Make sure you aren’t running the command under an existing AWS CDK project root directory. This step is required only if you chose a worker account that’s different from the administration account:

# Make sure your shell session environment is configured to access the worker
# account of your choice, for detailed guidance on how to configure, refer to 
# https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html  
# Note that in this step you are bootstrapping your worker account in such a way 
# that your administration account is trusted to execute CloudFormation deployment in
# your worker account, the following command uses an example execution role policy of 'AdministratorAccess',
# you can swap it for other policies of your own for least privilege best practice,
# for more information on the topic, refer to https://repost.aws/knowledge-center/cdk-customize-bootstrap-cfntoolkit
cdk bootstrap aws://<replace with your AWS account id of the worker account>/<replace with the region where your worker services is> --trust <replace with your AWS account id of the administration account> --cloudformation-execution-policies 'arn:aws:iam::aws:policy/AdministratorAccess' --trust-for-lookup <replace with your AWS account id of the administration account>

Use the following commands to ready your deployment environment for the administration account. Make sure you aren’t running the commands under an existing AWS CDK project root directory:

# Make sure your shell session environment is configured to access the administration 
# account of your choice, for detailed guidance on how to configure, refer to 
# https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html
# Note 'us-east-1' region is required for receiving AWS Health events associated with
# services that operate in AWS global region.
cdk bootstrap <replace with your AWS account id of the administration account>/us-east-1

# Optional, if you have your cloud infrastructures hosted in other AWS regions than 'us-east-1',
# repeat the below commands for each region
cdk bootstrap <replace with your AWS account id of the administration account>/<replace with the region name, e.g. us-west-2>

Copy the GitHub repo to your local directory

Use the following code to copy the GitHub repo to your local directory:

git clone https://github.com/aws-samples/ops-health-ai.git
cd ops-health-ai
npm install
cd lambda/src
# Depending on your build environment, you might want to change the arch type to 'x86'
# or 'arm' in lambda/src/template.yaml file before build 
sam build --use-container
cd ../..

Create an .env file

Create an .env file containing the following code under the project root directory. Replace the variable placeholders with your account information:

CDK_ADMIN_ACCOUNT=<replace with your 12 digits administration AWS account id>
CDK_PROCESSING_ACCOUNT=<replace with your 12 digits worker AWS account id. This account id is the same as the admin account id if using single account setup>
EVENT_REGIONS=us-east-1,<region 1 of where your infrastructures are hosted>,<region 2 of where your infrastructures are hosted>
CDK_PROCESSING_REGION=<replace with the region where you want the worker services to be, e.g. us-east-1>
EVENT_HUB_ARN=arn:aws:events:<replace with the worker service region>:<replace with the worker service account id>:event-bus/AiOpsStatefulStackAiOpsEventBus
SLACK_CHANNEL_ID=<your Slack channel ID noted down from earlier step>
SLACK_APP_VERIFICATION_TOKEN=<replace with your Slack app verification token>
SLACK_ACCESS_TOKEN=<replace with your Slack Bot User OAuth Token value>

Deploy the solution using the AWS CDK

Deploy the processing microservice to your worker account (the worker account can be the same as your administrator account):

  1. In the project root directory, run the following command: cdk deploy --all --require-approval never
  2. Capture the HandleSlackCommApiUrl stack output URL.
  3. Go to your Slack app, navigate to Event Subscriptions, and choose to change the Request URL.
  4. Update the URL value with the stack output URL and save your settings.

Test the solution

Test the solution by sending a mock operational event to your administration account. Run the following AWS Command Line Interface (AWS CLI) command:
aws events put-events --entries file://test-events/mockup-events.json

You will receive Slack messages notifying you about the mock event, followed by an automatic update from the AI assistant reporting the actions it took and the reasons for each action. You don’t need to manually choose Accept or Discharge for each event.

Try creating more mock events based on your past operational events and test them with the use cases described in the Use case examples section.

If you have just enabled AWS Security Hub in your administrator account, you might need to wait for up to 24 hours for any findings to be reported and acted on by the solution. AWS Health events, on the other hand, will be reported whenever applicable.

Clean up

To clean up your resources, run the following command in the CDK project directory: cdk destroy --all

Conclusion

This solution uses AI to help you automate complex tasks in cloud operational event management, bringing new opportunities to further streamline cloud operations management at scale with improved productivity and operational resilience.

To learn more, see the documentation for the AWS services used in this solution.


About the author

Sean Xiaohai Wang is a Senior Technical Account Manager at Amazon Web Services. He helps enterprise customers build and operate efficiently on AWS.

Read More