ReALM: Reference Resolution as Language Modeling

Reference resolution is an important problem, one that is essential to understand and successfully handle context of different kinds. This context includes both previous turns and context that pertains to non-conversational entities, such as entities on the user’s screen or those running in the background. While LLMs have been shown to be extremely powerful for a variety of tasks, their use in reference resolution, particularly for non-conversational entities, remains underutilized. This paper demonstrates how LLMs can be used to create an effective system to resolve references of various… (Apple Machine Learning Research)

Novel-View Acoustic Synthesis From 3D Reconstructed Rooms

We investigate the benefit of combining blind audio recordings with 3D scene information for novel-view acoustic synthesis. Given audio recordings from 2-4 microphones and the 3D geometry and material of a scene containing multiple unknown sound sources, we estimate the sound anywhere in the scene. We identify the main challenges of novel-view acoustic synthesis as sound source localization, separation, and dereverberation. While naively training an end-to-end network fails to produce high-quality results, we show that incorporating room impulse responses (RIRs) derived from 3D reconstructed… (Apple Machine Learning Research)

RepCNN: Micro-Sized, Mighty Models for Wakeword Detection

Always-on machine learning models require a very low memory and compute footprint. Their restricted parameter count limits the model’s capacity to learn, and the effectiveness of the usual training algorithms to find the best parameters. Here we show that a small convolutional model can be better trained by first refactoring its computation into a larger redundant multi-branched architecture. Then, for inference, we algebraically re-parameterize the trained model into the single-branched form with fewer parameters for a lower memory footprint and compute cost. Using this technique, we show… (Apple Machine Learning Research)

Delight your customers with great conversational experiences via QnABot, a generative AI chatbot

QnABot on AWS (an AWS Solution) now provides access to Amazon Bedrock foundation models (FMs) and Knowledge Bases for Amazon Bedrock, a fully managed end-to-end Retrieval Augmented Generation (RAG) workflow. You can now provide contextual information from your private data sources to create rich, conversational experiences.

The advent of generative artificial intelligence (AI) provides organizations with unique opportunities to digitally transform customer experiences. Enterprises with contact center operations are looking to improve customer satisfaction by providing self-service, conversational, interactive chatbots that have natural language understanding (NLU). Enterprises want to automate frequently asked transactional questions, provide a friendly conversational interface, and improve operational efficiency. In turn, customers can ask a variety of questions and receive accurate answers powered by generative AI.

In this post, we discuss how to use QnABot on AWS to deploy a fully functional chatbot integrated with other AWS services, and delight your customers with human agent-like conversational experiences.

Solution overview

QnABot on AWS is an AWS Solution that enterprises can use to enable a multi-channel, multi-language chatbot with NLU to improve end customer experiences. QnABot provides a flexible, tiered conversational interface empowering enterprises to meet customers where they are and provide accurate responses. Some responses need to be exact (for example, regulated industries like healthcare or capital markets), some responses need to be searched from large, indexed data sources and cited, and some answers need to be generated on the fly, conversationally, based on semantic context. With QnABot on AWS, you can achieve all of the above by deploying the solution using an AWS CloudFormation template, with no coding required. The solution is extensible, uses AWS AI and machine learning (ML) services, and integrates with multiple channels such as voice, web, and text (SMS).

QnABot on AWS provides access to multiple FMs through Amazon Bedrock, so you can create conversational interfaces based on your customers’ language needs (such as Spanish, English, or French), sophistication of questions, and accuracy of responses based on user intent. You now have the capability to access various large language models (LLMs) from leading AI enterprises (such as Amazon Titan, Anthropic Claude 3, Cohere Command, Meta Llama 3, Mistral AI Large Model, and others on Amazon Bedrock) to find the model best suited for your use case. Additionally, native integration with Knowledge Bases for Amazon Bedrock allows you to retrieve specific, relevant data from your data sources via pre-built data source connectors (Amazon Simple Storage Service (Amazon S3), Confluence, Microsoft SharePoint, Salesforce, or web crawlers), which is automatically converted to text embeddings and stored in a vector database of your choice. You can then retrieve your company-specific information with source attribution (such as citations) to improve transparency and minimize hallucinations. Lastly, if you don’t want to set up custom integrations with large data sources, you can simply upload your documents and support multi-turn conversations. With prompt engineering, managed RAG workflows, and access to multiple FMs, you can provide your customers rich, human agent-like experiences with precise answers.

Deploying the QnABot solution builds the following environment in the AWS Cloud.

Figure 1: QnABot Architecture Diagram

The high-level process flow for the solution components deployed with the CloudFormation template is as follows:

  1. The admin deploys the solution into their AWS account, opens the Content Designer UI or Amazon Lex web client, and uses Amazon Cognito to authenticate.
  2. After authentication, Amazon API Gateway and Amazon S3 deliver the contents of the Content Designer UI.
  3. The admin configures questions and answers in the Content Designer and the UI sends requests to API Gateway to save the questions and answers.
  4. The Content Designer AWS Lambda function saves the input in Amazon OpenSearch Service in a question bank index. If using text embeddings, these requests first pass through an embeddings LLM hosted on Amazon Bedrock or Amazon SageMaker to generate embeddings before being saved into the question bank on OpenSearch Service.
  5. Users of the chatbot interact with Amazon Lex through the web client UI, Amazon Alexa, or Amazon Connect.
  6. Amazon Lex forwards requests to the Bot Fulfillment Lambda function. Users can also send requests to this Lambda function through Amazon Alexa devices.
  7. The user and chat information is stored in Amazon DynamoDB so that follow-up questions can be disambiguated using previous question and answer context.
  8. The Bot Fulfillment Lambda function takes the user’s input and uses Amazon Comprehend and Amazon Translate (if necessary) to translate non-native language requests to the native language selected by the user during the deployment, and then looks up the answer in OpenSearch Service. If using LLM features such as text generation and text embeddings, these requests first pass through various LLM models hosted on Amazon Bedrock or SageMaker to generate the search query and embeddings to compare with those saved in the question bank on OpenSearch Service.
  9. If no match is returned from the OpenSearch Service question bank, then the Bot Fulfillment Lambda function forwards the request as follows:
    1. If an Amazon Kendra index is configured for fallback, the Bot Fulfillment Lambda function forwards the request to Amazon Kendra. The text generation LLM can optionally be used to create the search query and synthesize a response from the returned document excerpts.
    2. If a knowledge base ID is configured, the Bot Fulfillment Lambda function forwards the request to the knowledge base and uses the RetrieveAndGenerate API to fetch the relevant results for a user query, augment the FM’s prompt, and return the response (see the sketch after this list).
  10. User interactions with the Bot Fulfillment function generate logs and metrics data, which is sent to Amazon Kinesis Data Firehose and then to Amazon S3 for later data analysis.
  11. OpenSearch Dashboards can be used to view usage history, logged utterances, no hits utterances, positive user feedback, and negative user feedback, and also provides the ability to create custom reports.
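
To make step 9b concrete, the following is a minimal sketch of calling the RetrieveAndGenerate API with boto3. The knowledge base ID and model ARN are placeholder assumptions, and QnABot’s internal implementation may differ:

import boto3

# Placeholder values -- substitute your knowledge base ID and a model ARN
# you have access to in Amazon Bedrock.
KNOWLEDGE_BASE_ID = "YOUR_KB_ID"
MODEL_ARN = (
    "arn:aws:bedrock:us-east-1::foundation-model/"
    "anthropic.claude-3-sonnet-20240229-v1:0"
)

client = boto3.client("bedrock-agent-runtime")

def answer_from_knowledge_base(user_query: str) -> str:
    """Retrieve relevant passages for the query and return a generated answer."""
    response = client.retrieve_and_generate(
        input={"text": user_query},
        retrieveAndGenerateConfiguration={
            "type": "KNOWLEDGE_BASE",
            "knowledgeBaseConfiguration": {
                "knowledgeBaseId": KNOWLEDGE_BASE_ID,
                "modelArn": MODEL_ARN,
            },
        },
    )
    return response["output"]["text"]

print(answer_from_knowledge_base("Are there any upfront fees with ECS?"))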

Prerequisites

To get started, you need the following:

  • An AWS account
  • An active deployment of QnABot on AWS (version 6.0.0 or later)
  • Amazon Bedrock model access (required) for all embeddings and LLM models that will be used in QnABot

Figure 2: Request Access to Amazon Bedrock Foundation Models (FMs)

In the following sections, we explore some of QnABot’s generative AI features.

Semantic question matching using an embeddings LLM

QnABot on AWS can use LLM-generated text embeddings to provide semantic search capabilities. The goal of this feature is to improve question matching accuracy while reducing the amount of tuning required when compared to the default OpenSearch Service keyword-based matching.

Some of the benefits include:

  • Improved FAQ accuracy from semantic matching vs. keyword matching (comparing the meaning vs. comparing individual words)
  • Fewer training utterances required to match a diverse set of queries
  • Better multi-language support, because translated utterances only need to match the meaning of the stored text, not the wording

Configure Amazon Bedrock to enable semantic question matching

To enable these expanded semantic search capabilities, QnABot uses an Amazon Bedrock FM, specified with the EmbeddingsBedrockModelId CloudFormation stack parameter, to generate text embeddings. These models offer strong performance and operate on a pay-per-request model. QnABot on AWS supports a set of Amazon Bedrock embeddings models; refer to the solution documentation for the list current at the time of writing.

For the CloudFormation stack, set the following parameters:

  • Set EmbeddingsAPI to BEDROCK
  • Set EmbeddingsBedrockModelId to one of the available options
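
If you are updating an existing stack, one way to apply these settings is with boto3, as in the following sketch. The stack name and model ID are illustrative assumptions, and the same pattern applies to the LLMApi and LLMBedrockModelId parameters discussed later in this post:

import boto3

cfn = boto3.client("cloudformation")

# Hypothetical stack name and an illustrative embeddings model ID.
# In practice, also pass UsePreviousValue=True for every other stack
# parameter you want to keep unchanged.
cfn.update_stack(
    StackName="QnABot",
    UsePreviousTemplate=True,
    Parameters=[
        {"ParameterKey": "EmbeddingsAPI", "ParameterValue": "BEDROCK"},
        {"ParameterKey": "EmbeddingsBedrockModelId", "ParameterValue": "amazon.titan-embed-text-v1"},
    ],
    Capabilities=["CAPABILITY_NAMED_IAM"],
)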

For example, with semantic matching enabled, the question “What’s the address of the White House?” matches to “Where does the President live?” This example doesn’t match using keywords because they don’t share any of the same words.

Figure 3: Semantic matching in QnABot

In the content designer UI, you can set ENABLE_DEBUG_RESPONSE to true to see the user input, the source of the answer, and any errors, as illustrated in the preceding screenshot.

You can also evaluate the matching score on the TEST tab in the content designer UI. In this example, we add a match on “qna item question” with the question “Where does the President live?”

Figure 4: Test and evaluate answers in QnABot

Similarly, you can try a match on “item text passage” with the question “Where did Humpty Dumpty sit?”

Figure 5: Match items or text passages in QnABot

Recommendations for tuning with an embeddings LLM

When using embeddings in QnABot, we recommend generalizing questions, because more user utterances will match a general statement. For example, the embeddings model will cluster “checking” and “savings” with “account,” so if you want to match both account types, use “account” in your questions.

Similarly, for the question and utterance of “transfer to an agent,” consider using “transfer to someone” because it will better match with “agent,” “representative,” “human,” “person,” and so on.

In addition, we recommend tuning EMBEDDINGS_SCORE_THRESHOLD, EMBEDDINGS_SCORE_ANSWER_THRESHOLD, and EMBEDDINGS_TEXT_PASSAGE_SCORE_THRESHOLD based on the scores you observe. The default values are generalized across multiple models, but you might need to adjust them based on your embeddings model and your experiments.

Text generation and query disambiguation using a text LLM

QnABot on AWS can use LLMs to provide a richer, more conversational chat experience. The goal of these features is to minimize the number of individually curated answers administrators must maintain, improve question matching accuracy through query disambiguation, and enable the solution to provide more concise answers to users, especially when using a knowledge base in Amazon Bedrock or the Amazon Kendra fallback feature.

Configure an Amazon Bedrock FM with AWS CloudFormation

To enable these capabilities, QnABot uses one of the Amazon Bedrock FMs, specified with the LLMBedrockModelId CloudFormation stack parameter, for text generation and query disambiguation. These models provide strong performance and operate on a pay-per-request model.

For the CloudFormation stack, set the following parameters:

  • Set LLMApi to BEDROCK
  • Set LLMBedrockModelId to one of the available LLM options

Figure 6: Setup QnABot to use Bedrock FMs

Query disambiguation (LLM-generated query)

By using an LLM, QnABot can take the user’s chat history and generate a standalone question for the current utterance. This enables users to ask follow-up questions that on their own may not be answerable without context of the conversation. The new disambiguated, or standalone, question can then be used as a search query to retrieve the best FAQ, passage, or Amazon Kendra match.

In QnABot’s Content Designer, you can further customize the prompt and model listed in the Query Matching section:

  • LLM_GENERATE_QUERY_PROMPT_TEMPLATE – The prompt template used to construct a prompt for the LLM to disambiguate a follow-up question. The template may use the following placeholders:
    • history – A placeholder for the last LLM_CHAT_HISTORY_MAX_MESSAGES messages in the conversational history, to provide conversational context.
    • input – A placeholder for the current user utterance or question.
  • LLM_GENERATE_QUERY_MODEL_PARAMS – The parameters sent to the LLM model when disambiguating follow-up questions. Refer to the relevant model documentation for additional values that the model provider accepts.
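
For illustration, a disambiguation prompt template along these lines uses both placeholders. This is a sketch in the spirit of the feature, not the shipped default:

Given the following conversation and a follow-up question, rephrase the follow-up question to be a standalone question.

Chat History: {history}
Follow-up question: {input}
Standalone question: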

The following screenshot shows an example with the new LLM disambiguation feature enabled, given the chat history context after answering “Who was Little Bo Peep?” and the follow-up question “Did she find them again?”

Figure 7: LLM query disambiguation feature enabled

QnABot rewrites that question to provide all the context required to search for the relevant FAQ or passage: “Did Little Bo Peep find her lost sheep again?”

Figure 8: With LLM query disambiguation, context is maintained

Answer text generation using QnABot

You can now generate answers to questions from context provided by knowledge base search results, or from text passages created or imported directly into QnABot. This allows you to generate answers that reduce the number of FAQs you have to maintain, because you can now synthesize concise answers from your existing documents in a knowledge base, Amazon Kendra index, or document passages stored in QnABot as text items. Additionally, your generated answers can be concise and therefore suitable for voice or contact center chatbots, website bots, and SMS bots. Lastly, these generated answers are compatible with the solution’s multi-language support—customers can interact in their chosen languages and receive generated answers in the same language.

With QnABot, you can use two different data sources to generate responses: text passages or a knowledge base in Amazon Bedrock.

Generate answers to questions from text passages

In the content designer web interface, administrators can store full text passages for QnABot on AWS to use. When a user asks a question that matches a passage, the solution can use LLMs to answer the question based on information found within the passage. We highly recommend using this option with semantic question matching using Amazon Bedrock text embeddings. In the QnABot content designer, you can further customize the prompt and model listed under Text Generation in the General Settings section.

Let’s look at a text passage example:

  1. In the Content Designer, choose Add.
  2. Select the text, enter an item ID and a passage, and choose Create.

You can also import your passages from a JSON file using the Content Designer Import feature. On the tools menu, choose Import, open Examples/Extensions, and choose LOAD next to TextPassage-NurseryRhymeExamples to import two nursery rhyme text items.
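
For reference, a text passage item in an import file might be shaped roughly as follows. The field names here (qna, qid, type, passage) are assumptions based on the Content Designer import format, so export an existing item to confirm the authoritative schema:

import json

# Assumed structure for a QnABot text passage import file -- confirm the
# schema by exporting an existing item from the Content Designer.
items = {
    "qna": [
        {
            "qid": "HumptyDumpty.Passage",
            "type": "text",
            "passage": (
                "Humpty Dumpty sat on a wall. Humpty Dumpty had a great fall. "
                "All the king's horses and all the king's men couldn't put "
                "Humpty together again."
            ),
        }
    ]
}

with open("text-passages.json", "w") as f:
    json.dump(items, f, indent=2)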

The following example shows QnABot generating an answer using a text passage item that contains the nursery rhyme, in response to the question “Where did Humpty Dumpty sit?”

Figure 9: Generate answers from text passages

You can also use query disambiguation and text generation together, by asking “Who tried to fix Humpty Dumpty?” and the follow-up question “Did they succeed?”

Figure 10: Text generation with query disambiguation to maintain context

You can also modify LLM_QA_PROMPT_TEMPLATE in the Content Designer to answer in different languages. In the prompt, you can specify that prompts and answers should be in a particular language (for example, French or Spanish).

Figure 11: Answer in different languages

You can also specify answers in two languages with bulleted points.

Figure 12: Answer in multiple languages

RAG using an Amazon Bedrock knowledge base

By integrating with a knowledge base, QnABot on AWS can generate concise answers to users’ questions from configured data sources, saving users from sifting through longer text passages to find an answer. You can also create your own knowledge base from files stored in an S3 bucket. Amazon Bedrock knowledge bases with QnABot don’t require EmbeddingsApi and LLMApi, because the embeddings and generative response are already provided by the knowledge base. To enable this option, create an Amazon Bedrock knowledge base and use its ID for the CloudFormation stack parameter BedrockKnowledgeBaseId.

To configure QnABot to use the knowledge base, refer to Create a knowledge base. The following is a quick setup guide to get started:

  1. Provide your knowledge base details.
Figure 13: Setup Amazon Bedrock Knowledge Base for RAG use cases

  2. Configure your data source based on the available options. For this example, we use Amazon S3 as the data source; note that the bucket name must begin with qna or QNA.
Figure 14: Setup your RAG data sources for Amazon Knowledge Base

  3. Upload your documents to Amazon S3. For this example, we uploaded the aws-overview.pdf whitepaper to test the integration.
  4. Create or choose your vector store to allow Amazon Bedrock to store, update, and manage embeddings.
  5. Sync the data source and use your knowledge base ID for the CloudFormation stack parameter BedrockKnowledgeBaseId.
Figure 15: Complete setting up Amazon Bedrock Knowledge Base for your RAG use cases

In the QnABot Content Designer, you can customize additional settings listed under Text Generation using RAG with the Amazon Bedrock knowledge base.

QnABot on AWS can now answer questions from the AWS whitepapers, such as “What services are available in AWS for container orchestration?” and “Are there any upfront fees with ECS?”

Figure 16: Generate answers from your Amazon Bedrock Knowledge Base (RAG)

Conclusion

Customers expect quick and efficient service from enterprises in today’s fast-paced world. But providing an excellent customer experience can be challenging when the volume of inquiries outpaces the human resources employed to address them. Companies of all sizes can use QnABot on AWS with built-in Amazon Bedrock integrations to provide access to market-leading FMs, serve specialized lookup needs using RAG to reduce hallucinations, and deliver a friendly AI conversational experience. With QnABot on AWS, you can provide high-quality natural text conversations, content management, and multi-turn dialogues. The solution comes with one-click deployment for custom implementation, a content designer for Q&A management, and rich reporting. You can also integrate with contact center systems like Amazon Connect and Genesys Cloud CX. Get started with QnABot on AWS.


About the Author

Ajay Swamy is the Product Leader for Data, ML and Generative AI AWS Solutions. He specializes in building AWS Solutions (production-ready software packages) that deliver compelling value to customers by solving for their unique business needs. Other than QnABot on AWS, he manages Generative AI Application Builder, Enhanced Document Understanding, Discovering Hot Topics using Machine Learning, and other AWS Solutions. He lives with his wife and dog (Figaro) in New York, NY.

Abhishek Patil is a Software Development Engineer at Amazon Web Services (AWS) based in Atlanta, GA, USA. With over 7 years of experience in the tech industry, he specializes in building distributed software systems, with a primary focus on Generative AI and Machine Learning. Abhishek is a primary builder on AI solution QnABot on AWS and has contributed to other AWS Solutions including Discovering Hot Topics using Machine Learning and OSDU® Data Platform. Outside of work, Abhishek enjoys spending time outdoors, reading, resistance training, and practicing yoga.

GeForce NOW and CurseForge Bring Mod Support to ‘World of Warcraft: The War Within’ in the Cloud

Time to be wowed: GeForce NOW members can now stream World of Warcraft on supported devices with in-game mods powered by the CurseForge platform for WoW customization. With support for top mods, even the most hardcore raid leaders can play like a hero, thanks to the cloud.

Embark on a new adventure in Azeroth when the upcoming World of Warcraft expansion, The War Within, launches on Monday, Aug. 26, at 3 p.m. PT. GeForce NOW members who purchase the Epic Edition of The War Within will get early streaming access on Thursday, Aug. 22, at 3 p.m. PT.

And check out the five new games available in the ever-expanding GeForce NOW library this week, including the Psychonauts series from Double Fine Productions.

For those looking to upgrade their play or try out GeForce NOW for the first time, the Summer Sale is offering new one- and six-month Ultimate and Priority memberships at 50% off through this Sunday, Aug. 18.

Play Your Way

Return to World of Warcraft in style across supported devices with GeForce NOW and support for top CurseForge Addons. One of the most popular platforms for WoW Addons, CurseForge includes user interface customization, combat Addons, action bars, quest helpers and many more categories.

GeForce NOW has collaborated with CurseForge to include over 25 of the top Addons from its platform, available to Ultimate and Priority members — including Day Pass users — to seamlessly customize their WoW experience. The Addons are just as easy to implement as they would be while playing locally on a PC gaming rig, and a CurseForge account isn’t required — just launch WoW and enable Addons from the game’s menu.

Customize and conquer.

After clicking the new “Addons” button from the in-game menu, paying GeForce NOW members can choose the Addons they’d like to enable or opt to enable all. After that, CurseForge will ensure Addons update automatically, and GeForce NOW will remember the Addons selected on each game launch. Check out this article for more details.

Stream World of Warcraft and any mods that normally only work on PC across devices that the game is supported on, including SHIELD TVs, underpowered laptops, Chromebooks and handheld devices like the Steam Deck. Addons will work across all supported WoW experiences on GeForce NOW, including Classic and Cataclysm Classic. From ancient wonders to perilous dungeons, GeForce NOW is the best way for game veterans and newcomers alike to experience unparalleled adventure in the heart of Azeroth.

Members can show off their World of Warcraft Addons in the cloud by sharing a screenshot on social media using #ModsonGFN for a chance to be featured on GeForce NOW’s channels.

Dive to New Depths

Get ready to embark on a journey to the heart of Azeroth in The War Within and unveil the mysteries lying beneath the world’s surface, with early streaming access for those who purchase the Epic Edition. GeForce NOW members will be able to jump right in without waiting around for game updates.

Dig deep.

Experience the thrill of Warbands and an account-wide progression system, and soar through the skies with the new Skyriding feature. Dive into the Radiant Echoes event, collect Residual Memories and gear up with new items for Warband collections. With class and system updates, this expansion sets the stage for the epic tales and battles that await in the depths of Azeroth. Answer the call to arms when the saga of The War Within begins.

The full update for The War Within promises to delve deeper into the unexplored depths of Azeroth, continuing the story and expanding the gameplay experience. Look forward to new zones, dungeons and raids, as well as class and system updates — all at high performance, streaming on GeForce NOW.

Mind Over Matter

The mind is a dangerous place.

The Psychonauts franchise is the beloved action-adventure platformer from Double Fine Productions. Follow the story of Razputin “Raz” Aquato, a young psychic who runs away from the circus to join a summer camp for psychic spies-in-training. Experience unique level design and explore the minds of various characters, each filled with imaginative and often bizarre landscapes that reflect their psychological states.

Brains over brawn.

In Psychonauts 2, Raz is a full-fledged Psychonaut embarking on his first official mission. The sequel delves deeper into Raz’s family background and the workings of the Psychonauts organization. Explore intricately designed mental worlds, all while engaging in platforming and puzzle-solving gameplay.

With a GeForce NOW Ultimate or Priority account, the mind-bending world of Psychonauts is just a click away. Dive in, explore the depths of the human psyche and uncover the secrets that lie within.

Get With the Newness

The hunt is on.

Hunt: Showdown 1896 from Crytek marks a new era for the high-stakes, tactical first-person extraction game. The new PC update, available for members today, moves from the original game’s swamps of Louisiana to the sprawling mountains of Colorado. The brand-new Mammon’s Gulch map brings mountains, stunning vistas and grueling mines to the gaming experience, providing all-new elevation points and strategic angles for players to hide out and take out enemy Hunters.

In addition, members can look for the following:

  • Level Zero: Extraction (New release on Steam, Aug. 13)
  • shapez 2 (New release on Steam, Aug. 15)
  • Car Manufacture (Steam)
  • Psychonauts (Steam)
  • Psychonauts 2 (Steam and Xbox, available on PC Game Pass)

What are you planning to play this weekend? Let us know on X or in the comments below.

Introducing document-level sync reports: Enhanced data sync visibility in Amazon Q Business

Amazon Q Business is a fully managed, generative artificial intelligence (AI)-powered assistant that helps enterprises unlock the value of their data and knowledge. With Amazon Q, you can quickly find answers to questions, generate summaries and content, and complete tasks by using the information and expertise stored across your company’s various data sources and enterprise systems. At the core of this capability are native data source connectors that seamlessly integrate and index content from multiple repositories into a unified index. This enables the Amazon Q large language model (LLM) to provide accurate, well-written answers by drawing from the consolidated data and information. The data source connectors act as a bridge, synchronizing content from disparate systems like Salesforce, Jira, and SharePoint into a centralized index that powers the natural language understanding and generative abilities of Amazon Q.

Customers appreciate that Amazon Q Business securely connects to over 40 data sources. While using these data sources, they want better visibility into the document processing lifecycle during data source sync jobs. They want to know the status of each document they attempted to crawl and index, and the ability to troubleshoot why certain documents were not returned with the expected answers. Additionally, they want access to metadata, timestamps, and access control lists (ACLs) for the indexed documents.

We are pleased to announce a new feature now available in Amazon Q Business that significantly improves visibility into data source sync operations. The latest release introduces a comprehensive document-level report incorporated into the sync history, providing administrators with granular indexing status, metadata, and ACL details for every document processed during a data source sync job. This enhancement to sync job observability enables administrators to quickly investigate and resolve ingestion or access issues encountered while setting up an Amazon Q Business application. The detailed document reports are persisted in the new SYNC_RUN_HISTORY_REPORT log stream under the Amazon Q Business application log group, so critical sync job details are available on-demand when troubleshooting.

Lifecycle of a document in a data source sync run job

In this section, we examine the lifecycle of a document within a data source sync in Amazon Q Business. This provides valuable insight into the sync process. The data source sync comprises three key stages: crawling, syncing, and indexing. Crawling involves the connector connecting to the data source and extracting documents meeting the defined sync scope according to the data source configuration. These documents are then synced to Amazon Q Business during the syncing phase. Finally, indexing makes the synced documents searchable within the Amazon Q Business environment.

The following diagram shows a flowchart of a sync run job.

Crawling stage

The first stage is the crawling stage, where the connector crawls all documents and their metadata from the data source. During this stage, the connector also compares the checksum of the document against the Amazon Q index to figure out if a particular document needs to be added, modified, or deleted from the index. This operation corresponds to the CrawlAction field in the sync run history report.

If the document is unmodified, it is marked as UNMODIFIED and skipped in the rest of the stages. If any document fails in the crawling stage, for example due to throttling errors, broken content, or if the document size is too big, that document is marked as failed in the sync run history report with the CrawlStatus as FAILED. If the document was skipped due to any validation errors, its CrawlStatus is marked as SKIPPED. These documents are not sent forward to the next stage. All successful documents are marked as SUCCESS and are sent forward.

We also capture the ACLs and metadata on each document in this stage to be able to add it to the sync run history report.

Syncing stage

During the syncing stage, the document is sent to Amazon Q Business ingestion service APIs like BatchPutDocument and BatchDeleteDocument. After a document is submitted to these APIs, Amazon Q Business runs validation checks on the submitted documents. If any document fails these checks, its SyncStatus is marked as FAILED. If there is an irrecoverable error for a particular document, it is marked as SKIPPED and other documents are sent forward.

Indexing stage

In this step, Amazon Q Business parses the document, processes it according to its content type, and persists it in the index. If the document fails to be persisted, its IndexStatus is marked as FAILED; otherwise, it is marked as SUCCESS.

After the statuses of all the stages have been captured, we emit them as an Amazon CloudWatch event to the customer’s AWS account.

Key features and benefits of document-level reports

The following are the key features and benefits of the new document-level report in Amazon Q Business applications:

  • Enhanced sync run history page – A new Actions column has been added to the sync run history page, providing access to the document-level report for each sync run.
  • Dedicated log stream – A new log stream named SYNC_RUN_HISTORY_REPORT has been created in the Amazon Q Business CloudWatch log group, containing the document-level report.
  • Comprehensive document information – The document-level report includes the following information for each document:
    • Document ID – The document ID, inherited directly from the data source or mapped by the customer in the data source field mappings.
    • Document title – The title of the document, taken from the data source or mapped by the customer in the data source field mappings.
    • Consolidated document status – The final consolidated status of the document: SUCCESS, FAILED, or SKIPPED. If the document was successfully processed in all stages, the value is SUCCESS; if it failed or was skipped in any stage, the value is FAILED or SKIPPED.
    • Error message (if the document failed) – The error message with which a document failed. If a document was skipped due to throttling errors or any internal errors, that is shown in this field.
    • Crawl status – Whether the document was crawled successfully from the data source. This correlates to the syncing-crawling state in the data source sync.
    • Sync status – Whether the document was sent for syncing successfully. This correlates to the syncing-indexing state in the data source sync.
    • Index status – Whether the document was successfully persisted in the index.
    • ACLs – A list of document-level permissions that were crawled from the data source. Each element in the list contains the following:
      • Global name – The email/username of the user. This field is mapped across multiple data sources. For example, if a user has three data sources (Confluence, SharePoint, and Gmail) with the local user IDs confluence_user, sharepoint_user, and gmail_user, respectively, and the email address user@email.com is the globalName in the ACL for all of them, then Amazon Q Business understands that all of these local user IDs map to the same global name.
      • Name – The local unique ID of the user, assigned by the data source.
      • Type – The principal type, either USER or GROUP.
      • Is Federated – A Boolean flag indicating whether the group is at the INDEX level (true) or the DATASOURCE level (false).
      • Access – Whether the user is explicitly allowed or denied access. Values can be ALLOWED or DENIED.
      • Data source ID – The data source ID. For federated groups (INDEX level), this field is null.
    • Metadata – The metadata fields (other than the ACL) that were pulled from the data source. This list also includes the metadata fields mapped by the customer in the data source field mappings, as well as extra metadata fields added by the connector.
    • Hashed document ID (for troubleshooting assistance) – To safeguard your data privacy, the report presents a secure, one-way hash of the document identifier. This value enables the Amazon Q Business team to locate and analyze the specific document within the logs, should you encounter an issue that requires further investigation and resolution.
    • Timestamp – When the document status was logged in CloudWatch.
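
To make the report concrete, an individual log entry might look roughly like the following. This is an illustrative sketch assembled from the fields described above, so treat the exact shape and casing as assumptions and inspect a real SYNC_RUN_HISTORY_REPORT event for the authoritative format:

{
  "DocumentId": "doc-12345",
  "DocumentTitle": "Multi-Asset Fund Overview",
  "ConnectorDocumentStatus": { "Status": "SUCCESS" },
  "CrawlStatus": "SUCCESS",
  "SyncStatus": "SUCCESS",
  "IndexStatus": "SUCCESS",
  "ErrorMsg": "",
  "Acl": [
    {
      "globalName": "user@email.com",
      "name": "confluence_user",
      "type": "USER",
      "isFederated": false,
      "access": "ALLOWED",
      "dataSourceId": "your-data-source-id"
    }
  ],
  "Metadata": "{\"key\":\"_last_updated_at\",\"value\":{\"dateValue\":\"2024-08-01T00:00:00Z\"}}",
  "SourceUri": "https://example.com/documents/multi-asset-fund"
}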

In the following sections, we explore different use cases for the logging feature.

Troubleshoot “Sorry, I could not find relevant information” with the new logging feature

The new document-level logging feature in Amazon Q Business can help troubleshoot common issues related to the “Sorry, I could not find relevant information to complete your request” response.

Let’s explore an example scenario. A mutual funds manager uses Amazon Q Business chat for knowledge retrieval and insights extraction across their enterprise data stores. When the fund manager asks, “What is the CAGR of the multi-asset fund?” in the Amazon Q chat, they receive the “Sorry, I could not find relevant information to complete your request” response.

As the administrator managing their Amazon Q Business application, you can troubleshoot the issue using the following approach with the new logging feature. First, you want to determine whether the multi-asset fund document was successfully indexed in the Amazon Q Business application. Next, you need to verify if the fund manager’s user account has the required permission to read the information from the multi-asset fund document. Amazon Q Business enforces the document permissions configured in its data source, and you can use this new feature to verify that the document ACL settings are synced in the Amazon Q Business application index.

You can use the following CloudWatch query string to check the document ACL settings:

filter @logStream like 'SYNC_RUN_HISTORY_REPORT/' 
and DocumentTitle = "your-document-title"
| fields DocumentTitle, ConnectorDocumentStatus.Status, Acl
| sort @timestamp desc
| limit 1

This query filter uses the per-document-level logging stream SYNC_RUN_HISTORY_REPORT, and displays the document title and its associated ACL settings. By verifying the document indexing and permissions, you can identify and resolve potential issues that may be causing the “Sorry, I could not find relevant information” response.

The following screenshot shows an example result.
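
If you prefer to run such queries programmatically rather than in the console, the following sketch runs the same Logs Insights query with boto3 and polls for the results. The log group name is a placeholder for your Amazon Q Business application log group:

import time
import boto3

logs = boto3.client("logs")

# Placeholder -- substitute your Amazon Q Business application log group.
LOG_GROUP = "/aws/qbusiness/your-application-id"

QUERY = (
    "filter @logStream like 'SYNC_RUN_HISTORY_REPORT/' "
    'and DocumentTitle = "your-document-title" '
    "| fields DocumentTitle, ConnectorDocumentStatus.Status, Acl "
    "| sort @timestamp desc "
    "| limit 1"
)

start = logs.start_query(
    logGroupName=LOG_GROUP,
    startTime=int(time.time()) - 7 * 24 * 3600,  # look back 7 days
    endTime=int(time.time()),
    queryString=QUERY,
)

# Poll until the query completes, then print each result row.
while True:
    result = logs.get_query_results(queryId=start["queryId"])
    if result["status"] in ("Complete", "Failed", "Cancelled"):
        break
    time.sleep(1)

for row in result.get("results", []):
    print({f["field"]: f["value"] for f in row})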

Determine the optimal boosting duration for recent documents using document-level reporting

When it comes to generating accurate answers, you may want to fine-tune the way Amazon Q prioritizes its content. For instance, you may prefer to boost recent documents over older ones to make sure the most up-to-date passages are used to generate an answer. To achieve this, you can use the relevance tuning feature in Amazon Q Business to boost documents based on the last updated date attribute, with a specified boosting duration. However, determining the optimal boosting period can be challenging when dealing with a large number of frequently changing documents.

You can now use the per-document-level report to obtain the _last_updated_at metadata field information for your documents, which can help you determine the appropriate boosting period. For this, you use the following CloudWatch Logs Insights query to retrieve the _last_updated_at metadata attribute for machine learning documents from the SYNC_RUN_HISTORY_REPORT log stream:

filter @logStream like 'SYNC_RUN_HISTORY_REPORT/' 
and Metadata like 'Machine Learning'
| parse Metadata '{"key":"_last_updated_at","value":{"dateValue":"*"}}' as @last_updated_at
| sort @last_updated_at desc, @timestamp desc
| dedup DocumentTitle

With the preceding query, you can gain insights into the last updated timestamps of your documents, enabling you to make informed decisions about the optimal boosting period. This approach makes sure your chat responses are generated using the most recent and relevant information, enhancing the overall accuracy and effectiveness of your Amazon Q Business implementation.

The following screenshot shows an example result.

Common document indexing observability and troubleshooting methods

In this section, we explore some common admin tasks for observing and troubleshooting document indexing using the new document-level reporting feature.

List all successfully indexed documents from a data source

To retrieve a list of all documents that have been successfully indexed from a specific data source, you can use the following CloudWatch query:

fields DocumentTitle, DocumentId, @timestamp
| filter @logStream like 'SYNC_RUN_HISTORY_REPORT/your-data-source-id/'
and ConnectorDocumentStatus.Status = "SUCCESS"
| sort @timestamp desc | dedup DocumentTitle, DocumentId

The following screenshot shows an example result. 

List all successfully indexed documents from a data source sync job

To retrieve a list of all documents that have been successfully indexed during a specific sync job, you can use the following CloudWatch query:

fields DocumentTitle, DocumentId, ConnectorDocumentStatus.Status AS IndexStatus, @timestamp
| filter @logStream like 'SYNC_RUN_HISTORY_REPORT/your-data-source-id/run-id'
and ConnectorDocumentStatus.Status = "SUCCESS"
| sort DocumentTitle

The following screenshot shows an example result.

List all failed indexed documents from a data source sync job

To retrieve a list of all documents that failed to index during a specific sync job, along with the error messages, you can use the following CloudWatch query:

fields DocumentTitle, DocumentId, ConnectorDocumentStatus.Status AS IndexStatus, ErrorMsg, @timestamp
| filter @logStream like 'SYNC_RUN_HISTORY_REPORT/your-data-source-id/run-id'
and ConnectorDocumentStatus.Status = "FAILED"
| sort @timestamp desc

The following screenshot shows an example result.

List all documents that contain a particular user’s ACL permission from an Amazon Q Business application

To retrieve a list of documents that have a specific user’s ACL permission, you can use the following CloudWatch Logs Insights query:

filter @logStream like 'SYNC_RUN_HISTORY_REPORT/' 
and Acl like 'aneesh@mydemoaws.onmicrosoft.com'
| display DocumentTitle, SourceUri

The following screenshot shows an example result.

List the ACL of an indexed document from a data source sync job

To retrieve the ACL information for a specific indexed document from a sync job, you can use the following CloudWatch Logs Insights query:

filter @logStream like 'SYNC_RUN_HISTORY_REPORT/data-source-id/run-id' 
and DocumentTitle = "your-document-title"
| display DocumentTitle, Acl

The following screenshot shows an example result.

List metadata of an indexed document from a data source sync job

To retrieve the metadata information for a specific indexed document from a sync job, you can use the following CloudWatch Logs Insights query:

filter @logStream like 'SYNC_RUN_HISTORY_REPORT/data-source-id/run-id' 
and DocumentTitle = "your-document-title"
| display DocumentTitle, Metadata

The following screenshot shows an example result.

Conclusion

The newly introduced document-level report in Amazon Q Business provides enhanced visibility and observability into the document processing lifecycle during data source sync jobs. This feature addresses a critical need expressed by customers for better troubleshooting capabilities and access to detailed information about the indexing status, metadata, and ACLs of individual documents.

The document-level report is stored in a dedicated log stream named SYNC_RUN_HISTORY_REPORT within the Amazon Q Business application CloudWatch log group. This report contains comprehensive information for each document, including the document ID, title, overall document sync status, error messages (if any), ACLs, and metadata information retrieved from the data sources. The data source sync run history page now includes an Actions column, providing access to the document-level report for each sync run. This feature significantly improves the ability to troubleshoot issues related to document ingestion, access control, and metadata relevance, and provides better visibility into the documents synced with an Amazon Q index.

To get started with Amazon Q Business, explore the Getting started guide. To learn more about data source connectors and best practices, see Configuring Amazon Q Business data source connectors.


About the authors

Aneesh Mohan is a Senior Solutions Architect at Amazon Web Services (AWS), bringing two decades of experience in creating impactful solutions for business-critical workloads. He is passionate about technology and loves working with customers to build well-architected solutions, focusing on the financial services industry, AI/ML, security, and data technologies.

Ashwin Shukla is a Software Development Engineer II on the Amazon Q for Business and Amazon Kendra engineering team, with 6 years of experience in developing enterprise software. In this role, he works on designing and developing foundational features for Amazon Q for Business.

Research Focus: Week of August 12, 2024

Welcome to Research Focus, a series of blog posts that highlights notable publications, events, code/datasets, new hires and other milestones from across the research community at Microsoft.

Register now for Research Forum on September 3

Discover what’s next in the world of AI at Microsoft Research Forum, an event series that explores recent research advances, bold new ideas, and important discussions with the global research community.

In Episode 4, you’ll learn about the latest multimodal AI models, advanced benchmarks for AI evaluation and model self-improvement, and an entirely new kind of computer for AI inference and hard optimization. Discover how these research breakthroughs and more can help advance everything from weather prediction to materials design.

Your one-time registration includes access to our live chat with researchers on the event day and additional resources to dive into the research.

Episode 4 will air Tuesday, September 3 at 9:00 AM Pacific Time.


Towards Effective AI Support for Developers: A Survey of Desires and Concerns

Talking to customers provides important insights into their challenges as well as what they love. This helps identify innovative and creative ways of solving problems (without creating new ones) and guards against ruining workflows that customers actually like. However, many AI-related development tools are currently being built without consulting developers. 

In a recent paper: Towards Effective AI Support for Developers: A Survey of Desires and Concerns, researchers from Microsoft explore developers’ perspectives on AI integration in their workflows. This study reveals developers’ top desires for AI assistance along with their major concerns. The findings of this comprehensive survey among 791 Microsoft developers help the researchers identify key areas where AI can enhance productivity and how to address developers’ concerns. The findings provide actionable insights for product teams and leaders to create AI tools that truly support developers’ needs.


SuperBench: Improving Cloud AI Infrastructure Reliability with Proactive Validation

Cloud service providers have used geographical redundancies in hardware to ensure availability of their cloud infrastructure for years. However, for AI workloads, these redundancies can inadvertently lead to hidden degradation, also known as “gray failure.” This can reduce end-to-end performance and conceal performance issues, which complicates root cause analysis for failures and regressions.

In a recent paper: SuperBench: Improving Cloud AI Infrastructure Reliability with Proactive Validation, Microsoft researchers and Azure cloud engineers introduce a proactive validation system specifically for AI infrastructure that mitigates hidden degradation caused by hardware redundancies. The paper, which won a “best paper” award at USENIX ATC, outlines SuperBench’s comprehensive benchmark suite, capable of evaluating individual hardware components and representing most real AI workloads. It includes a validator, which learns benchmark criteria to clearly pinpoint defective components, and a selector, which balances validation time and issue-related penalties, enabling optimal timing for validation execution with a tailored subset of benchmarks. Testbed evaluation and simulation show SuperBench can increase the mean time between incidents by up to 22.61x. SuperBench has been successfully deployed in Azure production, validating hundreds of thousands of GPUs over the last two years.


Virtual Voices: Exploring Individual Differences in Written and Verbal Participation in Meetings

A key component of team performance is participation among group members. Workplace meetings provide a common stage for such participation. But with the shift to remote work, many meetings are conducted virtually. In such meetings, chat offers an alternate avenue of participation, in which attendees can synchronously contribute to the conversation through writing.

In a recent paper: Virtual Voices: Exploring Individual Differences in Written and Verbal Participation in Meetings, researchers from Microsoft and external colleagues explore factors influencing participation in virtual meetings, drawing on individual differences (status characteristics theory), psychological safety perceptions, and group communication. Results of the paper, published in the Journal of Vocational Behavior, reveal gender (self-identified) and job level nuances. Women engaged more in chat, while men verbally participated more frequently, as measured using meeting telemetry. Further, men highest in job level verbally contributed the most in virtual meetings, whereas women highest in job level used the chat most frequently. Regarding the type of chats sent, women used emoji reactions more often than men, and men sent more attachments than women. Additionally, results revealed that psychological safety moderated the relationship between job level and overall chat participation, such that employees low in job level with high perceptions of psychological safety sent more chats than their counterparts. This study provides insights into communication patterns and the impact of psychological safety on participation in technology-mediated spaces.


The post Research Focus: Week of August 12, 2024 appeared first on Microsoft Research.

Derive generative AI-powered insights from ServiceNow with Amazon Q Business

Effective customer support, project management, and knowledge management are critical aspects of providing efficient customer relationship management. ServiceNow is a platform for incident tracking, knowledge management, and project management functions for software projects and has become an indispensable part of many organizations’ workflows to ensure success of the customer and the product. However, extracting valuable insights from the vast amount of data stored in ServiceNow often requires manual effort and building specialized tooling. Users such as support engineers, project managers, and product managers need to be able to ask questions about an incident or a customer, or get answers from knowledge articles in order to provide excellent customer support. Organizations use ServiceNow to manage workflows, such as IT services, ticketing systems, configuration management, and infrastructure changes across IT systems. Generative artificial intelligence (AI) provides the ability to take relevant information from a data source such as ServiceNow and provide well-constructed answers back to the user.

Building a generative AI-based conversational application integrated with relevant data sources requires an enterprise to invest time, money, and people. First, you need to build connectors to the data sources. Next, you need to index this data to make it available for a Retrieval Augmented Generation (RAG) approach, where relevant passages are delivered with high accuracy to a large language model (LLM). To do this, you need to select an index that provides the capabilities to index the content for semantic and vector search, build the infrastructure to retrieve and rank the answers, and build a feature-rich web application. Additionally, you need to hire and staff a large team to build, maintain, and manage such a system.

Amazon Q Business is a fully managed generative AI-powered assistant that can answer questions, provide summaries, generate content, and securely complete tasks based on data and information in your enterprise systems. Amazon Q Business can help you get fast, relevant answers to pressing questions, solve problems, generate content, and take action using the data and expertise found in your company’s information repositories, code, and enterprise systems (such as ServiceNow, among others). Amazon Q provides out-of-the-box native data source connectors that can index content into a built-in retriever and uses an LLM to provide accurate, well-written answers. A data source connector is a component of Amazon Q that helps integrate and synchronize data from multiple repositories into one index.

Amazon Q Business offers multiple prebuilt connectors to a large number of data sources, including ServiceNow, Atlassian Confluence, Amazon Simple Storage Service (Amazon S3), Microsoft SharePoint, Salesforce, and many more, and helps you create your generative AI solution with minimal configuration. For a full list of supported data source connectors, see Amazon Q Business connectors.

You can use the Amazon Q Business ServiceNow Online data source connector to connect to the ServiceNow Online platform and index ServiceNow entities such as knowledge articles, Service Catalogs, and incident entries, along with the metadata and document access control lists (ACLs).

This post shows how to configure the Amazon Q ServiceNow connector to index your ServiceNow platform and take advantage of generative AI searches in Amazon Q. We use an example of an illustrative ServiceNow platform to discuss technical topics related to AWS services.

Find accurate answers from content in ServiceNow using Amazon Q Business

After you integrate Amazon Q Business with ServiceNow, you can ask questions that are answered from your ServiceNow content, such as:

  • How do I troubleshoot an invalid IP configuration on a network router? – This could be derived from an internal knowledge article on that topic
  • Which form do I use to request a new email account? – This could be derived from an internal Service Catalog entry
  • Is there a previous incident on the topic of resetting cloud root user password? – This could be derived from an internal incident entry

Overview of the ServiceNow connector

A data source connector is a mechanism for integrating and synchronizing data from multiple repositories into one unified index. Amazon Q Business offers multiple data source connectors that can connect to your data sources and help you create your generative AI solution with minimal configuration.

To crawl and index contents in ServiceNow, we configure the Amazon Q Business ServiceNow connector as a data source in your Amazon Q Business application.

When you connect Amazon Q Business to a data source and initiate the data synchronization process, Amazon Q Business crawls and adds documents from the data source to its index.

Types of documents

Let’s look at what is considered a document in the context of the Amazon Q Business ServiceNow connector.

The Amazon Q Business ServiceNow connector supports crawling of the following entities in ServiceNow:

  • Knowledge articles – Each article is considered a single document
  • Knowledge article attachments – Each attachment is considered a single document
  • Service Catalog – Each catalog item is considered a single document
  • Service Catalog attachments – Each catalog attachment is considered a single document
  • Incidents – Each incident is considered a single document
  • Incident attachments – Each incident attachment is considered a single document

Although not all metadata is available at the time of writing, you can also configure field mappings. Field mappings let you map ServiceNow field names to Amazon Q index field names, and include both the default field mappings that Amazon Q creates automatically and custom field mappings that you can create and edit. Refer to the ServiceNow data source connector field mappings documentation for more information.

Authentication

The Amazon Q Business ServiceNow connector supports two authentication methods (a scripted sketch of storing these credentials follows the list):

  • Basic authentication – ServiceNow host URL, user name, and password
  • OAuth 2.0 authentication with Resource Owner Password Flow – ServiceNow host URL, user name, password, client ID, and client secret
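
In the console walkthrough later in this post, the Create and add new secret option stores these credentials for you. If you script the setup instead, a minimal sketch using AWS Secrets Manager might look like the following; the secret key names shown are illustrative assumptions, so confirm the exact keys the connector expects in its documentation.

import json

import boto3

# Hypothetical sketch: store the ServiceNow OAuth 2.0 credentials in a secret
# that the data source configuration can reference by ARN. Key names here are
# assumptions for illustration, not the connector's documented schema.
secrets = boto3.client("secretsmanager")

secret = secrets.create_secret(
    Name="QBusiness-ServiceNow-secret",  # any unique name
    SecretString=json.dumps({
        "hostUrl": "your-instance.service-now.com",
        "username": "connector-user",
        "password": "connector-password",
        "clientId": "oauth-client-id",
        "clientSecret": "oauth-client-secret",
    }),
)
print(secret["ARN"])  # reference this ARN when configuring the data source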

Supported ServiceNow versions

ServiceNow names its platform versions after cities to make it easier to differentiate between versions and their associated features. At the time of writing, the Amazon Q Business ServiceNow connector natively supports the following versions:

  • San Diego
  • Tokyo
  • Rome
  • Vancouver
  • Others

ACL crawling

To maintain a secure environment, Amazon Q Business now requires ACL and identity crawling for all connected data sources. When preparing to connect Amazon Q Business applications to AWS IAM Identity Center, you need to enable ACL indexing and identity crawling and re-synchronize your connector.

Amazon Q Business enforces data security by supporting the crawling of ACLs and identity information from connected data sources. Indexing documents with ACLs is crucial for maintaining data security, because documents without ACLs are considered public.

If you need to index documents without ACLs, make sure they’re explicitly marked as public in your data source. When connecting a ServiceNow data source, Amazon Q Business crawls ACL information, including user and group information, from your ServiceNow instance. With ACL crawling, you can filter chat responses based on the end-user’s document access level, making sure users only see information they’re authorized to access.

In ServiceNow, user IDs are derived from user email addresses and are attached to documents that carry access permissions. This mapping allows Amazon Q Business to enforce access controls based on the user’s identity and permissions within the ServiceNow environment.

Refer to How Amazon Q Business connector crawls ServiceNow ACLs for more information.
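
As a hedged illustration of ACL-aware retrieval, the following Python (boto3) sketch asks a question on behalf of a specific user; the application ID and email are placeholders, and the response is filtered to what that user’s ServiceNow ACLs allow.

import boto3

qbusiness = boto3.client("qbusiness")

# Hypothetical sketch: the response is scoped to documents this user can access.
reply = qbusiness.chat_sync(
    applicationId="your-application-id",  # placeholder
    userId="mary_major@example.com",      # identity known to IAM Identity Center
    userMessage="Is there a previous incident about resetting the cloud root user password?",
)
print(reply["systemMessage"])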

Overview of solution

Amazon Q is a generative AI-powered assistant that helps customers answer questions, provide summaries, generate content, and complete tasks based on data in their company repository. It also serves as a learning tool for AWS users who want to ask questions about services and best practices in the cloud. You can use the Amazon Q connector for ServiceNow Online to crawl your ServiceNow domain and index service tickets, guides, and community posts so you can discover answers to your questions faster.

Amazon Q understands and respects your existing identities, roles, and permissions and uses this information to personalize its interactions. If a user doesn’t have permission to access data without Amazon Q, they can’t access it using Amazon Q either. The following table outlines which documents each user is authorized to access for our use case. For a complete list of ServiceNow roles, refer to the ServiceNow documentation. The documents used in this example are a subset of AWS public documents from re:Post, preloaded into ServiceNow with access restrictions.

| # | First Name | Last Name | Document types authorized for access | ServiceNow roles |
|---|------------|-----------|----------------------------------------|------------------|
| 1 | John | Stiles | Knowledge Articles, Service Catalog, and Incidents | knowledge, catalog, incident_manager |
| 2 | Mary | Major | Knowledge Articles and Service Catalog | knowledge, catalog |
| 3 | Mateo | Jackson | Incidents | incident_manager |

In this post, we show how to use the Amazon Q Business ServiceNow connector to index data from your ServiceNow platform for intelligent search.

Prerequisites

For this walkthrough, you should have the following prerequisites:

  • An AWS account with access to the Amazon Q Business console
  • An AWS IAM Identity Center instance set up (Amazon Q Business applications require it as the identity provider)
  • A ServiceNow Online instance with administrative access to create an OAuth2 application registry entry

Configure your ServiceNow connection

In your ServiceNow platform, complete the following steps to create an OAuth2 client that your Amazon Q Business application can use to connect:

  1. In ServiceNow, on the All menu, expand System OAuth and choose Application Registry.
  2. Choose New.
  3. Choose Create an OAuth API endpoint for external clients.
  4. For Name, enter a unique name.
  5. Fill out the remaining parameters according to your requirements and choose Submit.

Note down the client ID and client secret to use in later steps.

Create an Amazon Q Business application

Complete the following steps to create an Amazon Q Business application:

  1. On the Amazon Q console, choose Getting started in the navigation pane.
  2. Under Amazon Q Business Pro, choose Q Business to subscribe.
  3. On the Amazon Q Business console, choose Get started.
  4. On the Applications page, choose Create application.
  5. On the Create application page, provide your application details.
  6. Choose Create.

Make sure the Amazon Q Business application is connected to IAM Identity Center. For more information, see Setting up Amazon Q Business with IAM Identity Center as identity provider.

  7. On the Select retriever page, select Use native retriever for your retriever and select Starter for the index provisioning type.
  8. Choose Next.
  9. On the Connect data sources page, choose Next without connecting to any data source (we do that in the next section).
  10. On the Add groups and users page, choose Add groups and users.
  11. Add any groups and users to access the application.

For more details, refer to Adding users and subscriptions to an Amazon Q Business application.

  12. Choose Create application.
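
If you prefer to script these steps, the following minimal sketch (Python with boto3) creates an application, a Starter index, and a native retriever. The IAM Identity Center instance ARN and display names are placeholder assumptions; confirm the parameters against the Amazon Q Business API reference.

import boto3

qbusiness = boto3.client("qbusiness")

# Hypothetical sketch mirroring the console steps above.
app = qbusiness.create_application(
    displayName="servicenow-demo-app",
    identityCenterInstanceArn="arn:aws:sso:::instance/ssoins-EXAMPLE",  # placeholder
)
app_id = app["applicationId"]

# Starter index provisioning, as selected on the Select retriever page.
index = qbusiness.create_index(
    applicationId=app_id,
    displayName="servicenow-demo-index",
    type="STARTER",
)

# Native retriever backed by the index above.
qbusiness.create_retriever(
    applicationId=app_id,
    displayName="native-retriever",
    type="NATIVE_INDEX",
    configuration={"nativeIndexConfiguration": {"indexId": index["indexId"]}},
)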

Configure the data source using the Amazon Q ServiceNow Online connector

Now let’s configure the ServiceNow Online data source connector with the Amazon Q application that we created in the previous section.

  1. On the Amazon Q console, navigate to the Applications page and choose the application you just created.
  2. In the Data sources section, choose Add data source.
  3. Search for and choose the ServiceNow Online connector.
  4. Provide the name, ServiceNow host, and version information.

If your ServiceNow version isn’t on the dropdown menu, choose Others.

  5. Choose Create and add new secret to create a new secret to connect with the ServiceNow platform account.
  6. Provide the connection information based on the OAuth2 endpoint created in ServiceNow previously, then choose Save.
  7. Leave the defaults for the VPC and Identity crawler settings.
  8. For IAM role, choose Create a new service role (Recommended) and keep the default role name.
  9. Choose the entities that you want to bring over from ServiceNow.

This example shows knowledge articles, Service Catalog items, and incidents. The Filter query option helps curate the list of items that you want to bring into Amazon Q. When you use a query, you can specify multiple knowledge bases, including private knowledge bases. For more details on how to build ServiceNow filters, refer to Filters. For additional query building resources, see Specifying documents to index with a query.

  10. For Sync mode, select Full sync.
  11. For Sync run schedule, choose Run on demand.
  12. Leave the remaining options as default and choose Add data source.
  13. When the data source status shows as Active, initiate data synchronization by choosing Sync now.

Wait until the synchronization status changes to Completed before continuing to the next steps.

For information about common issues encountered and related troubleshooting steps, refer to Troubleshooting data source connectors.
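
Scripted equivalents of Sync now and the wait-for-Completed step might look like the following sketch (Python with boto3). The IDs are placeholders, the history ordering is assumed to be most-recent-first, and the terminal status strings should be confirmed against the API reference.

import time

import boto3

qbusiness = boto3.client("qbusiness")

# Placeholders for the application, index, and data source created earlier.
APP_ID, INDEX_ID, DS_ID = "your-application-id", "your-index-id", "your-datasource-id"

# Equivalent of choosing Sync now in the console.
qbusiness.start_data_source_sync_job(
    applicationId=APP_ID, indexId=INDEX_ID, dataSourceId=DS_ID
)

# Poll the most recent sync run until it reaches a terminal status.
while True:
    history = qbusiness.list_data_source_sync_jobs(
        applicationId=APP_ID, indexId=INDEX_ID, dataSourceId=DS_ID
    )["history"]
    status = history[0]["status"] if history else "UNKNOWN"
    if status in ("SUCCEEDED", "FAILED", "INCOMPLETE", "ABORTED"):
        print(f"Sync finished with status {status}")
        break
    time.sleep(30)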

Run queries with the Amazon Q web experience

Now that the data synchronization is complete, you can start exploring insights from Amazon Q. You have three users for testing: John with admin access, Mary with access to knowledge articles and the Service Catalog, and Mateo with access only to incidents. In the following steps, you sign in as each user and ask various questions to see what responses Amazon Q provides based on the document types permitted for their respective groups. You also test edge cases where users try to access information from restricted sources to validate the access control functionality.

  1. On the details page of the new Amazon Q application, navigate to the Web experience settings tab and choose the link under Deployed URL. This opens a new tab with a preview of the UI and options to customize according to your needs.
  2. Log in to the application as John Stiles first, using the credentials for the user that you added to the Amazon Q application.
  3. After the login is successful, choose the application that you just created.
  4. From there, you’ll be redirected to the Amazon Q assistant UI, where you can start asking questions using natural language and get insights from your ServiceNow platform.
  5. Let’s run some queries to see how Amazon Q answers questions related to the synchronized data. John has access to all ServiceNow document types. When asked “How do I upgrade my EKS cluster to the latest version”, Amazon Q provides a summary pulling information from the related knowledge article, highlighting the sources at the end of each excerpt.
  6. Still logged in as John, when asked “What is Amazon QLDB?”, Amazon Q provides a summary pulling information from the related ServiceNow incident.
  7. Sign out as user John. Start a new incognito browser session or use a different browser, copy the web experience URL, and sign in as user Mary. Repeat these steps each time you need to sign in as a different user. Mary only has access to knowledge articles and the Service Catalog, with no incident access. When asked “How do I perform vector search with Amazon Redshift”, Amazon Q provides a summary pulling information from the related knowledge article, highlighting the source.
  8. However, when asked “What is Amazon QLDB?”, Amazon Q responds that it could not find relevant information. This is because Mary doesn’t have access to ServiceNow incidents, which are the only place where the answer to that question can be found.
  9. Sign out as user Mary. Start a new incognito browser session or use a different browser, copy the web experience URL, and sign in as user Mateo. Mateo only has access to incidents, with no knowledge article or Service Catalog access. When asked “What is Amazon QLDB?”, Amazon Q provides a summary pulling information from the related incident, highlighting the source.
  10. However, when asked “How do I perform vector search with Amazon Redshift?”, Amazon Q responds that it could not find relevant information. This is because Mateo doesn’t have access to ServiceNow knowledge articles, which are the only place where the answer to this question can be found.

Try out the assistant with additional queries, such as:

  • How do you set up a new BlackBerry device?
  • How do I set up S3 object replication?
  • How do I resolve empty log issues in CloudWatch?
  • How do I troubleshoot 403 Access Denied errors from Amazon S3?

Frequently asked questions

In this section, we provide guidance for frequently asked questions.

Amazon Q Business is unable to answer your questions

If you get the response “Sorry, I could not find relevant information to complete your request,” this may be due to a few reasons:

  • No permissions – ACLs applied to your account don’t allow you to query certain data sources. If this is the case, reach out to your application administrator to make sure your ACLs are configured to access the data sources.
  • Email ID doesn’t match user ID – In rare scenarios, a user may have a different email ID associated with Amazon Q in IAM Identity Center than what is associated in the ServiceNow user profile. In such cases, make sure the Amazon Q user profile is updated to recognize the ServiceNow email ID through the update-user command in the AWS Command Line Interface (AWS CLI) or the related API call (see the sketch at the end of this section).
  • Data connector sync failed – Your data connector may have failed to sync information from the source to the Amazon Q Business application. Verify the data connector’s sync run schedule and sync history to confirm the sync is successful.
  • Empty or private ServiceNow projects – Private or empty projects aren’t crawled during the sync run.

If none of these reasons apply to your use case, open a support case and work with your technical account manager to get this resolved.
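
For the email mismatch case above, a hedged sketch of the update-user operation via boto3 follows; the application, index, and data source IDs, as well as the alias shape, are illustrative assumptions to adapt from the API reference.

import boto3

qbusiness = boto3.client("qbusiness")

# Hypothetical sketch: add a user alias so the ServiceNow profile email maps
# to the IAM Identity Center user that Amazon Q Business knows about.
qbusiness.update_user(
    applicationId="your-application-id",         # placeholder
    userId="john_stiles@example.com",            # ID from IAM Identity Center
    userAliasesToUpdate=[
        {
            "indexId": "your-index-id",          # placeholder
            "dataSourceId": "your-datasource-id",
            "userId": "john.stiles@example.com"  # email in the ServiceNow profile
        }
    ],
)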

How to generate responses from authoritative data sources

If you want Amazon Q Business to only generate responses from authoritative data sources, you can configure this using the Amazon Q Business application global controls under Admin controls and guardrails.

  1. Log in to the Amazon Q Business console as an Amazon Q Business application administrator.
  2. Navigate to the application and choose Admin controls and guardrails in the navigation pane.
  3. Choose Edit in the Global controls section to set these options.

For more information, refer to Admin controls and guardrails in Amazon Q Business.


Amazon Q Business responds using old (stale) data even though your data source is updated

Each Amazon Q Business data connector can be configured with its own sync run schedule and frequency. Verifying the sync status and sync schedule for your data connector reveals when the last sync ran successfully. Your data connector’s sync run schedule might be set to sync at a scheduled time of day, week, or month; if it’s set to run on demand, the sync has to be invoked manually. When the sync run is complete, verify the sync history to make sure the run successfully synced all new content. Refer to Sync run schedule for more information about each option.

Clean up

To avoid incurring future charges, clean up any resources created as part of this solution. Delete the Amazon Q ServiceNow Online connector data source, the OAuth API endpoint created in ServiceNow, and the Amazon Q Business application. Also delete the user management setup in IAM Identity Center.

Conclusion

In this post, we discussed how to configure the Amazon Q ServiceNow Online connector to crawl and index service tickets, community posts, and knowledge guides. We showed how generative AI-based search in Amazon Q enables your business leaders and agents to discover insights from your ServiceNow content more quickly. This is all available through a user-friendly interface, with Amazon Q Business doing the undifferentiated heavy lifting.

To learn more about the Amazon Q Business connector for ServiceNow Online, refer to Connecting ServiceNow Online to Amazon Q Business.


About the Authors

Prabhakar Chandrasekaran is a Senior Technical Account Manager with AWS Enterprise Support. Prabhakar enjoys helping customers build cutting-edge AI/ML solutions on the cloud. He also works with enterprise customers providing proactive guidance and operational assistance, helping them improve the value of their solutions when using AWS. Prabhakar holds six AWS and seven other professional certifications. With over 20 years of professional experience, Prabhakar was a data engineer and a program leader in the financial services space prior to joining AWS.

Lakshmi Dogiparti is a Software Development Engineer at Amazon Web Services. She works on the Amazon Q and Amazon Kendra connector design, development, integration, and test operations.

Vijai Gandikota is a Principal Product Manager in the Amazon Q and Amazon Kendra organization of Amazon Web Services. He is responsible for the Amazon Q and Amazon Kendra connectors, ingestion, security, and other aspects of the Amazon Q and Amazon Kendra services.

Read More

Decoding NVIDIA Edify — The Technology That Helps Developers Create Custom Models Trained on Their Data


Editor’s note: This post is part of the AI Decoded series, which demystifies AI by making the technology more accessible, and showcases new hardware, software, tools and accelerations for RTX PC users.

Content generators — whether producing language, 2D images, 3D models or videos — are giving the creative community tools that bring visions to life faster.

To help developers build these new generative AI tools, NVIDIA has set up NVIDIA AI Foundry. It helps companies train generative AI models on their own licensed data using NVIDIA Edify, a multimodal AI architecture that can use simple text prompts to generate images, videos, 3D assets, 360-degree high-dynamic-range imaging and physically based rendering (PBR) materials. Using AI Foundry, companies can train bespoke AI models to generate any of these assets.

Key elements of Edify include its ability to generate multiple types of content, its superior training efficiency, which allows it to produce high-quality content while trained on fewer images, and its ability to fine-tune models to style-match or learn characters or objects.

One of the best examples of services built on NVIDIA AI Foundry and Edify is Generative AI by Getty Images, a commercially safe generative photography service. The combination of AI Foundry and Edify allows users to control their training datasets, so they can create models that fit their needs.

To avoid copyright issues, Getty Images used Edify to train the service on its own licensed content, ensuring that no famous characters or products are in the dataset. The company also shares part of the profits with the contributors, driving a new revenue stream for creators who contribute to the model.

Asset Generation With Edify 

Edify can be trained to generate a variety of asset types, including 2D images, 3D assets and 360-degree HDRi environment maps.

Edify Image can generate four high-quality 1K images in around six seconds, doubling the performance of the previous model. Images can also be converted to 4K with a generative upscaler that adds additional details.

Getty Images 4K image generation trained on NVIDIA Edify using commercially safe creative libraries.

Images are highly controllable thanks to advanced prompt adherence, camera controls to specify focal length or depth of field, and ControlNets to guide the generation. The ControlNets include Sketch, which lets users provide a sketch for the generated image to follow, and Depth, which copies the composition of a reference image.

Images can also be edited with Edify Image. InPaint allows users to add or modify content in an image, Replace — a strict InPaint — can change details such as clothing, and OutPaint can expand an image to match different aspect ratios. All of this is simplified with Segment, a feature that can mask objects with just a text prompt.

Edify can also create artist-ready 3D meshes. The meshes come with clean quads-based topology, up to 4K PBR materials and automatic UV mapping for easier texture editing. A fast preview mode provides results in as few as 10 seconds, which can then be turned into a full 3D mesh.

Meshes are perfect for prototyping scenes, generating background objects for set decoration or as a head start for 3D sculpting.

Edify 360 HDRi generates environment maps of natural landscapes that can be used to light a scene, for reflections and even as a background. The model can generate up to 16K HDRi images from text or image prompts. With a desired backplate in hand, users can create a custom HDRi to match instead of spending hours looking for one.

High dynamic range, 360-degree panoramas from text prompts.

Edify’s multimodal capability is unique, enabling advanced workflows that combine different asset types. Used together with an agent, for instance, Edify allows users to prototype a full scene in a couple of minutes with a simple text prompt — like in the NVIDIA Research SIGGRAPH demo that showcased the assistive 3D world-building capabilities of NVIDIA Edify-powered models and the NVIDIA Omniverse platform.

Another use case is to combine Edify 3D and 360 HDRi with Image to give users full control of image generation. By generating the scene in 3D, artists can move objects around and frame their desired shot — and then use Edify Image to turn the prototype into a photorealistic image.

Generative AI by Getty Images 

Getty Images is one of the largest content service providers and suppliers of creative visuals, editorial photography, video and music — and is one of the first places people turn to discover, purchase and share powerful visual content from the world’s best photographers and videographers.

Getty Images used NVIDIA AI Foundry to train an NVIDIA Edify Image model to power its generative AI service. Available through Generative AI by Getty Images for enterprises and Generative AI by iStock for small businesses and amateur creators, the service allows users to generate and modify images using models powered by NVIDIA Edify.

Generative AI by Getty Images (or iStock) offers a variety of licensed content.

Getty Images and iStock recently updated to the latest version of Edify Image, enabling faster generations and higher prompt adherence and exposing Camera Controls.

Updated camera controls in Generative AI by Getty Images.

Users can now also use the generative AI tools on preshot creative content, allowing them to edit and modify iStock’s library of visuals to rapidly iterate and perfect content. Those same capabilities will soon be available on Gettyimages.com.

Test drive Generative AI by Getty Images on ai.nvidia.com.

Generative AI is transforming gaming, videoconferencing and interactive experiences of all kinds. Make sense of what’s new and what’s next by subscribing to the AI Decoded newsletter.

Read More

Intelligent healthcare forms analysis with Amazon Bedrock


Generative artificial intelligence (AI) provides an opportunity for improvements in healthcare by combining and analyzing structured and unstructured data across previously disconnected silos. Generative AI can help raise the bar on efficiency and effectiveness across the full scope of healthcare delivery.

The healthcare industry generates and collects a significant amount of unstructured textual data, including clinical documentation such as patient information, medical history, and test results, as well as non-clinical documentation like administrative records. This unstructured data can impact the efficiency and productivity of clinical services, because it’s often found in various paper-based forms that can be difficult to manage and process. Streamlining the handling of this information is crucial for healthcare providers to improve patient care and optimize their operations.

Handling large volumes of data, extracting unstructured data from multiple paper forms or images, and comparing it with the standard or reference forms can be a long and arduous process, prone to errors and inefficiencies. However, advancements in generative AI solutions have introduced automated approaches that offer a more efficient and reliable solution for comparing multiple documents.

Amazon Bedrock is a fully managed service that makes foundation models (FMs) from leading AI startups and Amazon available through an API, so you can choose from a wide range of FMs to find the model that is best suited for your use case. Amazon Bedrock offers a serverless experience, so you can get started quickly, privately customize FMs with your own data, and quickly integrate and deploy them into your applications using the AWS tools without having to manage the infrastructure.

In this post, we explore using the Anthropic Claude 3 large language model (LLM) on Amazon Bedrock. Amazon Bedrock provides access to several LLMs, such as Anthropic Claude 3, which can be used to generate semi-structured data relevant to the healthcare industry. This can be particularly useful for creating various healthcare-related forms, such as patient intake forms, insurance claim forms, or medical history questionnaires.

Solution overview

To provide a high-level understanding of how the solution works before diving deeper into the specific elements and the services used, we discuss the architectural steps required to build our solution on AWS. We illustrate the key elements of the solution, giving you an overview of the various components and their interactions.

We then examine each of the key elements in more detail, exploring the specific AWS services that are used to build the solution, and discuss how these services work together to achieve the desired functionality. This provides a solid foundation for further exploration and implementation of the solution.

Part 1: Standard forms: Data extraction and storage

The following diagram highlights the key elements of a solution for data extraction and storage with standard forms.

Figure 1: Architecture – Standard Form – Data Extraction & Storage.

The standard form processing steps are as follows:

  1. A user uploads images of paper forms (PDF, PNG, JPEG) to Amazon Simple Storage Service (Amazon S3), a highly scalable and durable object storage service.
  2. Amazon Simple Queue Service (Amazon SQS) is used as the message queue. Whenever a new form is loaded, an event is invoked in Amazon SQS.
    1. If an S3 object is not processed, then after two tries it is moved to the SQS dead-letter queue (DLQ), which can be configured further with an Amazon Simple Notification Service (Amazon SNS) topic to notify the user through email.
  3. The SQS message invokes an AWS Lambda function. The Lambda function is responsible for processing the new form data.
  4. The Lambda function reads the new S3 object and passes it to the Amazon Textract API to process the unstructured data and generate a hierarchical, structured output (see the sketch after this list). Amazon Textract is an AWS service that can extract text, handwriting, and data from scanned documents and images. This approach allows for the efficient and scalable processing of complex documents, enabling you to extract valuable insights and data from various sources.
  5. The Lambda function passes the converted text to Anthropic Claude 3 on Amazon Bedrock to generate a list of questions.
  6. Lastly, the Lambda function stores the question list in Amazon S3.
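
As a minimal sketch of the Textract call in step 4 (assuming a synchronous call on a single-page image; multipage PDFs would use the asynchronous StartDocumentAnalysis API instead), the following Python shows how the raw text could be collected; the bucket and object names are placeholders.

import boto3

textract = boto3.client("textract")

# Synchronous analysis of a single-page form stored in Amazon S3.
response = textract.analyze_document(
    Document={"S3Object": {"Bucket": "forms-bucket", "Name": "intake-form.png"}},
    FeatureTypes=["FORMS", "TABLES"],
)

# Collect the LINE blocks into the raw text passed to the LLM prompt.
raw_text = "\n".join(
    block["Text"] for block in response["Blocks"] if block["BlockType"] == "LINE"
)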

Amazon Bedrock API call to extract form details

We call an Amazon Bedrock API twice in the process for the following actions:

  • Extract questions from the standard or reference form – The first API call is made to extract a list of questions and sub-questions from the standard or reference form. This list serves as a baseline or reference point for comparison with other forms. By extracting the questions from the reference form, we can establish a benchmark against which other forms can be evaluated.
  • Extract questions from the custom form – The second API call is made to extract a list of questions and sub-questions from the custom form or the form that needs to be compared against the standard or reference form. This step is necessary because we need to analyze the custom form’s content and structure to identify its questions and sub-questions before we can compare them with the reference form.

By having the questions extracted and structured separately for both the reference and custom forms, the solution can then pass these two lists to the Amazon Bedrock API for the final comparison step. This approach provides the following benefits:

  • Accurate comparison – The API has access to the structured data from both forms, making it straightforward to identify matches, mismatches, and provide relevant reasoning
  • Efficient processing – Separating the extraction process for the reference and custom forms helps avoid redundant operations and optimizes the overall workflow
  • Observability and interoperability – Keeping the questions separate enables better visibility, analysis, and integration of the questions from different forms
  • Hallucination avoidance – By following a structured approach and relying on the extracted data, the solution helps avoid generating or hallucinating content, providing integrity in the comparison process

This two-step approach uses the capabilities of the Amazon Bedrock API while optimizing the workflow, enabling accurate and efficient form comparison, and promoting observability and interoperability of the questions involved.

See the following code (API Call):

import json

import boto3
from botocore.config import Config


def get_response_from_claude3(context, prompt_data):
    # Build the Anthropic Messages API request body for Amazon Bedrock.
    body = json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 4096,
        "system": """You are an expert form analyzer and can understand different sections and subsections within a form and can find all the questions being asked. You can find similarities and differences at the question level between different types of forms.""",
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text",
                     "text": f"""Given the following document(s): {context} \n {prompt_data}"""},
                ],
            }
        ],
    })
    modelId = "anthropic.claude-3-sonnet-20240229-v1:0"
    # Allow long-running generations before the HTTP read times out.
    config = Config(read_timeout=1000)
    bedrock = boto3.client("bedrock-runtime", config=config)
    response = bedrock.invoke_model(body=body, modelId=modelId)
    response_body = json.loads(response.get("body").read())
    # The Messages API returns a list of content blocks; take the first text block.
    answer = response_body.get("content")[0].get("text")
    return answer

User prompt to extract fields and list them

We provide the following user prompt to Anthropic Claude 3 to extract the fields from the raw text and list them for comparison, as shown in step 3B of Figure 3 (Data Extraction & Form Field comparison).

get_response_from_claude3(response, """Create a summary of the different sections in the form, then
                                       for each section create a list of all questions and sub questions asked in the
                                       whole form and group by section including signature, date, reviews and approvals.
                                       Then concatenate all questions and return a single numbered list. Be very detailed.""")

The following figure illustrates the output from Amazon Bedrock with a list of questions from the standard or reference form.

Figure 2:  Standard Form Sample Question List

Store this question list in Amazon S3 so it can be used for comparison with other forms, as shown in Part 2 of the process below.

Part 2: Data extraction and form field comparison

The following diagram illustrates the architecture for the next step, which is data extraction and form field comparison.

Figure 3: Data Extraction & Form Field comparison

Steps 1 and 2 are similar to those in Figure 1, but are repeated for the forms to be compared against the standard or reference forms. The next steps are as follows:

  1. The SQS message invokes a Lambda function. The Lambda function is responsible for processing the new form data.
    1. The raw text is extracted by Amazon Textract using a Lambda function. The extracted raw text is then passed to Step 3B for further processing and analysis.
    2. Anthropic Claude 3 generates a list of questions from the custom form that needs to be compared with the standard form. Both question lists are then passed to Amazon Bedrock, which compares the extracted raw text with the standard or reference raw text to identify differences and anomalies, provides insights and recommendations relevant to the healthcare industry by respective category, and generates the final output in JSON format for further processing and dashboarding. The Amazon Bedrock API call and user prompt from Step 5 (Figure 1: Architecture – Standard Form – Data Extraction & Storage) are reused for this step to generate a question list from the custom form.

We discuss Steps 4–6 in the next section.

The following screenshot shows the output from Amazon Bedrock with a list of questions from the custom form.

Figure 4:  Custom Form Sample Question List

Final comparison using Anthropic Claude 3 on Amazon Bedrock

The following examples show the results from the comparison exercise using Amazon Bedrock with Anthropic Claude 3, showing one that matched and one that didn’t match with the reference or standard form.

The following is the user prompt for forms comparison:

categories = ['Personal Information','Work History','Medical History','Medications and Allergies','Additional Questions','Physical Examination','Job Description','Examination Results']
forms = f"Form 1 : {reference_form_question_list}, Form 2 : {custom_form_question_list}"

The following is the first call:

match_result = get_response_from_claude3(forms, f"""Go through questions and sub questions {start}-{processed} in Form 2 and for each one return whether it matches any question/sub question/field in Form 1 in terms of meaning and context, with reasoning, or does not match any question/sub question/field in Form 1, with reasoning. Treat each sub question as its own question; the final output should be a numbered list with the same length as the number of questions and sub questions in Form 2. Be concise.""")

The following is the second call:

get_response_from_claude3(match_result,
                          f"""Go through all the questions and sub questions in the Form 2 results and turn this into a JSON object called 'All Questions' which has the keys 'Question' with only the matched or unmatched question, 'Match' with valid values of yes or no, 'Reason' which is the reason for the match or no match, and 'Category' placing the question in one of the categories in this list: {categories}. Do not omit any questions in the output.""")

The following screenshot shows the questions matched with the reference form.

The following screenshot shows the questions that didn’t match with the reference form.

The steps from the preceding architecture diagram continue as follows:

  4. The SQS queue invokes a Lambda function.
  5. The Lambda function invokes an AWS Glue job and monitors it for completion.
    1. The AWS Glue job processes the final JSON output from the Amazon Bedrock model into tabular format for reporting.
  6. Amazon QuickSight is used to create interactive dashboards and visualizations, allowing healthcare professionals to explore the analysis, identify trends, and make informed decisions based on the insights provided by Anthropic Claude 3.

The following screenshot shows a sample QuickSight dashboard.


Next steps

Many healthcare providers are investing in digital technology, such as electronic health records (EHRs) and electronic medical records (EMRs), to streamline data collection and storage, allowing appropriate staff to access records for patient care. Additionally, digitized health records provide the convenience of electronic forms and remote data editing for patients. Electronic health records offer a more secure and accessible record system, reducing data loss and facilitating data accuracy. Similar solutions can help capture the data from these paper forms directly into EHRs.

Conclusion

Generative AI solutions like Amazon Bedrock with Anthropic Claude 3 can significantly streamline the process of extracting and comparing unstructured data from paper forms or images. By automating the extraction of form fields and questions, and intelligently comparing them against standard or reference forms, this solution offers a more efficient and accurate approach to handling large volumes of data. The integration of AWS services like Lambda, Amazon S3, Amazon SQS, and QuickSight provides a scalable and robust architecture for deploying this solution. As healthcare organizations continue to digitize their operations, such AI-powered solutions can play a crucial role in improving data management, maintaining compliance, and ultimately enhancing patient care through better insights and decision-making.


About the Authors

Satish Sarapuri is a Sr. Data Architect, Data Lake at AWS. He helps enterprise-level customers build high-performance, highly available, cost-effective, resilient, and secure generative AI, data mesh, data lake, and analytics platform solutions on AWS, through which customers can make data-driven decisions to gain impactful outcomes for their business and help them on their digital and data transformation journey. In his spare time, he enjoys spending time with his family and playing tennis.

Harpreet Cheema is a Machine Learning Engineer at the AWS Generative AI Innovation Center. He is very passionate in the field of machine learning and in tackling data-oriented problems. In his role, he focuses on developing and delivering machine learning focused solutions for customers across different domains.

Deborah Devadason is a Senior Advisory Consultant in the Professional Service team at Amazon Web Services. She is a results-driven and passionate Data Strategy specialist with over 25 years of consulting experience across the globe in multiple industries. She leverages her expertise to solve complex problems and accelerate business-focused journeys, thereby creating a stronger backbone for the digital and data transformation journey.

Read More