PyTorch Shanghai Meetup Notes
Summary
We were honored to host the PyTorch Shanghai Meetup on August 15, 2024. The Meetup received great attention from the industry. We invited senior PyTorch developers from Intel and Huawei as guest speakers, who shared their valuable experience and the latest technical trends. The event also attracted PyTorch enthusiasts from many technology companies and well-known universities. In total, more than 40 participants gathered to discuss and exchange the latest applications and technological advances of PyTorch.
This Meetup not only strengthened the connections between PyTorch community members, but also provided a platform for local AI technology enthusiasts to learn, communicate, and grow. We look forward to the next gathering and to continuing to promote the development of PyTorch technology in the region.
1. PyTorch Foundation Updates
PyTorch Board member Fred Li shared the latest updates in the PyTorch community. He reviewed the development history of the PyTorch community, explained in detail the growth path of community developers, encouraged everyone to delve deeper into technology, and introduced matters related to the upcoming PyTorch Conference 2024.
2. Intel’s Journey with PyTorch: Democratizing AI with Ubiquitous Hardware and Open Software
PyTorch CPU module maintainer Jiong Gong shared Intel’s six years of technical contributions to PyTorch and its ecosystem, and explored the remarkable advancements Intel has made in both software and hardware to democratize AI, ensuring accessibility and optimizing performance across a diverse range of Intel hardware platforms.
3. Exploring Multi-Backend Support in PyTorch Ecosystem: A Case Study of Ascend
Fengchun Hua, a PyTorch contributor from Huawei, took Huawei Ascend NPU as an example to demonstrate the latest achievements in multi-backend support for PyTorch applications. He introduced the hardware features of Huawei Ascend NPU and the infrastructure of CANN (Compute Architecture for Neural Networks), and explained the key achievements and innovations in native support work. He also shared the current challenges and the next work plan.
Yuanhao Ji, another PyTorch contributor from Huawei, then introduced the Autoload Device Extension proposal, explained its implementation details and value in improving the scalability of PyTorch, and introduced the latest work progress of the PyTorch Chinese community.
4. Intel XPU Backend for Inductor
Eikan is a PyTorch contributor from Intel. He focuses on torch.compile stack for both Intel CPU and GPU. In this session, Eikan presented Intel’s efforts on torch.compile for Intel GPUs. He provided updates on the current status of Intel GPUs within PyTorch, covering both functionality and performance aspects. Additionally, Eikan used Intel GPU as a case study to demonstrate how to integrate a new backend into the Inductor using Triton.
5. PyTorch PrivateUse1 Evolution Approaches and Insights
Jiawei Li, a PyTorch collaborator from Huawei, introduced PyTorch’s Dispatch mechanism and emphasized the limitations of DispatchKey. He took Huawei Ascend NPU as an example to share best practices for the PyTorch PrivateUse1 mechanism. He mentioned that while using the PrivateUse1 mechanism, Huawei also submitted many improvements and bug fixes for the mechanism to the PyTorch community. He also mentioned that because upstream CI does not yet support out-of-tree devices, changes in upstream code may affect their stability and quality, and this insight was recognized by everyone.
How Vidmob is using generative AI to transform its creative data landscape
This post was co-written with Mickey Alon from Vidmob.
Generative artificial intelligence (AI) can be vital for marketing because it enables the creation of personalized content and optimizes ad targeting with predictive analytics. Specifically, such data analysis can predict trends and public sentiment while also personalizing customer journeys, ultimately leading to more effective marketing and better business results. For example, insights from creative data (advertising analytics) on campaign performance can not only uncover which creative works best but also help you understand the reasons behind its success.
In this post, we illustrate how Vidmob, a creative data company, worked with the AWS Generative AI Innovation Center (GenAIIC) team to uncover meaningful insights at scale within creative data using Amazon Bedrock. The collaboration involved the following steps:
- Use natural language to analyze and generate insights on performance data through different channels (such as TikTok, Meta, and Pinterest)
- Generate research information for context such as the value proposition, competitive differentiators, and brand identity of a specific client
Vidmob background
Vidmob is the Creative Data company that uses creative analytics and scoring software to inform creative and media decisions for marketers and agencies as they strive to drive business results through improved creative effectiveness. Vidmob’s influence lies in its partnerships and native integrations across the digital ad landscape, its dozens of proprietary models, and its reinforcement learning from human feedback (RLHF) model for creativity.
Vidmob’s AI journey
Vidmob uses AI to not only enhance its creative data capabilities, but also pioneer advancements in the field of RLHF for creativity. By seamlessly integrating AI models such as Amazon Rekognition into its innovative stack, Vidmob has continually evolved to stay at the forefront of the creative data landscape.
This journey extends beyond the mere adoption of AI; Vidmob has consistently recognized the importance of curating a differentiated dataset to maximize the potential of its AI-driven solutions. Understanding the intrinsic value of data network effects, Vidmob constructed a product and operational system architecture designed to be the industry’s most comprehensive RLHF solution for marketing creatives.
Use case overview
Vidmob aims to revolutionize its analytics landscape with generative AI. The central goal is to empower customers to directly query and analyze their creative performance data through a chat interface. Over the past 8 years, Vidmob has amassed a wealth of data that provides deep insights into the value of creatives in ad campaigns and strategies for enhancing performance. Vidmob envisions making it effortless for customers to utilize this data to generate insights and make informed decisions about their creative strategies.
Currently, Vidmob and its customers rely on creative strategists to address these questions at the brand level, complemented by machine-generated normative insights at the industry or environment level. This process can take creative strategists many hours. To enhance the customer experience, Vidmob decided to partner with AWS GenAIIC to deliver these insights more quickly and automatically.
Vidmob partnered with AWS GenAIIC to analyze ad data to help Vidmob creative strategists understand the performance of customer ads. Vidmob’s ad data consists of tags created from Amazon Rekognition and other internal models. The chatbot built by AWS GenAIIC would take in this tag data and retrieve insights.
The following were key success criteria for the collaboration:
- Analyze and generate insights in a natural language based on performance data and other metadata
- Generate client company information to be used as initial research for a creative
- Create a scalable solution using Amazon Bedrock that can be integrated with Vidmob’s performance data
However, there were a few challenges in achieving these goals:
- Large language models (LLMs) are limited in the volume of data they can analyze to generate insights without hallucination. They are designed to predict and summarize text-based information and are less optimized for computing creative data at a terabyte scale.
- LLMs don’t have straightforward automatic evaluation techniques. Therefore, human evaluation was required for insights generated by the LLM.
- There are 50–100 creative questions that creative strategists would normally analyze, which means an asynchronous mechanism was needed that would queue up these prompts, aggregate them, and provide the top-most meaningful insights.
Solution overview
The AWS team worked with Vidmob to build a serverless architecture for handling incoming questions from customers. They used the following services in the solution:
The following diagram illustrates the high-level workflow of the current solution:
The workflow consists of the following steps (a minimal sketch of the Lambda function follows the list):
- The user navigates to Vidmob and asks a creative-related query.
- Amazon DynamoDB stores the query and the session ID, which are then passed to a Lambda function as a DynamoDB event notification.
- The Lambda function calls Amazon Bedrock, obtains an output from the user query, and sends it back to the Streamlit application for the user to view.
- The Lambda function updates the status after it receives the completed output from Amazon Bedrock.
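As a concrete illustration of steps 2 through 4, the following is a minimal sketch of such a Lambda function in Python with boto3. The table name, attribute names, and prompt format are assumptions rather than Vidmob's actual implementation.

```python
import json
import boto3

bedrock = boto3.client("bedrock-runtime")
table = boto3.resource("dynamodb").Table("vidmob-queries")  # hypothetical table name

def handler(event, context):
    # Triggered by a DynamoDB Streams event that contains the newly stored query record.
    for record in event["Records"]:
        if record["eventName"] != "INSERT":
            continue
        image = record["dynamodb"]["NewImage"]
        session_id = image["session_id"]["S"]
        question = image["query"]["S"]

        # Claude v2 on Amazon Bedrock uses the Human/Assistant text-completion format.
        body = json.dumps({
            "prompt": f"\n\nHuman: {question}\n\nAssistant:",
            "max_tokens_to_sample": 500,
            "temperature": 0.2,
        })
        response = bedrock.invoke_model(
            modelId="anthropic.claude-v2",
            body=body,
            contentType="application/json",
            accept="application/json",
        )
        answer = json.loads(response["body"].read())["completion"]

        # Store the completed answer and status so the Streamlit application can display it.
        table.update_item(
            Key={"session_id": session_id},
            UpdateExpression="SET #a = :a, #s = :s",
            ExpressionAttributeNames={"#a": "answer", "#s": "status"},
            ExpressionAttributeValues={":a": answer, ":s": "COMPLETE"},
        )
```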
In the following sections, we explore the details of the workflow, the dataset, and the results Vidmob achieved.
Workflow details
After the user inputs a query, a prompt is automatically created and fed into a QA chatbot, which outputs a response. The main aspects of the LLM prompt include the following (a sketch of how these pieces might be combined follows the list):
- Client description – Background information about the client. This includes the value proposition, brand identity, and competitive differentiators, which are generated by Anthropic’s Claude v2 on Amazon Bedrock.
- Aperture – Important aspects to take into account for a user question. For example, for a branding question such as “What is the best way to incorporate branding for my Meta creative,” the aperture might identify elements such as a logo, tagline, and sincere tone.
- Context – The filtered dataset of ad performance referenced by the QA bot.
- Question – The user query.
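As an illustration of how these pieces might be combined, here is a minimal sketch; the field names and template wording are hypothetical rather than Vidmob's actual prompt.

```python
def build_prompt(client_description: str, aperture: str, context_data: str, question: str) -> str:
    # Combine the four prompt aspects into a single Claude-style prompt string.
    return (
        f"\n\nHuman: You are a creative strategy assistant.\n"
        f"Client background:\n{client_description}\n\n"
        f"Aspects to consider for this question:\n{aperture}\n\n"
        f"Ad performance data (filtered):\n{context_data}\n\n"
        f"Question: {question}\n\nAssistant:"
    )
```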
The following screenshot shows the UI where the user can input the client and their ad-related question.
On the backend, a router determines the context (the ad-related dataset) used as a reference to answer the question. The choice depends on the question and the client, and is made in the following steps (a simplified routing sketch follows the list):
- Determine whether the question should reference the objective dataset (general for an entire channel like TikTok, Meta, Pinterest) or placement dataset (specific sub-channels like Facebook Reels). For example, “What is the best way to incorporate branding in my Meta creative” is objective-based, whereas “What is the best way to incorporate branding for Facebook News Feed” is placement-based because it references a specific part of the Meta creative.
- Obtain the corresponding objective dataset for the client if the query is objective-based. If it’s placement-based, first filter the placement dataset to only columns that are relevant to the query and then pass in the resulting dataset.
- Pass the completed prompt to Anthropic’s Claude v2 model on Amazon Bedrock and display the outputs.
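The following is a simplified sketch of that routing logic, assuming the datasets are available as pandas DataFrames and that the objective-versus-placement decision can be approximated by a keyword check; the keyword list and column names are hypothetical.

```python
import pandas as pd

PLACEMENT_KEYWORDS = ["reels", "news feed", "instream", "stories"]  # hypothetical sub-channel terms

def route_context(question: str, objective_df: pd.DataFrame, placement_df: pd.DataFrame) -> pd.DataFrame:
    """Pick the dataset (and columns) used as context for the question."""
    is_placement = any(kw in question.lower() for kw in PLACEMENT_KEYWORDS)
    if not is_placement:
        # Objective-based questions reference the channel-level dataset as-is.
        return objective_df
    # Placement-based questions: keep only the columns relevant to the query,
    # plus a few base columns assumed to always be useful.
    relevant_cols = [
        c for c in placement_df.columns
        if c.lower() in question.lower() or c in ("element", "percent_lift", "creative_count", "impressions")
    ]
    return placement_df[relevant_cols]
```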
The outputs are displayed as shown in the following screenshot.
Specifically, the outputs include the elements that best answer the question, why this element may be important, and its corresponding percent lift for the creative.
Dataset
The dataset includes a set of ad-related data corresponding to a specific client. Specifically, Vidmob analyzes the client ad campaigns and extracts information related to the ads using various machine learning (ML) models and AWS services. The information about each campaign is collated into a single dataset (creative data). It notes how each element of a given creative performs under a certain metric; for example, how the call to action (CTA) affects the view-through rate of the ad. The following two datasets were utilized:
- Creative strategist filtered performance data for each question – The dataset given was filtered by Vidmob creative strategists for their analysis. The filtered datasets include an element (such as logo or bright colors for a creative) as well as its corresponding average, percent lift (of a particular metric such as view-through rate), creative count, and impressions for each sub-channel (Facebook Explore, Reels, and so on).
- Unfiltered raw datasets – This dataset included objective-based and placement-based data for each client.
As we discussed earlier, there are two types of datasets for a particular client: objective-based and placement-based data. Objective data is used for answering generic user queries about ads for channels such as TikTok, Meta, or Pinterest, whereas placement data is used for answering specific questions about ads for sub-channels within Meta such as Facebook Reels, Instream, and News Feed. For example, questions such as “What are creative insights in my Meta creative” are more general and reference the objective data, whereas questions such as “What are insights for Facebook News Feed” reference the News Feed statistics in the placement data.
The objective dataset includes elements and their corresponding average percent lift, creative count, p-values, and many more for an entire channel, whereas placement data includes these same statistics for each sub-channel.
Results
A set of questions were evaluated by the strategists for Vidmob, primarily for the following metrics:
- Accuracy – How well the overall answer matches the expected answer
- Relevancy – How relevant the LLM-generated output is to the question (or, in this case, to the background information for the client)
- Clarity – How clear and understandable the outputs from the performance data and their insights are, and whether the LLM is fabricating information
The client background information for the prompt and a set of questions for the filtered and unfiltered data were evaluated.
Overall, the client background generated by Anthropic’s Claude captured the value proposition, brand identity, and competitive differentiators for a given client. Accuracy and clarity were perfect for all samples, and relevancy was perfect for most samples, where perfect is defined as a score of 9/10 or 10/10 on the given metric from subject matter experts.
When answering a set of questions, the responses generally had high clarity, and AWS GenAIIC was able to incrementally improve the QA chatbot’s accuracy and relevancy by 10% and 5%, respectively, by adding extra tag information used to filter the data. Overall, Vidmob expects the time to generate insights for creative campaigns to drop from hours to minutes.
Conclusion
In this post, we shared how the AWS GenAIIC team used Anthropic’s Claude on Amazon Bedrock to extract and summarize insights from Vidmob’s performance data using zero-shot prompt engineering. With these services, creative strategists were able to understand client information through inherent knowledge of the LLM as well as answer user queries through added client background information and tag types such as messaging and branding. Such insights can be retrieved at scale and utilized for enhancing effective ad campaigns.
The success of this engagement allowed Vidmob an opportunity to use generative AI to create more valuable insights for customers in reduced time, allowing for a more scalable solution.
This is just one of the ways AWS enables builders to deliver generative AI-based solutions. You can get started with Amazon Bedrock and see how it can be integrated in example code bases today. If you’re interested in working with the AWS Generative AI Innovation Center, reach out to AWS GenAIIC.
About the Authors
Mickey Alon is a serial entrepreneur and co-author of ‘Mastering Product-Led Growth.’ He co-founded Gainsight PX (Vista) and Insightera (Adobe), a real-time personalization engine. He previously led the global product development team at Marketo (Adobe) and currently serves as the CPTO at Vidmob, a leading creative intelligence platform powered by GenAI.
Suren Gunturu is a Data Scientist working in the Generative AI Innovation Center, where he works with various AWS customers to solve high-value business problems. He specializes in building ML pipelines using Large Language Models, primarily through Amazon Bedrock and other AWS Cloud services.
Gaurav Rele is a Senior Data Scientist at the Generative AI Innovation Center, where he works with AWS customers across different verticals to accelerate their use of generative AI and AWS Cloud services to solve their business challenges.
Vidya Sagar Ravipati is a Science Manager at the Generative AI Innovation Center, where he leverages his vast experience in large-scale distributed systems and his passion for machine learning to help AWS customers across different industry verticals accelerate their AI and cloud adoption.
Fine-tune Llama 3 for text generation on Amazon SageMaker JumpStart
Generative artificial intelligence (AI) models have become increasingly popular and powerful, enabling a wide range of applications such as text generation, summarization, question answering, and code generation. However, despite their impressive capabilities, these models often struggle with domain-specific tasks or use cases due to their general training data. To address this challenge, fine-tuning these models on specific data is crucial for achieving optimal performance in specialized domains.
In this post, we demonstrate how to fine-tune the recently released Llama 3 models from Meta, specifically the llama-3-8b and llama-3-70b variants, using Amazon SageMaker JumpStart. The fine-tuning process is based on the scripts provided in the llama-recipes repo from Meta, utilizing techniques like PyTorch FSDP, PEFT/LoRA, and Int8 quantization for efficient fine-tuning of these large models on domain-specific datasets.
By fine-tuning the Meta Llama 3 models with SageMaker JumpStart, you can harness their improved reasoning, code generation, and instruction following capabilities tailored to your specific use cases.
Meta Llama 3 overview
Meta Llama 3 comes in two parameter sizes, 8B and 70B, with an 8,000-token context length, and can support a broad range of use cases with improvements in reasoning, code generation, and instruction following. Meta Llama 3 uses a decoder-only transformer architecture and a new tokenizer with a 128,000-token vocabulary that improves model performance. In addition, Meta improved post-training procedures that substantially reduced false refusal rates, improved alignment, and increased diversity in model responses. You can now derive the combined advantages of Meta Llama 3 performance and MLOps controls with Amazon SageMaker features such as Amazon SageMaker Pipelines and Amazon SageMaker Debugger. In addition, the model will be deployed in an AWS secure environment under your virtual private cloud (VPC) controls, helping provide data security.
SageMaker JumpStart
SageMaker JumpStart is a powerful feature within the SageMaker machine learning (ML) environment that provides ML practitioners a comprehensive hub of publicly available and proprietary foundation models (FMs). With this managed service, ML practitioners get access to a growing list of cutting-edge models from leading model hubs and providers that they can deploy to dedicated SageMaker instances within a network isolated environment, and customize models using SageMaker for model training and deployment.
Prerequisites
To try out this solution using SageMaker JumpStart, you’ll need the following prerequisites:
- An AWS account that will contain all of your AWS resources.
- An AWS Identity and Access Management (IAM) role to access SageMaker. To learn more about how IAM works with SageMaker, refer to Identity and Access Management for Amazon SageMaker.
- Access to Amazon SageMaker Studio or a SageMaker notebook instance, or an interactive development environment (IDE) such as PyCharm or Visual Studio Code. We recommend using SageMaker Studio for straightforward deployment and inference.
Fine-tune Meta Llama 3 models
In this section, we discuss the steps to fine-tune Meta Llama 3 models. We’ll cover two approaches: using the SageMaker Studio UI for a no-code solution, and utilizing the SageMaker Python SDK.
No-code fine-tuning through the SageMaker Studio UI
SageMaker JumpStart provides access to publicly available and proprietary foundation models from third-party and proprietary providers. Data scientists and developers can quickly prototype and experiment with various ML use cases, accelerating the development and deployment of ML applications. It helps reduce the time and effort required to build ML models from scratch, allowing teams to focus on fine-tuning and customizing the models for their specific use cases. These models are released under different licenses designated by their respective sources. It’s essential to review and adhere to the applicable license terms before downloading or using these models to make sure they’re suitable for your intended use case.
You can access the Meta Llama 3 FMs through SageMaker JumpStart in the SageMaker Studio UI and the SageMaker Python SDK. In this section, we cover how to discover these models in SageMaker Studio.
SageMaker Studio is an IDE that offers a web-based visual interface for performing the ML development steps, from data preparation to model building, training, and deployment. For instructions on getting started and setting up SageMaker Studio, refer to Amazon SageMaker Studio.
When you’re in SageMaker Studio, you can access SageMaker JumpStart by choosing JumpStart in the navigation pane.
In the JumpStart view, you’re presented with the list of public models offered by SageMaker. You can explore other models from other providers in this view. To start using the Meta Llama 3 models, under Providers, choose Meta.
You’re presented with a list of the models available. Choose the Meta-Llama-3-8B-Instruct model.
Here you can view the model details, as well as train, deploy, optimize, and evaluate the model. For this demonstration, we choose Train.
On this page, you can point to the Amazon Simple Storage Service (Amazon S3) bucket containing the training and validation datasets for fine-tuning. In addition, you can configure deployment configuration, hyperparameters, and security settings for fine-tuning. Choose Submit to start the training job on a SageMaker ML instance.
Deploy the model
After the model is fine-tuned, you can deploy it using the model page on SageMaker JumpStart. The option to deploy the fine-tuned model will appear when fine-tuning is finished, as shown in the following screenshot.
You can also deploy the model from this view. You can configure endpoint settings such as the instance type, number of instances, and endpoint name. You will need to accept the End User License Agreement (EULA) before you can deploy the model.
Fine-tune using the SageMaker Python SDK
You can also fine-tune Meta Llama 3 models using the SageMaker Python SDK. A sample notebook with the full instructions can be found on GitHub. The following code example demonstrates how to fine-tune the Meta Llama 3 8B model:
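A minimal sketch with the SageMaker Python SDK follows; the JumpStart model ID and the S3 training path are assumptions to adjust for your account.

```python
from sagemaker.jumpstart.estimator import JumpStartEstimator

model_id = "meta-textgeneration-llama-3-8b"  # assumed JumpStart model ID

estimator = JumpStartEstimator(
    model_id=model_id,
    environment={"accept_eula": "true"},  # accept the Meta Llama 3 EULA
)

# Enable instruction tuning and set the number of training epochs.
estimator.set_hyperparameters(instruction_tuned="True", epoch="5")

# The S3 prefix should contain train.jsonl (and template.json for instruction tuning).
estimator.fit({"training": "s3://<your-bucket>/<train-data-prefix>/"})
```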
The code sets up a SageMaker JumpStart estimator for fine-tuning the Meta Llama 3 large language model (LLM) on a custom training dataset. It configures the estimator with the desired model ID, accepts the EULA, enables instruction tuning by setting instruction_tuned="True", sets the number of training epochs, and initiates the fine-tuning process.
When the fine-tuning job is complete, you can deploy the fine-tuned model directly from the estimator, as shown in the following code. As part of the deploy settings, you can define the instance type you want to deploy the model on. For the full list of deployment parameters, refer to the deploy parameters in the SageMaker SDK documentation.
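A minimal sketch of that deployment step, with an illustrative instance type:

```python
# Deploy the fine-tuned model directly from the estimator.
finetuned_predictor = estimator.deploy(
    initial_instance_count=1,
    instance_type="ml.g5.12xlarge",  # illustrative; choose an instance type that fits your model
)
```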
After the endpoint is up and running, you can perform an inference request against it using the predictor object as follows:
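For example, a request might be sent as follows; the payload follows the inputs/parameters format commonly used by JumpStart text generation endpoints, and the prompt and parameter values are illustrative.

```python
payload = {
    "inputs": "Below is an instruction that describes a task...\n\n### Instruction:\nSummarize the following text...\n\n### Response:\n",
    "parameters": {"max_new_tokens": 256, "top_p": 0.9, "temperature": 0.6},
}

response = finetuned_predictor.predict(payload)
print(response)
```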
For the full list of predictor parameters, refer to the predictor object in the SageMaker SDK documentation.
Fine-tuning technique
Language models such as Meta Llama are more than 10 GB or even 100 GB in size. Fine-tuning such large models requires instances with significantly higher CUDA memory. Furthermore, training these models can be very slow due to their size. Therefore, for efficient fine-tuning, we use the following optimizations:
- Low-Rank Adaptation (LoRA) – This is a type of parameter-efficient fine-tuning (PEFT) for efficient fine-tuning of large models. With this approach, we freeze the whole model and only add a small set of adjustable parameters or layers to it. For instance, instead of training all 8 billion parameters of Llama 3 8B, we can fine-tune less than 1% of the parameters. This significantly reduces the memory requirement because we only need to store gradients, optimizer states, and other training-related information for 1% of the parameters. Furthermore, this helps reduce both training time and cost. For more details on this method, refer to LoRA: Low-Rank Adaptation of Large Language Models. (A minimal code sketch follows this list.)
- Int8 quantization – Even with optimizations such as LoRA, models like Meta Llama 70B require significant computational resources for training. To reduce the memory footprint during training, we can employ Int8 quantization. Quantization typically reduces the precision of the floating-point data types. Although this decreases the memory required to store model weights, it can potentially degrade the performance due to loss of information. However, Int8 quantization utilizes only a quarter of the precision compared to full-precision training, but it doesn’t incur significant degradation in performance. Instead of simply dropping bits, Int8 quantization rounds the data from one type to another, preserving the essential information while optimizing memory usage. To learn about Int8 quantization, refer to int8(): 8-bit Matrix Multiplication for Transformers at Scale.
- Fully Sharded Data Parallel (FSDP) – This is a type of data parallel training algorithm that shards the model’s parameters across data parallel workers and can optionally offload part of the training computation to the CPUs. Although the parameters are sharded across different GPUs, computation of each microbatch is local to the GPU worker. It shards parameters more uniformly and achieves optimized performance through communication and computation overlapping during training.
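As a minimal illustration of the LoRA idea (using the Hugging Face PEFT library rather than the exact llama-recipes code), the rank, alpha, dropout, and target modules below match the default hyperparameters listed later in this post:

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base_model = AutoModelForCausalLM.from_pretrained("meta-llama/Meta-Llama-3-8B")

lora_config = LoraConfig(
    r=8,                                  # lora_r: rank of the low-rank update matrices
    lora_alpha=32,                        # scaling factor for the LoRA updates
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    task_type="CAUSAL_LM",
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of the total parameters
```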
The following table compares different methods with the two Meta Llama 3 models.
Model | Default Instance Type | Supported Instance Types with Default Configuration | Default Setting | LoRA + FSDP | LoRA + No FSDP | Int8 Quantization + LoRA + No FSDP |
Llama 3 8B | ml.g5.12xlarge | ml.g5.12xlarge, ml.g5.24xlarge, ml.g5.48xlarge | LORA + FSDP | Yes | Yes | Yes |
Llama 3 70B | ml.g5.48xlarge | ml.g5.48xlarge | INT8 + LORA + NO FSDP | No | No | Yes |
Fine-tuning of Meta Llama models is based on the scripts provided in the llama-recipes GitHub repo.
Training dataset format
SageMaker JumpStart currently supports datasets in both the domain adaptation format and the instruction tuning format. In this section, we specify an example dataset in both formats. For more details, refer to the Dataset formatting section in the appendix.
Domain adaptation format
The Meta Llama 3 text generation model can be fine-tuned on domain-specific datasets, enabling it to generate relevant text and tackle various natural language processing (NLP) tasks within a particular domain using few-shot prompting. This fine-tuning process involves providing the model with a dataset specific to the target domain. The dataset can be in various formats, such as CSV, JSON, or TXT files. For example, if you want to fine-tune the model for the domain of financial reports and filings, you could provide it with a text file containing SEC filings from a company like Amazon. The following is an excerpt from such a filing:
Instruction tuning format
In instruction fine-tuning, the model is fine-tuned for a set of NLP tasks described using instructions. This helps improve the model’s performance for unseen tasks with zero-shot prompts. In the instruction tuning dataset format, you specify the template.json file describing the input and output formats and the train.jsonl file with one training data item per line.
The template.json file always has the following JSON format:
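Judging from the examples in the table that follows, the file is a JSON object with a prompt template and a completion template whose curly-brace placeholders name fields from train.jsonl; the placeholder text below is illustrative.

```json
{
    "prompt": "<prompt template, with {placeholders} for fields that appear in each train.jsonl line>",
    "completion": "<completion template, typically a single {placeholder} for the target field>"
}
```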
For instance, the following table shows the template.json and train.jsonl files for the Dolly and Dialogsum datasets.
Dataset | Use Case | template.json | train.jsonl |
Dolly | Question Answering | { "prompt": "Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.\n\n### Instruction:\n{instruction}\n\n### Input:\n{context}\n\n", "completion": " {response}" } | { "instruction": "Who painted the Two Monkeys", "context": "Two Monkeys or Two Chained Monkeys is a 1562 painting by Dutch and Flemish Renaissance artist Pieter Bruegel the Elder. The work is now in the Gemäldegalerie (Painting Gallery) of the Berlin State Museums.", "response": "The two Monkeys or Two Chained Monkeys is a 1562 painting by Dutch and Flemish Renaissance artist Pieter Bruegel the Elder. The work is now in the Gemaeldegalerie (Painting Gallery) of the Berlin State Museums." } |
Dialogsum | Text Summarization | { "prompt": "Below is a Instruction that holds conversation which describes discussion between two people.Write a response that appropriately summarizes the conversation.\n\n### Instruction:\n{dialogue}\n\n", "completion": " {summary}" } | { "dialogue": "#Person1#: Where do these flower vases come from? \n#Person2#: They are made a town nearby. The flower vases are made of porcelain and covered with tiny bamboo sticks. \n#Person1#: Are they breakable? \n#Person2#: No. They are not only ornmamental, but also useful. \n#Person1#: No wonder it’s so expensive. ", "summary": "#Person2# explains the flower vases’ materials and advantages and #Person1# understands why they’re expensive." } |
Supported hyperparameters for training
The fine-tuning process for Meta Llama 3 models allows you to customize various hyperparameters, each of which can influence factors such as memory consumption, training speed, and the performance of the fine-tuned model. At the time of writing this post, the following are the default hyperparameter values. For the most up-to-date information, refer to the SageMaker Studio console, because these values may be subject to change.
- epoch – The number of passes that the fine-tuning algorithm takes through the training dataset. Must be an integer greater than 1. Default is 5.
- learning_rate – The rate at which the model weights are updated after working through each batch of training examples. Must be a positive float greater than 0. Default is 0.0001.
- lora_r – Lora R dimension. Must be a positive integer. Default is 8.
- lora_alpha – Lora Alpha. Must be a positive integer. Default is 32.
- target_modules – Target modules for LoRA fine-tuning. You can specify a subset of [‘q_proj’,’v_proj’,’k_proj’,’o_proj’,’gate_proj’,’up_proj’,’down_proj’] modules as a string separated by a comma without any spaces. Default is q_proj,v_proj.
- lora_dropout – Lora Dropout. Must be a positive float between 0 and 1. Default is 0.05.
- instruction_tuned – Whether to instruction-train the model or not. At most one of instruction_tuned and chat_dataset can be True. Must be True or False. Default is False.
- chat_dataset – If True, the dataset is assumed to be in chat format. At most one of instruction_tuned and chat_dataset can be True. Default is False.
- add_input_output_demarcation_key – For an instruction tuned dataset, if this is True, a demarcation key ("### Response:\n") is added between the prompt and completion before training. Default is True.
- per_device_train_batch_size – The batch size per GPU core/CPU for training. Default is 1.
- per_device_eval_batch_size – The batch size per GPU core/CPU for evaluation. Default is 1.
- max_train_samples – For debugging purposes or quicker training, truncate the number of training examples to this value. Value -1 means using all of the training samples. Must be a positive integer or -1. Default is -1.
- max_val_samples – For debugging purposes or quicker training, truncate the number of validation examples to this value. Value -1 means using all of the validation samples. Must be a positive integer or -1. Default is -1.
- seed – Random seed that will be set at the beginning of training. Default is 10.
- max_input_length – Maximum total input sequence length after tokenization. Sequences longer than this will be truncated. If -1, max_input_length is set to the minimum of 1024 and the maximum model length defined by the tokenizer. If set to a positive value, max_input_length is set to the minimum of the provided value and the model_max_length defined by the tokenizer. Must be a positive integer or -1. Default is -1.
- validation_split_ratio – If the validation channel is None, ratio of train-validation split from the train data. Must be between 0–1. Default is 0.2.
- train_data_split_seed – If validation data is not present, this fixes the random splitting of the input training data to training and validation data used by the algorithm. Must be an integer. Default is 0.
- preprocessing_num_workers – The number of processes to use for preprocessing. If None, the main process is used for preprocessing. Default is None.
- int8_quantization – If True, the model is loaded with 8-bit precision for training. Default for 8B is False. Default for 70B is True.
- enable_fsdp – If True, training uses FSDP. Default for 8B is True. Default for 70B is False.
Instance types and compatible hyperparameters
The memory requirement during fine-tuning may vary based on several factors:
- Model type – The 8B model has the smallest GPU memory requirement and the 70B model has the largest memory requirement
- Max input length – A higher value of input length leads to processing more tokens at a time and as such requires more CUDA memory
- Batch size – A larger batch size requires larger CUDA memory and therefore requires larger instance types
- Int8 quantization – If using Int8 quantization, the model is loaded into low precision mode and therefore requires less CUDA memory
To help you get started, we provide a set of combinations of different instance types, hyperparameters, and model types that can be successfully fine-tuned. You can select a configuration as per your requirements and availability of instance types. We fine-tune both models on a variety of settings with three epochs on a subset of the Dolly dataset with summarization examples.
8B model
Instance Type | Max Input Length | Per Device Batch Size | Int8 Quantization | Enable FSDP | Time Taken (Minutes) |
ml.g4dn.12xlarge | 1024 | 2 | TRUE | FALSE | 202 |
ml.g4dn.12xlarge | 2048 | 2 | TRUE | FALSE | 192 |
ml.g4dn.12xlarge | 1024 | 2 | FALSE | TRUE | 98 |
ml.g4dn.12xlarge | 1024 | 4 | TRUE | FALSE | 200 |
ml.g5.12xlarge | 2048 | 2 | TRUE | FALSE | 73 |
ml.g5.12xlarge | 1024 | 2 | TRUE | FALSE | 88 |
ml.g5.12xlarge | 2048 | 2 | FALSE | TRUE | 24 |
ml.g5.12xlarge | 1024 | 2 | FALSE | TRUE | 35 |
ml.g5.12xlarge | 2048 | 4 | TRUE | FALSE | 72 |
ml.g5.12xlarge | 1024 | 4 | TRUE | FALSE | 83 |
ml.g5.12xlarge | 1024 | 4 | FALSE | TRUE | 25 |
ml.g5.12xlarge | 1024 | 8 | TRUE | FALSE | 83 |
ml.g5.24xlarge | 2048 | 2 | TRUE | FALSE | 73 |
ml.g5.24xlarge | 1024 | 2 | TRUE | FALSE | 86 |
ml.g5.24xlarge | 2048 | 2 | FALSE | TRUE | 24 |
ml.g5.24xlarge | 1024 | 2 | FALSE | TRUE | 35 |
ml.g5.24xlarge | 2048 | 4 | TRUE | FALSE | 72 |
ml.g5.24xlarge | 1024 | 4 | TRUE | FALSE | 83 |
ml.g5.24xlarge | 1024 | 4 | FALSE | TRUE | 25 |
ml.g5.24xlarge | 1024 | 8 | TRUE | FALSE | 82 |
ml.g5.48xlarge | 2048 | 2 | TRUE | FALSE | 73 |
ml.g5.48xlarge | 1024 | 2 | TRUE | FALSE | 87 |
ml.g5.48xlarge | 2048 | 2 | FALSE | TRUE | 27 |
ml.g5.48xlarge | 1024 | 2 | FALSE | TRUE | 48 |
ml.g5.48xlarge | 2048 | 4 | TRUE | FALSE | 71 |
ml.g5.48xlarge | 1024 | 4 | TRUE | FALSE | 82 |
ml.g5.48xlarge | 1024 | 4 | FALSE | TRUE | 32 |
ml.g5.48xlarge | 1024 | 8 | TRUE | FALSE | 81 |
ml.p3dn.24xlarge | 2048 | 2 | TRUE | FALSE | 104 |
ml.p3dn.24xlarge | 1024 | 2 | TRUE | FALSE | 114 |
70B model
Instance Type | Max Input Length | Per Device Batch Size | Int8 Quantization | Enable FSDP | Time Taken (Minutes) |
ml.g5.48xlarge | 1024 | 1 | TRUE | FALSE | 461 |
ml.g5.48xlarge | 2048 | 1 | TRUE | FALSE | 418 |
ml.g5.48xlarge | 1024 | 2 | TRUE | FALSE | 423 |
Recommendations on instance types and hyperparameters
When fine-tuning the model’s accuracy, keep in mind the following:
- Larger models such as 70B provide better performance than 8B
- Performance without Int8 quantization is better than performance with Int8 quantization
Note the following training time and CUDA memory requirements:
- Setting int8_quantization=True decreases the memory requirement and leads to faster training.
- Decreasing per_device_train_batch_size and max_input_length reduces the memory requirement and therefore allows training to run on smaller instances. However, setting very low values may increase the training time.
- If you’re not using Int8 quantization (int8_quantization=False), use FSDP (enable_fsdp=True) for faster and more efficient training.
When choosing the instance type, consider the following:
- At the time of writing this post, the G5 instances provided the most efficient training among the supported instance types. However, because AWS regularly updates and introduces new instance types, we recommend that you validate the recommended instance type for Meta Llama 3 fine-tuning in the SageMaker documentation or SageMaker console before proceeding.
- Training time largely depends on the number of GPUs and the CUDA memory available. Because of this, training on instances with the same number of GPUs (for example, ml.g5.2xlarge and ml.g5.4xlarge) takes roughly the same time, so you can use the more cost-effective instance for training (ml.g5.2xlarge).
To learn about the cost of training per instance, refer to Amazon EC2 G5 Instances.
If your dataset is in instruction tuning format, where each sample consists of an instruction (input) and the desired model response (completion), and these input+completion sequences are short (for example, 50–100 words), using a high value for max_input_length can lead to poor performance. This is because the model may struggle to focus on the relevant information when dealing with a large number of padding tokens, and it can also lead to inefficient use of computational resources. The default value of -1 corresponds to a max_input_length of 1024 for Llama models. We recommend setting max_input_length to a smaller value (for example, 200–400) when working with datasets containing shorter input+completion sequences to mitigate these issues and potentially improve the model’s performance and efficiency.
Lastly, due to the high demand of the G5 instances, you may experience unavailability of these instances in your AWS Region with the error “CapacityError: Unable to provision requested ML compute capacity. Please retry using a different ML instance type.”
If you experience this error, retry the training job or try a different Region.
Issues when fine-tuning large models
In this section, we discuss two issues when fine-tuning very large models.
Disable output compression
By default, the output of a training job is a trained model that is compressed in a .tar.gz format before it’s uploaded to Amazon S3. However, for large models like the 70B model, this compression step can be time-consuming, taking more than 4 hours. To mitigate this delay, it’s recommended to use the disable_output_compression feature supported by the SageMaker training environment. When disable_output_compression is set to True, the model is uploaded without any compression, which can significantly reduce the time taken for large model artifacts to be uploaded to Amazon S3. The uncompressed model can then be used directly for deployment or further processing. The following code shows how to pass this parameter into the SageMaker JumpStart estimator:
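A minimal sketch of passing this parameter, with the 70B model ID assumed:

```python
from sagemaker.jumpstart.estimator import JumpStartEstimator

estimator = JumpStartEstimator(
    model_id="meta-textgeneration-llama-3-70b",  # assumed JumpStart model ID for the 70B model
    environment={"accept_eula": "true"},
    disable_output_compression=True,  # upload the trained model artifacts without .tar.gz compression
)
```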
SageMaker Studio kernel timeout issue
Due to the size of the Meta Llama 3 70B model, the training job may take several hours to complete. The SageMaker Studio kernel is only used to initiate the training job, and its status doesn’t affect the ongoing training process. After the training job starts, the compute resources allocated for the job will continue running the training process, regardless of whether the SageMaker Studio kernel remains active or times out. If the kernel times out during the lengthy training process, you can still deploy the endpoint after training is complete using the training job name with the following code:
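One way to do this with the SageMaker Python SDK is to re-attach an estimator to the finished training job and deploy from it; the job name placeholder and model ID below are assumptions.

```python
from sagemaker.jumpstart.estimator import JumpStartEstimator

# Re-attach to the completed training job by name, then deploy the fine-tuned model.
attached_estimator = JumpStartEstimator.attach(
    training_job_name="<your-training-job-name>",
    model_id="meta-textgeneration-llama-3-70b",  # the model ID used for the training job
)
predictor = attached_estimator.deploy()
```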
To find the training job name, navigate to the SageMaker console and under Training in the navigation pane, choose Training jobs. Identify the training job name and substitute it in the preceding code.
Clean up
To prevent incurring unnecessary charges, it’s recommended to clean up the deployed resources when you’re done using them. You can remove the deployed model with the following code:
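Assuming predictor is the object returned by deploy(), a minimal cleanup sketch is:

```python
# Delete the SageMaker model and endpoint created by deploy() to stop incurring charges.
predictor.delete_model()
predictor.delete_endpoint()
```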
Conclusion
In this post, we discussed fine-tuning Meta Llama 3 models using SageMaker JumpStart. We showed that you can use the SageMaker JumpStart console in SageMaker Studio or the SageMaker Python SDK to fine-tune and deploy these models. We also discussed the fine-tuning technique, instance types, and supported hyperparameters. In addition, we outlined recommendations for optimized training based on various tests we carried out.
The results for fine-tuning the two models over two datasets are shown in the appendix at the end of this post. As these results show, fine-tuning improves summarization compared to the non-fine-tuned models.
As a next step, you can try fine-tuning these models on your own dataset using the code provided in the GitHub repository to test and benchmark the results for your use cases.
About the Authors
Ben Friebe is a Senior Solutions Architect at Amazon Web Services, based in Brisbane, Australia. He likes computers.
Pavan Kumar Rao Navule is a Solutions Architect at Amazon Web Services, where he works with ISVs in India to help them innovate on the AWS platform. He is specialized in architecting AI/ML and generative AI services at AWS. Pavan is a published author for the book “Getting Started with V Programming.” In his free time, Pavan enjoys listening to the great magical voices of Sia and Rihanna.
Khush Patel is a Solutions Architect at Amazon Web Services based out of Houston, Texas. He’s passionate about working with customers to deliver business value using technology. He has a multitude of experience with customers working on machine learning and generative AI workloads. In his free time, Khush enjoys watching sports and reading.
Dr. Farooq Sabir is a Senior Artificial Intelligence and Machine Learning Specialist Solutions Architect at AWS. He holds PhD and MS degrees in Electrical Engineering from the University of Texas at Austin and an MS in Computer Science from Georgia Institute of Technology. He has over 15 years of work experience and also likes to teach and mentor college students. At AWS, he helps customers formulate and solve their business problems in data science, machine learning, computer vision, artificial intelligence, numerical optimization, and related domains. Based in Dallas, Texas, he and his family love to travel and go on long road trips.
Appendix
This appendix provides additional information about performance benchmarking and dataset formatting.
Performance benchmarking
In this section, we provide results for fine-tuning the two Meta Llama 3 models (8B and 70B) on two different datasets: Dolly and Dialogsum. For the Dolly dataset, our task is to summarize a paragraph of text, whereas for Dialogsum, we fine-tune the model to summarize a discussion between two people. In the following tables, we show the input to the model (prompt and instructions), ground truth (summary), response from the pre-trained Meta Llama 3 model, and response from the fine-tuned Meta Llama 3 model for each of the models. We show inference results for five data points. You can see from the following tables that the summaries improve for both datasets when we fine-tune the models.
Results for fine-tuning the Meta Llama 3 8B text generation model on the Dolly dataset
Inputs | Ground Truth | Response from Non-Fine-Tuned Model | Response from Fine-Tuned Model |
Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.nn### Instruction:nCan you generate a 2 lines abstract of this text?nn### Input:nIn physics, gravity (from Latin gravitas ‘weight’) is a fundamental interaction which causes mutual attraction between all things with mass or energy. Gravity is, by far, the weakest of the four fundamental interactions, approximately 1038 times weaker than the strong interaction, 1036 times weaker than the electromagnetic force and 1029 times weaker than the weak interaction. As a result, it has no significant influence at the level of subatomic particles. However, gravity is the most significant interaction between objects at the macroscopic scale, and it determines the motion of planets, stars, galaxies, and even light.nnOn Earth, gravity gives weight to physical objects, and the Moon’s gravity is responsible for sublunar tides in the oceans (the corresponding antipodal tide is caused by the inertia of the Earth and Moon orbiting one another). Gravity also has many important biological functions, helping to guide the growth of plants through the process of gravitropism and influencing the circulation of fluids in multicellular organisms. Investigation into the effects of weightlessness has shown that gravity may play a role in immune system function and cell differentiation within the human body.nnThe gravitational attraction between the original gaseous matter in the universe allowed it to coalesce and form stars which eventually condensed into galaxies, so gravity is responsible for many of the large-scale structures in the universe. Gravity has an infinite range, although its effects become weaker as objects get farther away.nnGravity is most accurately described by the general theory of relativity (proposed by Albert Einstein in 1915), which describes gravity not as a force, but as the curvature of spacetime, caused by the uneven distribution of mass, and causing masses to move along geodesic lines. The most extreme example of this curvature of spacetime is a black hole, from which nothing—not even light—can escape once past the black hole’s event horizon However, for most applications, gravity is well approximated by Newton’s law of universal gravitation, which describes gravity as a force causing any two bodies to be attracted toward each other, with magnitude proportional to the product of their masses and inversely proportional to the square of the distance between them.nnnn### Response:n | In general physics theory (Newton), Gravity is a mutual attraction between objects based on their respective masses and the square of the distance which separates them. In the relativity theory (Einstein), it’s a curvature of the spacetime generated by the objects themselves. | Gravity is a fundamental interaction which causes mutual attraction between all things with mass or energy. It is the weakest of the four fundamental interactions, but it is the most significant interaction between objects at the macroscopic scale, and it determines the motion of planets, stars, galaxies, and even light. Gravity gives weight to physical objects, and the Moon’s gravity is responsible for sublunar tides in the oceans. It also has many important biological functions, helping to guide the growth of plants through the process | Gravity is a fundamental interaction which causes mutual attraction between all things with mass or energy. 
Gravity is, by far, the weakest of the four fundamental interactions, approximately 1038 times weaker than the strong interaction, 1036 times weaker than the electromagnetic force and 1029 times weaker than the weak interaction. Gravity is most accurately described by the general theory of relativity (proposed by Albert Einstein in 1915), which describes gravity not as a force, but as the curvature of spacetime |
Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.nn### Instruction:nWhat is ProSavinnn### Input:nProSavin is an experimental drug believed to be of use in the treatment of Parkinson’s disease. It is administered to the striatum in the brain, inducing production of dopamine.nnIt is manufactured by Oxford BioMedica. Results from a Phase I/II clinical trial were published in the Lancet and showed safety, but little efficacy. ProSavin was superseded by AXO-Lenti-PD (OXB-102), an optimized version of the drug.nnnn### Response:n | ProSavin is an experimental drug believed to be of use in the treatment of Parkinson’s disease. It is administered to the striatum in the brain, inducing production of dopamine. | ProSavin is an experimental drug believed to be of use in the treatment of Parkinson’s disease. It is administered to the striatum in the brain, inducing production of dopamine.nnIt is manufactured by Oxford BioMedica. Results from a Phase I/II clinical trial were published in the Lancet and showed safety, but little efficacy. ProSavin was superseded by AXO-Lenti-PD (OXB-102), an optimized version of the drug. | ProSavin is an experimental drug believed to be of use in the treatment of Parkinson’s disease. It is administered to the striatum in the brain, inducing production of dopamine.nnIt is manufactured by Oxford BioMedica. Results from a Phase I/II clinical trial were published in the Lancet and showed safety, but little efficacy. ProSavin was superseded by AXO-Lenti-PD (OXB-102), an optimized version of the drug. |
Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.nn### Instruction:nWho were the Beatles?nn### Input:nThe Beatles were an English rock band, formed in Liverpool in 1960, that comprised John Lennon, Paul McCartney, George Harrison and Ringo Starr. They are regarded as the most influential band of all time and were integral to the development of 1960s counterculture and popular music’s recognition as an art form. Rooted in skiffle, beat and 1950s rock ‘n’ roll, their sound incorporated elements of classical music and traditional pop in innovative ways; the band also explored music styles ranging from folk and Indian music to psychedelia and hard rock. As pioneers in recording, songwriting and artistic presentation, the Beatles revolutionised many aspects of the music industry and were often publicised as leaders of the era’s youth and sociocultural movements.nnnn### Response:n | The Beatles were a rock band comprised of 4 musicians. They were pioneers who had a huge influence on the music industry and wider sociocultural movements. | The Beatles were an English rock band, formed in Liverpool in 1960, that comprised John Lennon, Paul McCartney, George Harrison and Ringo Starr. They are regarded as the most influential band of all time and were integral to the development of 1960s counterculture and popular music’s recognition as an art form. Rooted in skiffle, beat and 1950s rock ‘n’ roll, their sound incorporated elements of classical music and traditional pop in innovative ways; the | The Beatles were an English rock band, formed in Liverpool in 1960, that comprised John Lennon, Paul McCartney, George Harrison and Ringo Starr. They are regarded as the most influential band of all time and were integral to the development of 1960s counterculture and popular music’s recognition as an art form. Rooted in skiffle, beat and 1950s rock ‘n’ roll, their sound incorporated elements of classical music and traditional pop in innovative ways; the |
Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.nn### Instruction:nSummarize what Alpha Phi is.nn### Input:nAlpha Phi International Women’s Fraternity (ΑΦ, also known as APhi) is an international sorority with 172 active chapters and over 250,000 initiated members.nnFounded at Syracuse University in Syracuse, New York on September 18, 1872, it is the fourth Greek-letter organization founded for women, and the first women’s fraternity founded in the northeast.nnAlpha Phi is a member of the National Panhellenic Conference, the governing council of 26 women’s fraternities. Alpha Phi’s international headquarters are located in Evanston, Illinois.nnnn### Response:n | Alpha Phi is one of the first women’s fraternities founded in the Northeast. They have over 172 active chapters at College campuses and over 250,000 initiated members. Alpha Phi was founded at Syracuse University in Syracuse, New York on September 18, 1872. | Alpha Phi is an international sorority with 172 active chapters and over 250,000 initiated members. Founded at Syracuse University in Syracuse, New York on September 18, 1872, it is the fourth Greek-letter organization founded for women, and the first women’s fraternity founded in the northeast. Alpha Phi is a member of the National Panhellenic Conference, the governing council of 26 women’s fraternities. Alpha Phi’s international headquarters are located in Evanston, Illinois | Alpha Phi is an international sorority with 172 active chapters and over 250,000 initiated members. It was founded at Syracuse University in Syracuse, New York on September 18, 1872, and is the fourth Greek-letter organization founded for women, and the first women’s fraternity founded in the northeast. Alpha Phi is a member of the National Panhellenic Conference, the governing council of 26 women’s fraternities. Alpha Phi’s international headquarters are located in Evanston |
Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.nn### Instruction:nplease give me a short description about SVB collapsenn### Input:nOn Friday, March 10, 2023, Silicon Valley Bank (SVB) failed after a bank run, marking the second-largest bank failure in United States history and the largest since the 2007–2008 financial crisis. It was one of three March 2023 United States bank failures.nnnn### Response:n | It’s due to risk mismanagement of SVB Bank, where the bank bet on long dated treasury bond. When the bank runs happen, SVB don’t have a choice but to sell those long dated bond at a steep discount and incur huge lossses. | Silicon Valley Bank (SVB) was a bank that failed in 2023. It was one of the largest banks in the United States, and its failure had a significant impact on the financial markets. The bank was founded in 1983 and was headquartered in Santa Clara, California. It was known for its focus on the technology industry and its support of startups and venture capital firms. However, in 2023, the bank faced a number of challenges, including a decline in the value | Silicon Valley Bank (SVB) failed after a bank run, marking the second-largest bank failure in United States history and the largest since the 2007–2008 financial crisis. It was one of three March 2023 United States bank failures. |
Results for fine-tuning the Meta Llama 3 8B text generation model on the Dialogsum dataset
Inputs | Ground Truth | Response from Non-Fine-Tuned Model | Response from Fine-Tuned Model |
Below is a Instruction that holds conversation which describes discussion between two people.Write a response that appropriately summarizes the conversation.nn### Instruction:n#Person1#: Good moring, madam. Can I help you? n#Person2#: Good moring. I am Sunfang. Yesterday Mr. Smith told me to come for a job interview this morning. Is he available now? n#Person1#: Yes, he is waiting for you in Room 123. n#Person2#: Thank you very much. May I come in? n#Person3#: Come in, please. n#Person2#: Good moring, Mr. Smith. I am Sunfang coming for an interview. n#Person3#: Nice to see you. Take a seat, please. n#Person2#: Thank you. n#Person3#: We’ve been looking over your application. I see you’ve got several years experience in secretarial work. What kind of work did you do in your previous job? n#Person2#: I mainly did typing, filing, answering telephone calls and such routine work in the beginning. I started the job as a junior secretary, but within the year I was promoted to senior secretary of Southeast Company. I drafted business correspondence for the general manager. n#Person3#: Then what’s the reason that promoted you to leave Southeast Company? n#Person2#: Last month the company moved to Guangzhou for further development. I was reluctant to relocate because I love Nanjing so much. n#Person3#: Tell me what you know about our company. n#Person2#: Your company is the leading dealer in the import and export of cosmetics. Also, you have a reputation for being a good company to work for and you are in last year’s top 500 Employers list. n#Person3#: Tell me three positive qualities your coworkers would use to describe you and your work style. n#Person2#: First they’d say I am very punctual. They’d also say I am a cheerful person to work with I don’t bring my problems to work. I try to have a positive attitude about work. And thirdly, they’d say I am flexible I accept change easily. n#Person3#: Tell me something about your education. n#Person2#: I graduated from Yangzhou Junior College. Now I am attending evening classes in Nanjing Radio & TV University three times a week. My major is International Trade. I’ve already passed over half of the compulsory courses and five optional courses, and expect to receive a B. A. degree next year. n#Person3#: I see you are an ambitious young lady. I wish you can make greater progress in the future. nnnn### Response:n | #Person1# leads Sunfang to Mr. Smith for an interview. Sunfang tells Mr. Smith she was promoted fast in her previous job. She wants to work for Mr. Smith because her old company moved but she didn’t want to relocate, and because She thinks Mr. Smith’s company plays a leading role. Sunfang thinks her colleagues might describe her as being punctual, positive and flexible. Mr. Smith is impressed. | #Person1#: Good moring, madam. Can I help you? n#Person2#: Good moring. I am Sunfang. Yesterday Mr. Smith told me to come for a job interview this morning. Is he available now? n#Person1#: Yes, he is waiting for you in Room 123. n#Person2#: Thank you very much. May I come in? n#Person3#: Come in, please. n#Person2#: Good moring, Mr. Smith. I am Sunfang coming for an interview. n#Person3#: Nice to see you. Take a seat, please. n#Person2#: Thank you. n#Person3#: We’ve been looking over your application. I see you’ve got several years experience in secretarial work. What kind of work did you do in your previous job? n#Person2#: I mainly did typing, filing, answering telephone calls and such routine work in the beginning. 
I started the job as a junior secretary, but within the year I was promoted to senior secretary of Southeast Company. I drafted business correspondence for the general manager. n#Person3#: Then what’s the reason that promoted you to leave Southeast Company? n#Person2#: Last month the company moved to Guangzhou for further development. I was reluctant to relocate because I love Nanjing so much. n#Person3#: Tell me what you know about our company. n#Person2#: Your company | Sunfang comes for a job interview with Mr. Smith. She tells Mr. Smith about her previous job and her work experience. She also tells Mr. Smith about her company and her education. |
Below is a Instruction that holds conversation which describes discussion between two people.Write a response that appropriately summarizes the conversation.nn### Instruction:n#Person1#: Well, Jenny, the school year is almost over. We just have two more weeks before exams. What do you plan to do this summer?n#Person2#: I’m going to teach English to some immigrants in the university’s community service program.n#Person1#: That sounds interesting. Don’t you need to speak a foreign language for a job like that?n#Person2#: No, you don’t. You just have to present the language simply and give the students a chance to practice speaking.n#Person1#: Come to think of it, that’s the way I was taught to speak Chinese. But speaking didn’t help me learn to read and write Chinese.n#Person2#: My students won’t want to read and write English, at lease not now. They are more interested in speaking.n#Person1#: You sound very knowledgeable about all this. How do you know so much?n#Person2#: I took a Teaching English as a Second Language course last year when you were in China. I’ve also talked with the experienced teachers quite a lot. I think I would like to be an ESL teacher when I graduate.nnnn### Response:n | Jenny plans to teach English to some immigrants in the university’s community service program this summer and she tells #Person1# that students are more interested in speaking. | #Person1#: Well, Jenny, the school year is almost over. We just have two more weeks before exams. What do you plan to do this summer?n#Person2#: I’m going to teach English to some immigrants in the university’s community service program.n#Person1#: That sounds interesting. Don’t you need to speak a foreign language for a job like that?n#Person2#: No, you don’t. You just have to present the language simply and give the students a chance to practice speaking.n#Person1#: Come to think of it, that’s the way I was taught to speak Chinese. But speaking didn’t help me learn to read and write Chinese.n#Person2#: My students won’t want to read and write English, at lease not now. They are more interested in speaking.n#Person1#: You sound very knowledgeable about all this. How do you know so much?n#Person2#: I took a Teaching English as a Second Language course last year when you were in China. I’ve also talked with the experienced teachers quite a lot. I think I would like to be an ESL teacher when I graduate.n | Jenny tells #Person1# she’s going to teach English to some immigrants in the university’s community service program. Jenny thinks speaking English is the most important part and she’s knowledgeable about it. |
Below is a Instruction that holds conversation which describes discussion between two people.Write a response that appropriately summarizes the conversation.nn### Instruction:n#Person1#: Can I help you?n#Person2#: Yes, I am looking for a pair of gloves. I think I wear size six.n#Person1#: The gloves are on this counter. Let’s see… How about the blue pair? The color suits you and they are washable, too.n#Person2#: Oh, I’ll try them on. They seem to fit. How much shall I pay you?n#Person1#: They are on sales this week. They have been reduced to twenty-five dollars.n#Person2#: All right. I will take them.n#Person1#: They are very nice.n#Person2#: Thank you.nnnn### Response:n | #Person2# buys a pair of blue gloves of size six with #Person1#’s assistance. | #Person1#: Can I help you?n#Person2#: Yes, I am looking for a pair of gloves. I think I wear size six.n#Person1#: The gloves are on this counter. Let’s see… How about the blue pair? The color suits you and they are washable, too.n#Person2#: Oh, I’ll try them on. They seem to fit. How much shall I pay you?n#Person1#: They are on sales this week. They have been reduced to twenty-five dollars.n#Person2#: All right. I will take them.n#Person1#: They are very nice.n#Person2#: Thank you.n | #Person2# buys a pair of gloves with #Person1#’s help. |
Below is a Instruction that holds conversation which describes discussion between two people.Write a response that appropriately summarizes the conversation.nn### Instruction:n#Person1#: How is your business on the Internet, Steven?n#Person2#: Don’t mention it. I lost a bunch of money on it this year. I think it is on the brink of bankruptcy.n#Person1#: I am sorry to hear that. It shows that you should buy companies, not ideas.n#Person2#: I changed my strategy around too. I am investing in good companies now.n#Person1#: Yes, I agree with you. I am going to find a good company and invest in it soon.n#Person2#: It’s a good idea. But please be careful when you invest this time.n#Person1#: Yes, I will. I should learn more from you. After all I am green hand on it.n#Person2#: If you have any question, you can ask me.n#Person1#: OK. My wife sent me a message and asked me to go home just now. I am leaving.n#Person2#: All right. Let’s chat next time. Bye-bye.n#Person1#: Bye.nnnn### Response:n | Steven lost much money on Internet business so he’s now investing in good companies. #Person1# wants to invest too but #Person1# is green hand on investment. Steven’s willing to help. | #Person1#: How is your business on the Internet, Steven?n#Person2#: Don’t mention it. I lost a bunch of money on it this year. I think it is on the brink of bankruptcy.n#Person1#: I am sorry to hear that. It shows that you should buy companies, not ideas.n#Person2#: I changed my strategy around too. I am investing in good companies now.n#Person1#: Yes, I agree with you. I am going to find a good company and invest in it soon.n#Person2#: It’s a good idea. But please be careful when you invest this time.n#Person1#: Yes, I will. I should learn more from you. After all I am green hand on it.n#Person2#: If you have any question, you can ask me.n#Person1#: OK. My wife sent me a message and asked me to go home just now. I am leaving.n#Person2#: All right. Let’s chat next time. Bye-bye.n#Person1#: Bye.n | Steven lost a bunch of money on the Internet and changed his strategy to invest in good companies. #Person1# agrees and will be careful when investing. |
Below is a Instruction that holds conversation which describes discussion between two people.Write a response that appropriately summarizes the conversation.nn### Instruction:n#Person1#: Which language do you speak?n#Person2#: Well, I studied French at school, but I’ve forgotten most of it. I learnt some Russian while I was studying there and now I study Spanish in evening classes. Do you speak Japanese at home with your parents.n#Person1#: Yes, I do. I’m learning Chinese now on the internet.n#Person2#: How do you find studying a language on the internet?n#Person1#: It’s not easy, particularly with listening and speaking, because sometimes the connection isn’t too good.n#Person2#: I bet you find the character difficult to write!n#Person1#: Actually, many of the characters are the same or very similar in both Chinese and Japanese, so they’re easy for me.n#Person2#: So, you can read and write much better than you speak and understand other speakers, right?n#Person1#: Yes. How are the Spanish classes going?n#Person2#: In a way, it’s similar to your experience of learning Chinese. Some Spanish words are very similar to English ones. But I need to practise listening to native speakers and saying the words myself.nnnn### Response:n | #Person1# is learning Chinese on the internet and finds listening and speaking difficult, but thinks writing is easy. #Person2# thinks #Person2#’s experience of learning Spanish is similar to #Person1#’s. | #Person1#: Which language do you speak?n#Person2#: Well, I studied French at school, but I’ve forgotten most of it. I learnt some Russian while I was studying there and now I study Spanish in evening classes. Do you speak Japanese at home with your parents.n#Person1#: Yes, I do. I’m learning Chinese now on the internet.n#Person2#: How do you find studying a language on the internet?n#Person1#: It’s not easy, particularly with listening and speaking, because sometimes the connection isn’t too good.n#Person2#: I bet you find the character difficult to write!n#Person1#: Actually, many of the characters are the same or very similar in both Chinese and Japanese, so they’re easy for me.n#Person2#: So, you can read and write much better than you speak and understand other speakers, right?n#Person1#: Yes. How are the Spanish classes going?n#Person2#: In a way, it’s similar to your experience of learning Chinese. Some Spanish words are very similar to English ones. But I need to practise listening to native speakers and saying the words myself.n | #Person1# and #Person2# are talking about the languages they speak. #Person1# is learning Chinese on the internet and #Person2# is learning Spanish. They both find it difficult to learn the language. |
Results for fine-tuning the Meta Llama 3 70B text generation model on the Dolly dataset
Inputs | Ground Truth | Response from Non-Fine-Tuned Model | Response from Fine-Tuned Model |
Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.nn### Instruction:nCan you give me a short summary of the follow paragraph about the history of electronic music?nn### Input:nDuring the 1960s, digital computer music was pioneered, innovation in live electronics took place, and Japanese electronic musical instruments began to influence the music industry. In the early 1970s, Moog synthesizers and drum machines helped popularize synthesized electronic music. The 1970s also saw electronic music begin to have a significant influence on popular music, with the adoption of polyphonic synthesizers, electronic drums, drum machines, and turntables, through the emergence of genres such as disco, krautrock, new wave, synth-pop, hip hop, and EDM. In the early 1980s mass-produced digital synthesizers, such as the Yamaha DX7, became popular, and MIDI (Musical Instrument Digital Interface) was developed. In the same decade, with a greater reliance on synthesizers and the adoption of programmable drum machines, electronic popular music came to the fore. During the 1990s, with the proliferation of increasingly affordable music technology, electronic music production became an established part of popular culture. In Berlin starting in 1989, the Love Parade became the largest street party with over 1 million visitors, inspiring other such popular celebrations of electronic music.nnnn### Response:n | With the advent of new electronic instruments in the 60s and 70s, namely synthesizers and drum machines, electronic music quickly gained mainstream popularity through genres like disco and hip hop. As the technology progressed through the 80s and 90s, more affordable instruments led electronic music to be even more widely embraced, cementing its status in pop culture and leading to the rise of large electronic music events around the globe. | The history of electronic music can be traced back to the 1960s, when digital computer music was pioneered, innovation in live electronics took place, and Japanese electronic musical instruments began to influence the music industry. In the early 1970s, Moog synthesizers and drum machines helped popularize synthesized electronic music. The 1970s also saw electronic music begin to have a significant influence on popular music, with the adoption of polyphonic synthesizers, electronic drums, drum machines, and | Electronic music has been around since the 1960s. In the 1970s, Moog synthesizers and drum machines helped popularize synthesized electronic music. In the 1980s, digital synthesizers became popular and MIDI was developed. In the 1990s, electronic music production became an established part of popular culture. |
Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.nn### Instruction:nWho was Nike in Greek Mythology?nn### Input:nIn Greek mythology, Nike (/ˈnaɪki/ (listen); Ancient Greek: Νίκη, lit. ‘victory’, ancient: [nǐː.kɛː], modern: [ˈni.ci]) was a goddess who personified victory in any field including art, music, war, and athletics. She is often portrayed in Greek art as Winged Victory in the motion of flight; however, she can also appear without wings as “Wingless Victory” when she is being portrayed as an attribute of another deity such as Athena. In Greek literature Nike is described as both an attribute and attendant to the gods Zeus and Athena. Nike gained this honored role beside Zeus during the Titanomachy where she was one of the first gods to offer her allegiance to Zeus. At Athens, Nike became a servant to Athena as well as an attribute of her due to the prominent status Athena held in her patron city. The fusion of the two goddesses at Athens has contributed to the ambiguity surrounding Nike’s origins. It is unclear whether she originated from a character trait of the Greek goddess Athena or has always existed as an independent deity. Her origin story in Greek mythology is also slightly ambiguous, with the Theogony claiming Nike to be the daughter of Styx and Pallas while the Homeric Hymns describe Ares as being Nike’s father. Her Roman equivalent was Victoria.nnnn### Response:n | Nike is the god of ‘victory’ in Greek mythology, and is often portrayed in art Winged Victory in the motion of flight. | Nike was the Greek goddess of victory. She was the daughter of the Titan Pallas and the goddess Styx, and the sister of Kratos (Strength), Bia (Force), and Zelus (Rivalry). She was depicted as a winged goddess, and was often shown holding a wreath of victory or a palm branch. She was also associated with the goddess Athena, and was often depicted as her companion. Nike was worshipped throughout Greece, and her cult was particularly strong | Nike was the Greek goddess of victory. She was often portrayed in Greek art as Winged Victory in the motion of flight. Nike gained this honored role beside Zeus during the Titanomachy where she was one of the first gods to offer her allegiance to Zeus. At Athens, Nike became a servant to Athena as well as an attribute of her due to the prominent status Athena held in her patron city. |
Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.nn### Instruction:nGive me some details about the MV Cape Race pleasenn### Input:nThe MV Cape Race (T-AKR-9960) was originally built as the MV Seaspeed America for the Transocean RO-RO Corp. She was sold to various other corporations until she was purchased by the United States Navy in April 1993. She underwent conversion to fit a more active role in and serve a military need better and was placed back in service April 1993. She has served on multiple occasions but is currently in the US Ready Reserve Fleet.nnnn### Response:n | The MV Cape Race (T-AKR-9960) was originally built as the MV Seaspeed America for the Transocean RO-RO Corp. She was sold to various other corporations until she was purchased by the United States Navy in April 1993. nnShe underwent conversion to fit a more active role in and serve a military need better and was placed back in service April 1993. She has served on multiple occasions but is currently in the US Ready Reserve Fleet. | The MV Cape Race (T-AKR-9960) was originally built as the MV Seaspeed America for the Transocean RO-RO Corp. She was sold to various other corporations until she was purchased by the United States Navy in April 1993. She underwent conversion to fit a more active role in and serve a military need better and was placed back in service April 1993. She has served on multiple occasions but is currently in the US Ready Reserve Fleet. | The MV Cape Race (T-AKR-9960) was originally built as the MV Seaspeed America for the Transocean RO-RO Corp. She was sold to various other corporations until she was purchased by the United States Navy in April 1993. She underwent conversion to fit a more active role in and serve a military need better and was placed back in service April 1993. She has served on multiple occasions but is currently in the US Ready Reserve Fleet. |
Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.nn### Instruction:nWhat is sweep rowing?nn### Input:nRowing, sometimes called crew in the United States, is the sport of racing boats using oars. It differs from paddling sports in that rowing oars are attached to the boat using oarlocks, while paddles are not connected to the boat. Rowing is divided into two disciplines: sculling and sweep rowing. In sculling, each rower holds two oars—one in each hand, while in sweep rowing each rower holds one oar with both hands. There are several boat classes in which athletes may compete, ranging from single sculls, occupied by one person, to shells with eight rowers and a coxswain, called eights. There are a wide variety of course types and formats of racing, but most elite and championship level racing is conducted on calm water courses 2 kilometres (1.2 mi) long with several lanes marked using buoys.nnnn### Response:n | Sweep rowing is a rowing discipline in which rowers hold one oar with both hands. | Sweep rowing is a type of rowing where each rower holds one oar with both hands. There are several boat classes in which athletes may compete, ranging from single sculls, occupied by one person, to shells with eight rowers and a coxswain, called eights. There are a wide variety of course types and formats of racing, but most elite and championship level racing is conducted on calm water courses 2 kilometres (1.2 mi) long with several lanes | Sweep rowing is a type of rowing where each rower holds one oar with both hands. |
Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.nn### Instruction:nWhat is the Pandemis corylanann### Input:nPandemis corylana, the chequered fruit-tree tortrix, hazel tortrix moth, filbert tortricid or barred fruit tree moth, is a moth of the family Tortricidae. It is found from northern and central Europe to Siberia, Korea and Japan.nnnn### Response:n | The Pandemis corylana, the chequered fruit-tree tortrix, hazel tortrix moth, filbert tortricid or barred fruit tree moth, is a moth of the family Tortricidae. It is found from northern and central Europe to Siberia, Korea and Japan. | Pandemis corylana, the chequered fruit-tree tortrix, hazel tortrix moth, filbert tortricid or barred fruit tree moth, is a moth of the family Tortricidae. It is found from northern and central Europe to Siberia, Korea and Japan. | The Pandemis corylana, also known as the chequered fruit-tree tortrix, hazel tortrix moth, filbert tortricid or barred fruit tree moth, is a moth of the family Tortricidae. It is found from northern and central Europe to Siberia, Korea and Japan. |
Results for fine-tuning the Meta Llama 3 70B text generation model on the Dialogsum dataset
Inputs | Ground Truth | Response from Non-Fine-Tuned Model | Response from Fine-Tuned Model |
Below is a Instruction that holds conversation which describes discussion between two people.Write a response that appropriately summarizes the conversation.nn### Instruction:n#Person1#: Would you like some more coffee, James?n#Person2#: Oh, yes, please. Another day’s work. I like the job, but I hate traveling around London.n#Person1#: Oh, come on, James. How do you get to work?n#Person2#: I walk to the station and take the underground. And sometimes I take the taxi if I get up late.n#Person1#: How long does the journey take?n#Person2#: It takes about 45 minutes on a good day. It’s just awful.n#Person1#: Well, how much does it cost?n#Person2#: It’s twenty pounds a week. I think that’s quite a lot.n#Person1#: Oh, James. Stop complaining. And hurry up. You’ll be late.nnnn### Response:n | James likes the job but hates the commute because it takes a long time and costs a lot. | #Person1#: Would you like some more coffee, James?n#Person2#: Oh, yes, please. Another day’s work. I like the job, but I hate traveling around London.n#Person1#: Oh, come on, James. How do you get to work?n#Person2#: I walk to the station and take the underground. And sometimes I take the taxi if I get up late.n#Person1#: How long does the journey take?n#Person2#: It takes about 45 minutes on a good day. It’s just awful.n#Person1#: Well, how much does it cost?n#Person2#: It’s twenty pounds a week. I think that’s quite a lot.n#Person1#: Oh, James. Stop complaining. And hurry up. You’ll be late.nn#Person1# asked #Person2# if he would like some more coffee. #Person2# said yes and that he liked his job but hated traveling around London. #Person1# asked how he got to work. #Person2# said he walked to the station and took the underground, and sometimes took a taxi if he got up late. #Person1# asked how long the journey took. #Person2# said it took about 45 minutes on a good day and that it was awful. #Person1# asked how much it cost. #Person2# said it was twenty pounds a week and that he thought that was quite a lot | James complains to #Person1# about the traveling around London. #Person1# asks him to hurry up. |
Below is a Instruction that holds conversation which describes discussion between two people.Write a response that appropriately summarizes the conversation.nn### Instruction:n#Person1#: What is so special about tomorrow?n#Person2#: Tomorrow is October 24th, United Nations Day.n#Person1#: Why is October 24th United Nations Day?n#Person2#: That’s the date in 1945 when the United Nations began.n#Person1#: How does your school celebrate United Nations Day?n#Person2#: The UN Day celebration includes the songs and dances of different countries in the United Nations and a traditional dress parade. Our school library displays students’ works of art from different countries and cultures. We also hold a ‘Tastes from Around the World’ food festival.n#Person1#: What’s the purpose of the celebration?n#Person2#: To help us students learn about other cultures and understand and respect other cultures.n#Person1#: It is really worthwhile.nnnn### Response:n | #Person1# and #Person2# talk about the United Nations Day. #Person2# tells #Person1# how the school will celebrate the day and the purpose of it. | #Person1#: What is so special about tomorrow?n#Person2#: Tomorrow is October 24th, United Nations Day.n#Person1#: Why is October 24th United Nations Day?n#Person2#: That’s the date in 1945 when the United Nations began.n#Person1#: How does your school celebrate United Nations Day?n#Person2#: The UN Day celebration includes the songs and dances of different countries in the United Nations and a traditional dress parade. Our school library displays students’ works of art from different countries and cultures. We also hold a ‘Tastes from Around the World’ food festival.n#Person1#: What’s the purpose of the celebration?n#Person2#: To help us students learn about other cultures and understand and respect other cultures.n#Person1#: It is really worthwhile.n | #Person2# tells #Person1# United Nations Day is October 24th. #Person2#’s school celebrates it to help students understand and respect other cultures. |
Below is a Instruction that holds conversation which describes discussion between two people.Write a response that appropriately summarizes the conversation.nn### Instruction:n#Person1#: Have you heard about Alice and David?n#Person2#: No, have they had another fight?n#Person1#: No, they’ve got engaged.n#Person2#: You must be joking. Those two?n#Person1#: Well, my dear, I didn’t believe it either. But I got it straight from the horse’s mouth. David called me this morning.n#Person2#: So when did this all happen?n#Person1#: Last weekend, while they were on their ski trip.n#Person2#: Well, I believe it now, and when are they are getting married?n#Person1#: Next June.n#Person2#: I can hardly believe it.nnnn### Response:n | #Person1# and #Person2# can’t believe that Alice and David, who fought with each other, got engaged. | #Person1# told #Person2# that Alice and David had got engaged. #Person2# was surprised and asked when it had happened. #Person1# said that it had happened last weekend, while they were on their ski trip. #Person2# asked when they were getting married. #Person1# said that they were getting married next June. | #Person1# tells #Person2# that Alice and David have got engaged. #Person2# is surprised. |
Below is a Instruction that holds conversation which describes discussion between two people.Write a response that appropriately summarizes the conversation.nn### Instruction:n#Person1#: We did a survey on your job performance.n#Person2#: How did I do?n#Person1#: Not so good. Lots of complains.n#Person2#: Really? I thought I was doing well.n#Person1#: Your performance so far is not quite in line with what we expect for our executives. I hope I’Ve made my point.nnnn### Response:n | #Person1# tells #Person2# #Person2#’s job performance isn’t satisfying, despite #Person2#’s opposite opinion. | #Person1# told #Person2# that they did a survey on their job performance. #Person2# asked how they did, and #Person1# said that they did not do so good and that there were lots of complains. #Person2# was surprised and said that they thought they were doing well. #Person1# said that their performance so far was not quite in line with what they expect for their executives, and they hoped that they had made their point. | #Person1# tells #Person2# that #Person2#’s job performance is not good. |
Below is a Instruction that holds conversation which describes discussion between two people.Write a response that appropriately summarizes the conversation.nn### Instruction:n#Person1#: Hi, Francis, morning! How are you doing today?n#Person2#: I had a bad start of the day. I was stuck in a bad traffic jam on my way and I missed my morning meeting.n#Person1#: Indeed, I agree that is not a good start. How about the meeting? You missed anything important?n#Person2#: It is ok. My boss briefs me later on.n#Person1#: Was he unhappy with you not attending the meeting?n#Person2#: Err, kind of. You know this traffic jam is really bad with so many new cars hitting the streets these days.nnnn### Response:n | Francis was stuck in a traffic jam and missed the morning meeting. His boss briefs him. | #Person1#: Hi, Francis, morning! How are you doing today?n#Person2#: I had a bad start of the day. I was stuck in a bad traffic jam on my way and I missed my morning meeting.n#Person1#: Indeed, I agree that is not a good start. How about the meeting? You missed anything important?n#Person2#: It is ok. My boss briefs me later on.n#Person1#: Was he unhappy with you not attending the meeting?n#Person2#: Err, kind of. You know this traffic jam is really bad with so many new cars hitting the streets these days.nn#Person1# asked #Person2# how he was doing today. #Person2# said he had a bad start of the day because he was stuck in a bad traffic jam on his way and he missed his morning meeting. #Person1# agreed that it was not a good start and asked if #Person2# missed anything important in the meeting. #Person2# said it was ok because his boss would brief him later on. #Person1# asked if his boss was unhappy with him not attending the meeting. #Person2# said he was kind of unhappy because the traffic jam was really bad with so many new cars hitting the streets these days. | Francis tells #Person1# he missed the meeting because of the traffic jam. |
Dataset formatting
We currently offer two types of fine-tuning: instruction fine-tuning and domain adaption fine-tuning. You can switch to one of the training methods by specifying the parameter instruction_tuned as True or False.
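As a minimal sketch (not code from this post), the following shows how the instruction_tuned hyperparameter might be set when fine-tuning through the SageMaker Python SDK; the model ID, instance type, S3 path, and hyperparameter values are illustrative assumptions:

```python
# Minimal sketch: choose instruction fine-tuning vs. domain adaption fine-tuning
# via the instruction_tuned hyperparameter. Model ID, instance type, and S3 path
# are illustrative assumptions, not values prescribed by this post.
from sagemaker.jumpstart.estimator import JumpStartEstimator

estimator = JumpStartEstimator(
    model_id="meta-textgeneration-llama-3-70b",  # assumed JumpStart model ID
    environment={"accept_eula": "true"},         # accept the model EULA
    instance_type="ml.g5.48xlarge",              # illustrative training instance
)

# "True" selects instruction fine-tuning; "False" selects domain adaption fine-tuning.
estimator.set_hyperparameters(instruction_tuned="True", epoch="3")

# The training channel points to the folder holding the dataset files described below.
estimator.fit({"training": "s3://your-bucket/path/to/training-data/"})
```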
Domain adaption format
The text generation model can be fine-tuned on any domain-specific dataset to incorporate domain-specific knowledge and language patterns. After fine-tuning on the domain-specific dataset, the model is expected to generate more relevant and accurate text within that domain. Although few-shot prompting can also guide the model towards domain-specific generation, the fine-tuning process plays a crucial role in adapting the model’s understanding and generation capabilities to the target domain. The combination of fine-tuning on domain data and effective prompting techniques can enable the model to perform various NLP tasks within that specific domain more effectively.
For input to the model, use a training and optional validation directory. Each directory contains a CSV, JSON, or TXT file. For CSV and JSON files, the train or validation data is used from the column called text, or from the first column if no column called text is found. The number of files under train and validation (if provided) should each equal 1.
The output is a trained model that can be deployed for inference.
The following is an example of a TXT file for fine-tuning the text generation model. The TXT file is SEC filings of Amazon from 2021–2022:
Instruction fine-tuning
The text generation model can be instruction-tuned on any text data provided that the data is in the expected format. The instruction-tuned model can be further deployed for inference.
For input, use a training and optional validation directory. The train and validation directories should contain one or multiple JSON lines (.jsonl) formatted files. In particular, the train directory can also contain an optional *.json file describing the input and output formats.
The best model is selected according to the validation loss, calculated at the end of each epoch. If a validation set is not given, an (adjustable) percentage of the training data is automatically split and used for validation.
The training data must be formatted in a JSON lines (.jsonl) format, where each line is a dictionary representing a single data sample. All training data must be in a single folder; however, it can be saved in multiple .jsonl files. The .jsonl file extension is mandatory. The training folder can also contain a template.json file describing the input and output formats. If no template file is given, the following template will be used:
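A minimal default template consistent with this description (reconstructed here for illustration; the exact default applied by the training container may differ) maps the prompt and completion fields directly:

```json
{
    "prompt": "{prompt}",
    "completion": "{completion}"
}
```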
In this case, the data in the JSON lines entries must include prompt and completion fields. If a custom template is provided, it must also use prompt and completion keys to define the input and output templates. The following is a sample custom template:
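The original sample is reconstructed below for illustration; a template of this shape, built around the question, context, and answer fields described next, is one possible form:

```json
{
    "prompt": "question: {question} context: {context}",
    "completion": "{answer}"
}
```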
Here, the data in the JSON lines entries must include the question, context, and answer fields.
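For illustration, a single (hypothetical) line in one of the .jsonl training files under this custom template could look like the following:

```json
{"question": "Who founded Amazon?", "context": "Amazon.com, Inc. was founded by Jeff Bezos in 1994 in Bellevue, Washington.", "answer": "Amazon was founded by Jeff Bezos."}
```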
The output is a trained model that can be deployed for inference.
We provide a subset of SEC filings data for Amazon, downloaded from the publicly available EDGAR system. For instructions on accessing the data, refer to Accessing EDGAR Data.
License: Creative Commons Attribution-ShareAlike License (CC BY-SA 4.0)
Ground truth curation and metric interpretation best practices for evaluating generative AI question answering using FMEval
Generative artificial intelligence (AI) applications powered by large language models (LLMs) are rapidly gaining traction for question answering use cases. From internal knowledge bases for customer support to external conversational AI assistants, these applications use LLMs to provide human-like responses to natural language queries. However, building and deploying such assistants with responsible AI best practices requires a robust ground truth and evaluation framework to make sure they meet quality standards and user experience expectations, as well as clear evaluation interpretation guidelines to make the quality and responsibility of these systems intelligible to business decision-makers.
This post focuses on evaluating and interpreting metrics using FMEval for question answering in a generative AI application. FMEval is a comprehensive evaluation suite from Amazon SageMaker Clarify, providing standardized implementations of metrics to assess quality and responsibility. To learn more about FMEval, refer to Evaluate large language models for quality and responsibility.
In this post, we discuss best practices for working with FMEval in ground truth curation and metric interpretation for evaluating question answering applications for factual knowledge and quality. Ground truth data in AI refers to data that is known to be true, representing the expected outcome for the system being modeled. By providing a true expected outcome to measure against, ground truth data unlocks the ability to deterministically evaluate system quality. Ground truth curation and metric interpretation are tightly coupled, and the implementation of the evaluation metric must inform ground truth curation to achieve best results. By following these guidelines, data scientists can quantify the user experience delivered by their generative AI pipelines and communicate meaning to business stakeholders, facilitating ready comparisons across different architectures, such as Retrieval Augmented Generation (RAG) pipelines, off-the-shelf or fine-tuned LLMs, or agentic solutions.
Solution overview
We use an example ground truth dataset (referred to as the golden dataset, shown in the following table) of 10 question-answer-fact triplets. Each triplet contains a fact and an encapsulation of that fact as a question-answer pair that emulates an ideal response, derived from a knowledge source document. We used Amazon’s Q2 2023 10Q report from the SEC’s public EDGAR dataset as the source document to create the triplets. The 10Q report contains details on company financials and operations over the Q2 2023 business quarter. The golden dataset applies the ground truth curation best practices discussed in this post for most questions, but not all, to demonstrate the downstream impact of ground truth curation on metric results.
Question | Answer | Fact |
Who is Andrew R. Jassy? | Andrew R. Jassy is the President and Chief Executive Officer of Amazon.com, Inc. | Chief Executive Officer of Amazon<OR>CEO of Amazon<OR>President of Amazon |
What were Amazon’s total net sales for the second quarter of 2023? | Amazon’s total net sales for the second quarter of 2023 were $134.4 billion. | 134.4 billion<OR>134,383 million<OR>134183 million<OR>134.383 billion |
Where is Amazon’s principal office located? | Amazon’s principal office is located at 410 Terry Avenue North, Seattle, Washington 98109-5210. | 410 Terry Avenue North |
What was Amazon’s operating income for the six months ended June 30, 2023? | Amazon’s operating income for the six months ended June 30, 2023 was $12.5 billion. | 12.5 billion<OR>12,455 million<OR>12.455 billion |
When did Amazon acquire One Medical? | Amazon acquired One Medical on February 22, 2023 for cash consideration of approximately $3.5 billion, net of cash acquired. | Feb 22 2023<OR>February 22nd 2023<OR>2023-02-22<OR>February 22, 2023 |
What was a key challenge faced by Amazon’s business in the second quarter of 2023? | Changes in foreign exchange rates reduced Amazon’s International segment net sales by $180 million for Q2 2023. | foreign exchange rates |
What was Amazon’s total cash, cash equivalents and restricted cash as of June 30, 2023? | Amazon’s total cash, cash equivalents, and restricted cash as of June 30, 2023 was $50.1 billion. | 50.1 billion<OR>50,067 million<OR>50.067 billion |
What were Amazon’s AWS sales for the second quarter of 2023? | Amazon’s AWS sales for the second quarter of 2023 were $22.1 billion. | 22.1 billion<OR>22,140 million<OR>22.140 billion<OR>22140 million |
As of June 30, 2023, how many shares of Rivian’s Class A common stock did Amazon hold? | As of June 30, 2023, Amazon held 158 million shares of Rivian’s Class A common stock. | 158 million |
How many shares of common stock were outstanding as of July 21, 2023? | There were 10,317,750,796 shares of Amazon’s common stock outstanding as of July 21, 2023. | 10317750796<OR>10,317,750,796 |
We generated responses from three generative AI RAG pipelines (anonymized as Pipeline1, Pipeline2, Pipeline3, as shown in the following figure) and calculated factual knowledge and QA accuracy metrics, evaluating them against the golden dataset. The fact key of the triplet is used for the Factual Knowledge metric ground truth, and the answer key is used for the QA Accuracy metric ground truth. With this, factual knowledge is measured against the fact key, and the ideal user experience in terms of style and conciseness is measured against the question-answer pairs.
Evaluation for question answering in a generative AI application
A generative AI pipeline can have many subcomponents, such as a RAG pipeline. RAG is a methodology to improve the accuracy of LLM responses answering a user query by retrieving and inserting relevant domain knowledge into the language model prompt. RAG quality depends on the configurations of the retriever (chunking, indexing) and generator (LLM selection and hyperparameters, prompt), as illustrated in the following figure. Tuning chunking and indexing in the retriever makes sure the correct content is available in the LLM prompt for generation. The chunk size and chunk splitting method, as well as the means of embedding and ranking relevant document chunks as vectors in the knowledge store, impacts whether the actual answer to the query is ultimately inserted in the prompt. In the generator, selecting an appropriate LLM to run the prompt, and tuning its hyperparameters and prompt template, all control how the retrieved information is interpreted for the response. With this, when a final response from a RAG pipeline is evaluated, the preceding components may be adjusted to improve response quality.
Alternatively, question answering can be powered by a fine-tuned LLM, or through an agentic approach. Although we demonstrate the evaluation of final responses from RAG pipelines, the final responses from a generative AI pipeline for question answering can be similarly evaluated because the prerequisites are a golden dataset and the generative answers. With this approach, changes in the generative output due to different generative AI pipeline architectures can be evaluated to inform the best design choices (comparing RAG and knowledge retrieval agents, comparing LLMs used for generation, retrievers, chunking, prompts, and so on).
Although evaluating each sub-component of a generative AI pipeline is important in development and troubleshooting, business decisions rely on having an end-to-end, side-by-side data view, quantifying how a given generative AI pipeline will perform in terms of user experience. With this, business stakeholders can understand expected quality changes in terms of end-user experience by switching LLMs, and adhere to legal and compliance requirements, such as ISO42001 AI Ethics. There are further financial benefits to realize; for example, quantifying expected quality changes on internal datasets when switching a development LLM to a cheaper, lightweight LLM in production. The overall evaluation process for the benefit of decision-makers is outlined in the following figure. In this post, we focus our discussion on ground truth curation, evaluation, and interpreting evaluation scores for entire question answering generative AI pipelines using FMEval to enable data-driven decision-making on quality.
A useful mental model for ground truth curation and improvement of a golden dataset is a flywheel, as shown in the following figure. The ground truth experimentation process involves querying your generative AI pipeline with the initial golden dataset questions and evaluating the responses against initial golden answers using FMEval. Then, the quality of the golden dataset must be reviewed by a judge. The judge review of the golden dataset quality accelerates the flywheel towards an ever-improving golden dataset. The judge role in the workflow can be assumed by another LLM to enable scaling against established, domain-specific criteria for high-quality ground truth. Maintaining a human-in-the-loop component to the judge function remains essential to sample and verify results, as well as to increase the quality bar with increasing task complexity. Improvement to the golden dataset fosters improvement to the quality of the evaluation metrics, until sufficient measurement accuracy in the flywheel is met by the judge, using the established criteria for quality. To learn more about AWS offerings on human review of generations and data labeling, such as Amazon Augmented AI (Amazon A2I) and Amazon SageMaker Ground Truth Plus, refer to Using Amazon Augmented AI for Human Review and High-quality human feedback for your generative AI applications from Amazon SageMaker Ground Truth Plus. When using LLMs as a judge, make sure to apply prompt safety best practices.
However, to conduct reviews of golden dataset quality as part of the ground truth experiment flywheel, human reviewers must understand the evaluation metric implementation and its coupling to ground truth curation.
FMEval metrics for question answering in a generative AI application
The Factual Knowledge and QA Accuracy metrics from FMEval provide a way to evaluate custom question answering datasets against ground truth. For a full list of metrics implemented with FMEval, refer to Using prompt datasets and available evaluation dimensions in model evaluation jobs.
Factual Knowledge
The Factual Knowledge metric evaluates whether the generated response contains factual information present in the ground truth answer. It is a binary (0 or 1) score based on a string match. Factual Knowledge also reports a quasi-exact string match, which performs matching after normalization. For simplicity, we focus on the exact match Factual Knowledge score in this post.
For each golden question:
- 0 indicates the lowercased factual ground truth is not present in the model response
- 1 indicates the lowercased factual ground truth is present in the response
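As a minimal sketch of computing this score with the fmeval package (assuming a recent fmeval release; the model response string is hypothetical):

```python
# Minimal sketch, assuming the open-source fmeval package; the model response is hypothetical.
from fmeval.eval_algorithms.factual_knowledge import FactualKnowledge, FactualKnowledgeConfig

# The <OR> delimiter lets any listed fact variant count as a match.
fk = FactualKnowledge(FactualKnowledgeConfig(target_output_delimiter="<OR>"))

scores = fk.evaluate_sample(
    target_output="134.4 billion<OR>134,383 million<OR>134.383 billion",
    model_output="The total consolidated net sales for Q2 2023 were $134,383 million.",
)
print(scores)  # list of EvalScore objects; 1 means the fact was found, 0 means it was not
```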
QA Accuracy
The QA Accuracy metric measures a model’s question answering accuracy by comparing its generated answers against ground truth answers. The metrics are computed from true positive, false positive, and false negative word matches between the QA ground truth answers and the generated answers.
It includes several sub-metrics:
- Recall Over Words – Scores from 0 (worst) to 1 (best), measuring how much of the QA ground truth is contained in the model output
- Precision Over Words – Scores from 0 (worst) to 1 (best), measuring how many words in the model output match the QA ground truth
- F1 Over Words – The harmonic mean of precision and recall, providing a balanced score from 0 to 1
- Exact Match – Binary 0 or 1, indicating if the model output exactly matches the QA ground truth
- Quasi Exact Match – Similar to Exact Match, but with normalization (lowercasing and removing articles)
Because QA Accuracy metrics are calculated on an exact match basis (for more details, see Accuracy), they may be less reliable for questions where the answer can be rephrased without modifying its meaning. To mitigate this, we propose applying Factual Knowledge as the assessment of factual correctness, motivating the use of a dedicated factual ground truth with minimal word expression, together with QA Accuracy as a measure of idealized user experience in terms of response verbosity and style. We elaborate on these concepts later in this post. The BERTScore is also computed as part of QA Accuracy, which provides a measure of semantic match quality against the ground truth.
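A minimal sketch of computing these sub-metrics for a single sample with fmeval (assuming a recent fmeval release; the model response string is hypothetical) follows:

```python
# Minimal sketch, assuming the open-source fmeval package; the model response is hypothetical.
from fmeval.eval_algorithms.qa_accuracy import QAAccuracy, QAAccuracyConfig

qa = QAAccuracy(QAAccuracyConfig(target_output_delimiter="<OR>"))

scores = qa.evaluate_sample(
    target_output="There were 10,317,750,796 shares of Amazon's common stock outstanding as of July 21, 2023.",
    model_output="As of July 21, 2023, there were 10,317,750,796 shares of common stock outstanding.",
)
for score in scores:
    # Expect entries such as recall_over_words, precision_over_words, f1_score,
    # exact_match_score, quasi_exact_match_score, and bert_score.
    print(score.name, score.value)
```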
Proposed ground truth curation best practices for question answering with FMEval
In this section, we share best practices for curating your ground truth for question answering with FMEval.
Understanding the Factual Knowledge metric calculation
A factual knowledge score is a binary measure of whether a real-world fact was correctly retrieved by the generative AI pipeline. 0 indicates the lowercased expected answer is not part of the model response, whereas 1 indicates it is. Where there is more than one acceptable answer and any of them is considered correct, apply a logical OR operator. A logical AND can also be configured for cases where the factual material encompasses multiple distinct entities. In the present examples, we demonstrate a logical OR, using the <OR> delimiter. See Use SageMaker Clarify to evaluate large language models for information about logical operators. An example curation of a golden question and golden fact is shown in the following table.
Golden Question | “How many shares of common stock were outstanding as of July 21, 2023?” |
Golden Fact | 10,317,750,796<OR>10317750796 |
Fact detection is useful for assessing hallucination in a generative AI pipeline. The two sample responses in the following table illustrate fact detection. The first example correctly states the fact in the example response, and receives a 1.0 score. The second example hallucinates a number instead of stating the fact, and receives a 0 score.
Metric | Example Response | Score | Calculation Approach |
Factual Knowledge | “Based on the documents provided, Amazon had 10,317,750,796 shares of common stock outstanding as of July 21, 2023.” | 1.0 | String match to golden fact |
| “Based on the documents provided, Amazon had 22,003,237,746 shares of common stock outstanding as of July 21, 2023.” | 0.0 | |
In the following example, we highlight the importance of units in ground truth for Factual Knowledge string matching. The golden question and golden fact represent Amazon’s total net sales for the second quarter of 2023.
Golden Question | “What were Amazon’s total net sales for the second quarter of 2023?” |
Golden Fact | 134.4 billion<OR>134,383 million |
The first response hallucinates the fact, using units of billions, and correctly receives a score of 0.0. The second response correctly represents the fact, in units of millions. Both units should be represented in the golden fact. The third response was unable to answer the question, flagging a potential issue with the information retrieval step.
Metric | Example Response | Score | Calculation Approach |
Factual Knowledge | Amazon’s total net sales for the second quarter of 2023 were $170.0 billion. | 0.0 | String match to golden fact |
| The total consolidated net sales for Q2 2023 were $134,383 million according to this report. | 1.0 | |
| Sorry, the provided context does not include any information about Amazon’s total net sales for the second quarter of 2023. Would you like to ask another question? | 0.0 | |
Interpreting Factual Knowledge scores
Factual Knowledge scores are a useful flag for challenges in the generative AI pipeline, such as hallucination or information retrieval problems. They can be compiled into a Factual Knowledge Report for human review, as shown in the following table, to visualize pipeline quality in terms of fact detection side by side.
User Question | QA Ground Truth | Factual Ground Truth | Pipeline 1 | Pipeline 2 | Pipeline 3 |
As of June 30, 2023, how many shares of Rivian’s Class A common stock did Amazon hold? | As of June 30, 2023, Amazon held 158 million shares of Rivian’s Class A common stock. | 158 million | 1 | 1 | 1 |
How many shares of common stock were outstanding as of July 21, 2023? | There were 10,317,750,796 shares of Amazon’s common stock outstanding as of July 21, 2023. | 10317750796<OR>10,317,750,796 | 1 | 1 | 1 |
What was Amazon’s operating income for the six months ended June 30, 2023? | Amazon’s operating income for the six months ended June 30, 2023 was $12.5 billion. | 12.5 billion<OR>12,455 million<OR>12.455 billion | 1 | 1 | 1 |
What was Amazon’s total cash, cash equivalents and restricted cash as of June 30, 2023? | Amazon’s total cash, cash equivalents, and restricted cash as of June 30, 2023 was $50.1 billion. | 50.1 billion<OR>50,067 million<OR>50.067 billion | 1 | 0 | 0 |
What was a key challenge faced by Amazon’s business in the second quarter of 2023? | Changes in foreign exchange rates reduced Amazon’s International segment net sales by $180 million for Q2 2023. | foreign exchange rates | 0 | 0 | 0 |
What were Amazon’s AWS sales for the second quarter of 2023? | Amazon’s AWS sales for the second quarter of 2023 were $22.1 billion. | 22.1 billion<OR>22,140 million<OR>22.140 billion<OR>22140 million | 1 | 0 | 0 |
What were Amazon’s total net sales for the second quarter of 2023? | Amazon’s total net sales for the second quarter of 2023 were $134.4 billion. | 134.4 billion<OR>134,383 million<OR>134183 million<OR>134.383 billion | 1 | 0 | 0 |
When did Amazon acquire One Medical? | Amazon acquired One Medical on February 22, 2023 for cash consideration of approximately $3.5 billion, net of cash acquired. | Feb 22 2023<OR>February 22nd 2023<OR>2023-02-22<OR>February 22, 2023 | 1 | 0 | 1 |
Where is Amazon’s principal office located? | Amazon’s principal office is located at 410 Terry Avenue North, Seattle, Washington 98109-5210. | 410 Terry Avenue North | 0 | 0 | 0 |
Who is Andrew R. Jassy? | Andrew R. Jassy is the President and Chief Executive Officer of Amazon.com, Inc. | Chief Executive Officer of Amazon<OR>CEO of Amazon<OR>President of Amazon | 1 | 1 | 1 |
Curating Factual Knowledge ground truth
Consider the impact of string matching between your ground truth and LLM responses when curating ground truth for Factual Knowledge. The following best practices account for string matching:
- Use a minimal version of the QA Accuracy ground truth for a factual ground truth containing the most important facts – Because the Factual Knowledge metric uses exact string matching, curating minimal ground truth facts distinct from the QA Accuracy ground truth is imperative. Using the QA Accuracy ground truth will not yield a string match unless the response is identical to the ground truth. Apply logical operators as best suits your facts.
- Zero factual knowledge scores across the benchmark can indicate a poorly formed golden question-answer-fact triplet – If a golden question doesn’t contain an obvious singular answer, or can be equivalently interpreted multiple ways, reframe the golden question or answer to be specific. In the Factual Knowledge table, a question such as “What was a key challenge faced by Amazon’s business in the second quarter of 2023?” can be subjective, and interpreted with multiple possible acceptable answers. Factual Knowledge scores were 0.0 for all entries because each LLM interpreted a unique answer. A better question would be: “How much did foreign exchange rates reduce Amazon’s International segment net sales?” Similarly, “Where is Amazon’s principal office located?” renders multiple acceptable answers, such as “Seattle,” “Seattle, Washington,” or the street address. The question could be reframed as “What is the street address of Amazon’s principal office?” if this is the desired response.
- Generate many variations of fact representation in terms of units and punctuation – Different LLMs will use different language to present facts (date formats, engineering units, financial units, and so on). The factual ground truth should accommodate the units expected from the LLMs being evaluated as part of the pipeline. Experimenting with LLMs to automate fact generation from the QA ground truth can help; a simple programmatic approach is sketched after this list.
- Avoid false positive matches – Avoid curating ground truth facts that are overly simple. Short, unpunctuated number sequences, for example, can be matched with years, dates, or phone numbers and can generate false positives.
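As a simple illustration of the unit-variant idea (a sketch, independent of FMEval itself), the following expands a dollar amount reported in millions into several equivalent surface forms joined by the <OR> delimiter:

```python
# Sketch: expand a dollar amount reported in millions into several equivalent
# surface forms, joined with the <OR> delimiter used by the Factual Knowledge metric.
def fact_variants(amount_in_millions: int) -> str:
    billions = amount_in_millions / 1000
    variants = [
        f"{billions:.1f} billion",          # e.g. "134.4 billion"
        f"{amount_in_millions:,} million",  # e.g. "134,383 million"
        f"{amount_in_millions} million",    # e.g. "134383 million"
        f"{billions:.3f} billion",          # e.g. "134.383 billion"
    ]
    return "<OR>".join(variants)

print(fact_variants(134383))
# 134.4 billion<OR>134,383 million<OR>134383 million<OR>134.383 billion
```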
Understanding QA Accuracy metric calculation
We use the following question-answer pair to demonstrate how FMEval metrics are calculated, and how this informs best practices in QA ground truth curation.
Golden Question | “How many shares of common stock were outstanding as of July 21, 2023?” |
Golden Answer | “There were 10,317,750,796 shares of Amazon’s common stock outstanding as of July 21, 2023.” |
In calculating QA Accuracy metrics, the responses and ground truth are first normalized (lowercased, punctuation removed, articles removed, excess whitespace removed). Then, true positive, false positive, and false negative word matches are computed between the LLM response and the ground truth. The QA Accuracy metrics returned by FMEval include recall, precision, and F1. By assessing exact matching, the Exact Match and Quasi-Exact Match metrics are also returned. A detailed walkthrough of the calculation and scores is shown in the following tables.
The first table illustrates the accuracy metric calculation mechanism.
Metric | Definition | Example | Score |
True Positive (TP) | The number of words in the model output that are also contained in the ground truth. | Golden Answer: “There were 10,317,750,796 shares of Amazon’s common stock outstanding as of July 21, 2023.” Example Response: “Based on the documents provided, Amazon had 10,317,750,796 shares of common stock outstanding as of July 21, 2023.” | 11 |
False Positive (FP) | The number of words in the model output that are not contained in the ground truth. | Golden Answer: “There were 10,317,750,796 shares of Amazon’s common stock outstanding as of July 21, 2023.” Example Response: “Based on the documents provided, Amazon had 10,317,750,796 shares of common stock outstanding as of July 21, 2023.” | 7 |
False Negative (FN) | The number of words that are missing from the model output, but are included in the ground truth. | Golden Answer: “There were 10,317,750,796 shares of Amazon’s common stock outstanding as of July 21, 2023.” Example Response: “Based on the documents provided, Amazon had 10,317,750,796 shares of common stock outstanding as of July 21, 2023.” | 3 |
The following table lists the accuracy scores.
Metric | Score | Calculation Approach |
Recall Over Words | 0.786 | TP / (TP + FN) = 11 / 14 |
Precision Over Words | 0.611 | TP / (TP + FP) = 11 / 18 |
F1 | 0.688 | 2 × (Precision × Recall) / (Precision + Recall) |
Exact Match | 0.0 | (Non-normalized) Binary score that indicates whether the model output is an exact match for the ground truth answer. |
Quasi-Exact Match | 0.0 | (Normalized) Binary score that indicates whether the model output is an exact match for the ground truth answer. |
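The scores above follow directly from the word-match counts in the first table; a short sketch of the arithmetic:

```python
# Recompute the QA Accuracy scores from the word-match counts in the preceding table.
tp, fp, fn = 11, 7, 3

recall = tp / (tp + fn)                              # 11 / 14 = 0.786 (closeness to ground truth)
precision = tp / (tp + fp)                           # 11 / 18 = 0.611 (conciseness to ground truth)
f1 = 2 * precision * recall / (precision + recall)   # harmonic mean = 0.688

print(round(recall, 3), round(precision, 3), round(f1, 3))  # 0.786 0.611 0.688
```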
Interpreting QA Accuracy scores
The following are best practices for interpreting QA accuracy scores:
- Interpret recall as closeness to ground truth – The recall metric in FMEval measures the fraction of ground truth words that are in the model response. With this, we can interpret recall as closeness to ground truth.
- The higher the recall score, the more of the ground truth is included in the model response. If the entire ground truth is included in the model response, recall will be perfect (1.0); if no ground truth is included in the model response, recall will be zero (0.0).
- Low recall in response to a golden question can indicate a problem with information retrieval, as shown in the example in the following table. A high recall score, however, doesn’t unilaterally indicate a correct response. Hallucinations of facts can present as a single deviated word between model response and ground truth, while still yielding a high true positive rate in word matching. For such cases, you can complement QA Accuracy scores with Factual Knowledge assessments of golden questions in FMEval (we provide examples later in this post).
Interpretation | Question | Curated Ground Truth | High Closeness to Ground Truth | Recall Score | Low Closeness to Ground Truth | Recall Score |
Interpreting Closeness to Ground Truth Scores | “How many shares of common stock were outstanding as of July 21, 2023?” | “There were 10,317,750,796 shares of Amazon’s common stock outstanding as of July 21, 2023.” | “As of July 21, 2023, there were 10,317,750,796 shares of common stock outstanding.” | 0.923 | “Sorry, I do not have access to documents containing common stock information about Amazon.” | 0.111 |
- Interpret precision as conciseness to ground truth – The higher the score, the closer the LLM response is to the ground truth in terms of conveying ground truth information in the fewest number of words. By this definition, we recommend interpreting precision scores as a measure of conciseness to the ground truth. The following table demonstrates LLM responses that show high conciseness to the ground truth and low conciseness. Both answers are factually correct, but the reduction in precision is derived from the higher verbosity of the LLM response relative to the ground truth.
Interpretation | Question | Curated Ground Truth | High Conciseness to Ground Truth | Score | Low Conciseness to Ground Truth | Score |
Interpreting Conciseness to Ground Truth | “How many shares of common stock were outstanding as of July 21, 2023?” | “There were 10,317,750,796 shares of Amazon’s common stock outstanding as of July 21, 2023.” | “As of July 21, 2023, there were 10,317,750,796 shares of common stock outstanding.” | 1.0 | “Based on the documents provided, Amazon had 10,317,750,796 shares of common stock outstanding as of July 21, 2023. Specifically, in the first excerpt from the quarterly report for the quarter ending June 30, 2023, it states: ‘10,317,750,796 shares of common stock, par value $0.01 per share, outstanding as of July 21, 2023’ Therefore, the number of shares of Amazon common stock outstanding as of July 21, 2023 was 10,317,750,796 according to this statement.” | 0.238 |
- Interpret F1 score as combined closeness and conciseness to ground truth – F1 score is the harmonic mean of precision and recall, and so represents a joint measure that equally weights closeness and conciseness for a holistic score. The highest-scoring responses contain all of the ground truth words and are similarly concise to the curated ground truth. The lowest-scoring responses differ in verbosity from the ground truth and contain a large number of words that are not present in it. Because F1 intermixes these qualities, its interpretation is somewhat subjective; reviewing recall and precision independently will more clearly indicate the qualities of the generated responses in terms of closeness and conciseness. Some examples of high and low F1 scores are provided in the following table.
Interpretation | Question | Curated Ground Truth | High Combined Closeness x Conciseness | Score | Low Combined Closeness x Conciseness | Score |
Interpreting Closeness and Conciseness to Ground Truth | “How many shares of common stock were outstanding as of July 21, 2023?” | “There were 10,317,750,796 shares of Amazon’s common stock outstanding as of July 21, 2023.” | “As of July 21, 2023, there were 10,317,750,796 shares of common stock outstanding.” | 0.96 | “Based on the documents provided, Amazon had 10,317,750,796 shares of common stock outstanding as of July 21, 2023. Specifically, in the first excerpt from the quarterly report for the quarter ending June 30, 2023, it states: ‘10,317,750,796 shares of common stock, par value $0.01 per share, outstanding as of July 21, 2023’ Therefore, the number of shares of Amazon common stock outstanding as of July 21, 2023 was 10,317,750,796 according to this statement.” | 0.364 |
- Combine factual knowledge with recall for detection of hallucinated facts and false fact matches – Factual Knowledge scores can be interpreted in combination with recall metrics to distinguish likely hallucinations and false positive facts. For example, the following cases can be caught, with examples in the following table:
- High recall with zero factual knowledge suggests a hallucinated fact.
- Zero recall with positive factual knowledge suggests an accidental match between the factual ground truth and an unrelated entity such as a document ID, phone number, or date.
- Low recall and zero factual knowledge may also suggest a correct answer that has been expressed with alternative language to the QA ground truth. Improved ground truth curation (increased question specificity, more ground truth fact variants) can remediate this problem. The BERTScore can also provide semantic context on match quality.
Interpretation | QA Ground Truth | Factual Ground Truth | Factual Knowledge | Recall Score | LLM response |
Hallucination detection | Amazon’s total net sales for the second quarter of 2023 were $134.4 billion. | 134.4 billion<OR>134,383 million | 0 | 0.92 | Amazon’s total net sales for the second quarter of 2023 were $170.0 billion. |
Detect false positive facts | There were 10,317,750,796 shares of Amazon’s common stock outstanding as of July 21, 2023. | 10317750796<OR>10,317,750,796 | 1.0 | 0.0 | Document ID: 10317750796 |
Correct answer, expressed in different words to ground truth question-answer-fact | Amazon’s principal office is located at 410 Terry Avenue North, Seattle, Washington 98109-5210. | 410 Terry Avenue North | 0 | 0.54 | Amazon’s principal office is located in Seattle, Washington. |
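As a small illustration of these interpretation rules, the following Python sketch turns the score combinations above into heuristic flags. The recall cutoff is an illustrative assumption, not an FMEval output; tune it on your own golden dataset.

```python
def triage_response(recall, factual_knowledge, high_recall=0.8):
    """Heuristic triage combining QA Accuracy recall with a Factual Knowledge score (0 or 1).

    The 0.8 cutoff is an illustrative assumption; adjust it by experimenting on your data.
    """
    if factual_knowledge == 0 and recall >= high_recall:
        return "Likely hallucinated fact: wording closely matches the ground truth, but the fact does not."
    if factual_knowledge == 1 and recall == 0:
        return "Possible accidental fact match (for example, a document ID or phone number containing the fact)."
    if factual_knowledge == 0:
        return "Possibly a correct answer expressed in different words; review ground truth curation or add fact variants."
    return "No flag raised; interpret recall and precision as usual."

# Rows from the table above
print(triage_response(recall=0.92, factual_knowledge=0))  # hallucination detection example
print(triage_response(recall=0.0, factual_knowledge=1))   # false positive fact match example
print(triage_response(recall=0.54, factual_knowledge=0))  # differently worded correct answer
```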
Curating QA Accuracy ground truth
Consider the impact of true positive, false positive, and false negative matches between your golden answer and LLM responses when curating your ground truth for QA Accuracy. Best practices for curation in consideration of string matching are as follows:
- Use LLMs to generate initial golden questions and answers – This is beneficial in terms of speed and level of effort; however, outputs must be reviewed and further curated if necessary before acceptance (see Step 3 of the ground truth experimentation flywheel earlier in this post). Furthermore, applying an LLM to generate your ground truth may bias correct answers towards that LLM, for example, due to string matching of filler words that the LLM commonly uses in its language expression that other LLMs may not. Keeping ground truth expressed in an LLM-agnostic manner is a gold standard.
- Have humans review golden answers for proximity to the desired output – Your golden answers should reflect your standard for the user-facing assistant in terms of factual content and verbiage. Consider the desired level of verbosity and the choice of words you expect as outputs based on your production RAG prompt template. Overly verbose ground truths, and ground truths that adopt language unlikely to appear in the model output, will unnecessarily increase false negatives. Before accepting LLM-generated golden answers, human curation should verify the desired verbosity and word choice in addition to the accuracy of the information, so that evaluation metrics are computed relative to a true golden standard. Apply guardrails on the verbosity of ground truth, such as controlling word count, as part of the generation process.
- Compare LLM accuracy using recall – Closeness to ground truth is the best indicator of word agreement between the model response and the ground truth. When golden answers are curated properly, a low recall suggests strong deviation between the ground truth and the model response, whereas a high recall suggests strong agreement.
- Compare verbosity using precision – When golden answers are curated properly, verbose LLM responses decrease precision scores due to false positives present, and concise LLM responses are rewarded by high precision scores. If the golden answer is highly verbose, however, concise model responses will incur false negatives.
- Experiment to determine recall acceptability thresholds for generative AI pipelines – A recall threshold for the golden dataset can be set to determine cutoffs for pipeline quality acceptability.
- Interpret QA accuracy metrics in conjunction with other metrics to pass judgement on accuracy – Metrics such as Factual Knowledge can be combined with QA Accuracy scores to judge factual knowledge in addition to ground truth word matching.
Key takeaways
Curating appropriate ground truth and interpreting evaluation metrics in a feedback loop is crucial for effective business decision-making when deploying generative AI pipelines for question answering.
There were several key takeaways from this experiment:
- Ground truth curation and metric interpretation are a cyclical process – Understanding how the metrics are calculated should inform the ground truth curation approach to achieve the desired comparison.
- Low-scoring evaluations can indicate problems with ground truth curation in addition to generative AI pipeline quality – Golden datasets that don’t reflect true answer quality (misleading questions, incorrect answers, or ground truth answers that don’t reflect the expected response style) can be the root cause of poor evaluation results for a successful pipeline. When golden dataset curation is sound, low-scoring evaluations will correctly flag pipeline problems.
- Balance recall, precision, and F1 scores – Find the balance between acceptable recall (closeness to ground truth), precision (conciseness to ground truth), and F1 scores (the combination of both) through iterative experimentation and data curation. Pay close attention to which scores quantify your ideal closeness and conciseness to the ground truth based on your data and business objectives.
- Design ground truth verbosity to the level desired in your user experience – For QA Accuracy evaluation, curate ground truth answers that reflect the desired level of conciseness and word choice expected from the production assistant. Overly verbose or unnaturally worded ground truths can unnecessarily decrease precision scores.
- Use recall and factual knowledge for setting accuracy thresholds – Interpret recall in conjunction with factual knowledge to assess overall accuracy, and establish thresholds by experimentation on your own datasets. Factual knowledge scores can complement recall to detect hallucinations (high recall, false factual knowledge) and accidental fact matches (zero recall, true factual knowledge).
- Curate distinct QA and factual ground truths – For a Factual Knowledge evaluation, curate minimal ground truth facts distinct from the QA Accuracy ground truth. Generate comprehensive variations of fact representations in terms of units, punctuation, and formats.
- Golden questions should be unambiguous – Zero factual knowledge scores across the benchmark can indicate poorly formed golden question-answer-fact triplets. Reframe subjective or ambiguous questions to have a specific, singular acceptable answer.
- Automate, but verify, with LLMs – Use LLMs to generate initial ground truth answers and facts, with a human review and curation to align with the desired assistant output standards. Recognize that applying an LLM to generate your ground truth may bias correct answers towards that LLM during evaluation due to matching filler words, and strive to keep ground truth language LLM-agnostic.
Conclusion
In this post, we outlined best practices for ground truth curation and metric interpretation when evaluating generative AI question answering using FMEval. We demonstrated how to curate ground truth question-answer-fact triplets in consideration of the Factual Knowledge and QA Accuracy metrics calculated by FMEval. To validate our approach, we curated a golden dataset of 10 question-answer-fact triplets from Amazon’s Q2 2023 10Q report. We generated responses from three anonymized generative AI pipelines and calculated QA Accuracy and Factual Knowledge metrics.
Our primary findings emphasize that ground truth curation and metric interpretation are tightly coupled. Ground truth should be curated with the measurement approach in mind, and metrics can update the ground truth during golden dataset development. We further recommend curating separate ground truths for QA accuracy and factual knowledge, particularly emphasizing setting a desired level of verbosity according to user experience goals, and setting golden questions with unambiguous interpretations. Closeness and conciseness to ground truth are valid interpretations of FMEval recall and precision metrics, and factual knowledge scores can be used to detect hallucinations. Ultimately, the quantification of the expected user experience in the form of a golden dataset for pipeline evaluation with FMEval supports business decision-making, such as choosing between pipeline options, projecting quality changes from development to production, and adhering to legal and compliance requirements.
Whether you are building an internal application, a customer-facing virtual assistant, or exploring the potential of generative AI for your business, this post can help you use FMEval to make sure your projects meet the highest standards of quality and responsibility. We encourage you to adopt these best practices and start evaluating your generative AI question answering pipelines with the FMEval toolkit today.
About the Authors
Samantha Stuart is a Data Scientist with AWS Professional Services, and has delivered for customers across generative AI, MLOps, and ETL engagements. Samantha has a research master’s degree in engineering from the University of Toronto, where she authored several publications on data-centric AI for drug delivery system design. Outside of work, she is most likely spotted playing music, spending time with friends and family, at the yoga studio, or exploring Toronto.
Rahul Jani is a Data Architect with AWS Professional Services. He collaborates closely with enterprise customers building modern data platforms, generative AI applications, and MLOps. He is specialized in the design and implementation of big data and analytical applications on the AWS platform. Beyond work, he values quality time with family and embraces opportunities for travel.
Ivan Cui is a Data Science Lead with AWS Professional Services, where he helps customers build and deploy solutions using ML and generative AI on AWS. He has worked with customers across diverse industries, including software, finance, pharmaceutical, healthcare, IoT, and entertainment and media. In his free time, he enjoys reading, spending time with his family, and traveling.
Andrei Ivanovic is a Data Scientist with AWS Professional Services, with experience delivering internal and external solutions in generative AI, AI/ML, time series forecasting, and geospatial data science. Andrei has a Master’s in CS from the University of Toronto, where he was a researcher at the intersection of deep learning, robotics, and autonomous driving. Outside of work, he enjoys literature, film, strength training, and spending time with loved ones.
How AI Is Personalizing Customer Service Experiences Across Industries
Customer service departments across industries are facing increased call volumes, high customer service agent turnover, talent shortages and shifting customer expectations.
Customers expect both self-help options and real-time, person-to-person support. These expectations for seamless, personalized experiences extend across digital communication channels, including live chat, text and social media.
Despite the rise of digital channels, many consumers still prefer picking up the phone for support, placing strain on call centers. As companies strive to enhance the quality of customer interactions, operational efficiency and costs remain a significant concern.
To address these challenges, businesses are deploying AI-powered customer service software to boost agent productivity, automate customer interactions and harvest insights to optimize operations.
In nearly every industry, AI systems can help improve service delivery and customer satisfaction. Retailers are using conversational AI to help manage omnichannel customer requests, telecommunications providers are enhancing network troubleshooting, financial institutions are automating routine banking tasks, and healthcare facilities are expanding their capacity for patient care.
What Are the Benefits of AI for Customer Service?
With strategic deployment of AI, enterprises can transform customer interactions through intuitive problem-solving to build greater operational efficiencies and elevate customer satisfaction.
By harnessing customer data from support interactions, documented FAQs and other enterprise resources, businesses can develop AI tools that tap into their organization’s unique collective knowledge and experiences to deliver personalized service, product recommendations and proactive support.
Customizable, open-source generative AI technologies such as large language models (LLMs), combined with natural language processing (NLP) and retrieval-augmented generation (RAG), are helping industries accelerate the rollout of use-case-specific customer service AI. According to McKinsey, over 80% of customer care executives are already investing in AI or planning to do so soon.
With cost-efficient, customized AI solutions, businesses are automating management of help-desk support tickets, creating more effective self-service tools and supporting their customer service agents with AI assistants. This can significantly reduce operational costs and improve the customer experience.
Developing Effective Customer Service AI
For satisfactory, real-time interactions, AI-powered customer service software must return accurate, fast and relevant responses. Some tricks of the trade include:
Open-source foundation models can fast-track AI development. Developers can flexibly adapt and enhance these pretrained machine learning models, and enterprises can use them to launch AI projects without the high costs of building models from scratch.
RAG frameworks connect foundation or general-purpose LLMs to proprietary knowledge bases and data sources, including inventory management and customer relationship management systems and customer service protocols. Integrating RAG into conversational chatbots, AI assistants and copilots tailors responses to the context of customer queries.
Human-in-the-loop processes remain crucial to both AI training and live deployments. After initial training of foundation models or LLMs, human reviewers should judge the AI’s responses and provide corrective feedback. This helps guard against issues such as hallucination, where the model generates false or misleading information, as well as other errors including toxicity or off-topic responses. This type of human involvement ensures fairness, accuracy and security are fully considered during AI development.
Human participation is even more important for AI in production. When an AI is unable to adequately resolve a customer question, the program must be able to route the call to customer support teams. This collaborative approach between AI and human agents ensures that customer engagement is efficient and empathetic.
What’s the ROI of Customer Service AI?
The return on investment of customer service AI should be measured primarily based on efficiency gains and cost reductions. To quantify ROI, businesses can measure key indicators such as reduced response times, decreased operational costs of contact centers, improved customer satisfaction scores and revenue growth resulting from AI-enhanced services.
For instance, the cost of implementing an AI chatbot using open-source models can be compared with the expenses incurred by routing customer inquiries through traditional call centers. Establishing this baseline helps assess the financial impact of AI deployments on customer service operations.
To solidify understanding of ROI before scaling AI deployments, companies can consider a pilot period. For example, by redirecting 20% of call center traffic to AI solutions for one or two quarters and closely monitoring the outcomes, businesses can obtain concrete data on performance improvements and cost savings. This approach helps prove ROI and informs decisions for further investment.
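As a rough illustration of this kind of baseline comparison, the following Python sketch estimates pilot-period savings from hypothetical per-contact costs and containment rates. All figures are made-up assumptions to show the arithmetic, not benchmarks.

```python
# Hypothetical pilot: route 20% of monthly call volume to an AI assistant and compare costs.
monthly_calls = 100_000
pilot_share = 0.20
cost_per_agent_call = 6.00      # assumed fully loaded cost of a human-handled call
cost_per_ai_interaction = 0.50  # assumed inference and platform cost per AI-handled contact
containment_rate = 0.60         # assumed share of AI-handled contacts resolved without escalation

pilot_calls = monthly_calls * pilot_share
contained = pilot_calls * containment_rate
escalated = pilot_calls - contained

baseline_cost = pilot_calls * cost_per_agent_call
pilot_cost = pilot_calls * cost_per_ai_interaction + escalated * cost_per_agent_call
monthly_savings = baseline_cost - pilot_cost

print(f"Estimated monthly savings on the pilot slice: ${monthly_savings:,.0f}")
```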
Businesses across industries are using AI for customer service and measuring their success:
Retailers Reduce Call Center Load
Modern shoppers expect smooth, personalized and efficient shopping experiences, whether in store or on an e-commerce site. Customers of all generations continue prioritizing live human support, while also desiring the option to use different channels. But complex customer issues coming from a diverse customer base can make it difficult for support agents to quickly comprehend and resolve incoming requests.
To address these challenges, many retailers are turning to conversational AI and AI-based call routing. According to NVIDIA’s 2024 State of AI in Retail and CPG report, nearly 70% of retailers believe that AI has already boosted their annual revenue.
CP All, Thailand’s sole licensed operator for 7-Eleven convenience stores, has implemented conversational AI chatbots in its call centers, which rack up more than 250,000 calls per day. Training the bots presented unique challenges due to the complexities of the Thai language, which includes 21 consonants, 18 pure vowels, three diphthongs and five tones.
To manage this, CP All used NVIDIA NeMo, a framework designed for building, training and fine-tuning GPU-accelerated speech and natural language understanding models. With automatic speech recognition and NLP models powered by NVIDIA technologies, CP All’s chatbot achieved a 97% accuracy rate in understanding spoken Thai.
With the conversational chatbot handling a significant number of customer conversations, the call load on human agents was reduced by 60%. This allowed customer service teams to focus on more complex tasks. The chatbot also helped reduce wait times and provided quicker, more accurate responses, leading to higher customer satisfaction levels.
With AI-powered support experiences, retailers can enhance customer retention, strengthen brand loyalty and boost sales.
Telecommunications Providers Automate Network Troubleshooting
Telecommunications providers are challenged to address complex network issues while adhering to service-level agreements with end customers for network uptime. Maintaining network performance requires rapid troubleshooting of network devices, pinpointing root causes and resolving difficulties at network operations centers.
With its abilities to analyze vast amounts of data, troubleshoot network problems autonomously and execute numerous tasks simultaneously, generative AI is ideal for network operations centers. According to an IDC survey, 73% of global telcos have prioritized AI and machine learning investments for operational support as their top transformation initiative, underscoring the industry’s shift toward AI and advanced technologies.
Infosys, a leader in next-generation digital services and consulting, has built AI-driven solutions to help its telco partners overcome customer service challenges. Using NVIDIA NIM microservices and RAG, Infosys developed an AI chatbot to support network troubleshooting.
By offering quick access to essential, vendor-agnostic router commands for diagnostics and monitoring, the generative AI-powered chatbot significantly reduces network resolution times, enhancing overall customer support experiences.
To ensure accuracy and contextual responses, Infosys trained the generative AI solution on telecom device-specific manuals, training documents and troubleshooting guides. Using NVIDIA NeMo Retriever to query enterprise data, Infosys achieved 90% accuracy for its LLM output. By fine-tuning and deploying models with NVIDIA technologies, Infosys achieved a latency of 0.9 seconds, a 61% reduction compared with its baseline model. The RAG-enabled chatbot powered by NeMo Retriever also attained 92% accuracy, compared with the baseline model’s 85%.
With AI tools supporting network administrators, IT teams and customer service agents, telecom providers can more efficiently identify and resolve network issues.
Financial Services Institutions Pinpoint Fraud With Ease
While customers expect anytime, anywhere banking and support, financial services require a heightened level of data sensitivity. And unlike other industries that may include one-off purchases, banking is typically based on ongoing transactions and long-term customer relationships.
At the same time, user loyalty can be fleeting, with up to 80% of banking customers willing to switch institutions for a better experience. Financial institutions must continuously improve their support experiences and update their analyses of customer needs and preferences.
Many banks are turning to AI virtual assistants that can interact directly with customers to manage inquiries, execute transactions and escalate complex issues to human customer support agents. According to NVIDIA’s 2024 State of AI in Financial Services report, more than one-fourth of survey respondents are using AI to enhance customer experiences, and 34% are exploring the use of generative AI and LLMs for customer experience and engagement.
Bunq, a European digital bank with more than 2 million customers and 8 billion euros worth of deposits, is deploying generative AI to meet user needs. With proprietary LLMs, the company built Finn, a personal AI assistant available to both customers and bank employees. Finn can answer finance-related inquiries such as “How much did I spend on groceries last month?” or “What is the name of the Indian restaurant I ate at last week?”
Plus, with a human-in-the-loop process, Finn helps employees more quickly identify fraud. By collecting and analyzing data for compliance officers to review, bunq now identifies fraud in just three to seven minutes, down from 30 minutes without Finn.
By deploying AI tools that can use data to protect customer transactions, execute banking requests and act on customer feedback, financial institutions can serve customers at a higher level, building the trust and satisfaction necessary for long-term relationships.
Healthcare and Life Sciences Organizations Overcome Staffing Shortages
In healthcare, patients need quick access to medical expertise, precise and tailored treatment options, and empathetic interactions with healthcare professionals. But with the World Health Organization estimating a 10 million personnel shortage by 2030, access to quality care could be jeopardized.
AI-powered digital healthcare assistants are helping medical institutions do more with less. With LLMs trained on specialized medical corpuses, AI copilots can save physicians and nurses hours of daily work by helping with clinical note-taking, automating order-placing for prescriptions and lab tests, and following up with after-visit patient notes.
Multimodal AI that combines language and vision models can make healthcare settings safer by extracting insights and providing summaries of image data for patient monitoring. For example, such technology can alert staff of patient fall risks and other patient room hazards.
To support healthcare professionals, Hippocratic AI has trained a generative AI healthcare agent to perform low-risk, non-diagnostic routine tasks, like reminding patients of necessary appointment prep and following up after visits to make sure medication routines are being followed and no adverse side effects are being experienced.
Hippocratic AI trained its models on evidence-based medicine and completed rigorous testing with a large group of certified nurses and doctors. The constellation architecture of the solution comprises 20 models, one of which communicates with patients while the other 19 supervise its output. The complete system contains 1.7 trillion parameters.
The possibility of every doctor and patient having their own AI-powered digital healthcare assistant means reduced clinician burnout and higher-quality medical care.
Raising the Bar for Customer Experiences With AI
By integrating AI into customer service interactions, businesses can offer more personalized, efficient and prompt service, setting new standards for omnichannel support experiences across platforms. With AI virtual assistants that process vast amounts of data in seconds, enterprises can equip their support agents to deliver tailored responses to the complex needs of a diverse customer base.
To develop and deploy effective customer service AI, businesses can fine-tune AI models and deploy RAG solutions to meet diverse and specific needs.
NVIDIA offers a suite of tools and technologies to help enterprises get started with customer service AI.
NVIDIA NIM microservices, part of the NVIDIA AI Enterprise software platform, accelerate generative AI deployment and support various optimized AI models for seamless, scalable inference. NVIDIA NIM Agent Blueprints provide developers with packaged reference examples to build innovative solutions for customer service applications.
By taking advantage of AI development tools, enterprises can build accurate and high-speed AI applications to transform employee and customer experiences.
Learn more about improving customer service with generative AI.
Build powerful RAG pipelines with LlamaIndex and Amazon Bedrock
This post was co-written with Jerry Liu from LlamaIndex.
Retrieval Augmented Generation (RAG) has emerged as a powerful technique for enhancing the capabilities of large language models (LLMs). By combining the vast knowledge stored in external data sources with the generative power of LLMs, RAG enables you to tackle complex tasks that require both knowledge and creativity. Today, RAG techniques are used in every enterprise, small and large, where generative artificial intelligence (AI) is used as an enabler for solving document-based question answering and other types of analysis.
Although building a simple RAG system is straightforward, building production RAG systems using advanced patterns is challenging. A production RAG pipeline typically operates over a larger data volume and greater data complexity, and must meet a higher quality bar than a proof of concept. A broad challenge that developers face is low response quality: the RAG pipeline is not able to sufficiently answer a large number of questions. This can be due to a variety of reasons; the following are some of the most common:
- Bad retrievals – The relevant context needed to answer the question is missing.
- Incomplete responses – The relevant context is partially there but not completely. The generated output doesn’t fully answer the input question.
- Hallucinations – The relevant context is there but the model is not able to extract the relevant information in order to answer the question.
This necessitates more advanced RAG techniques on the query understanding, retrieval, and generation components in order to handle these failure modes.
This is where LlamaIndex comes in. LlamaIndex is an open source library with both simple and advanced techniques that enables developers to build production RAG pipelines. It provides a flexible and modular framework for building and querying document indexes, integrating with various LLMs, and implementing advanced RAG patterns.
Amazon Bedrock is a managed service providing access to high-performing foundation models (FMs) from leading AI providers through a unified API. It offers a wide range of large models to choose from, along with capabilities to securely build and customize generative AI applications. Key advanced features include model customization with fine-tuning and continued pre-training using your own data, as well as RAG to augment model outputs by retrieving context from configured knowledge bases containing your private data sources. You can also create intelligent agents that orchestrate FMs with enterprise systems and data. Other enterprise capabilities include provisioned throughput for guaranteed low-latency inference at scale, model evaluation to compare performance, and AI guardrails to implement safeguards. Amazon Bedrock abstracts away infrastructure management through a fully managed, serverless experience.
In this post, we explore how to use LlamaIndex to build advanced RAG pipelines with Amazon Bedrock. We discuss how to set up the following:
- Simple RAG pipeline – Set up a RAG pipeline in LlamaIndex with Amazon Bedrock models and top-k vector search
- Router query – Add an automated router that can dynamically do semantic search (top-k) or summarization over data
- Sub-question query – Add a query decomposition layer that can decompose complex queries into multiple simpler ones, and run them with the relevant tools
- Agentic RAG – Build a stateful agent that can do the preceding components (tool use, query decomposition), but also maintain state-like conversation history and reasoning over time
Simple RAG pipeline
At its core, RAG involves retrieving relevant information from external data sources and using it to augment the prompts fed to an LLM. This allows the LLM to generate responses that are grounded in factual knowledge and tailored to the specific query.
For RAG workflows in Amazon Bedrock, documents from configured knowledge bases go through preprocessing, where they are split into chunks, embedded into vectors, and indexed in a vector database. This allows efficient retrieval of relevant information at runtime. When a user query comes in, the same embedding model is used to convert the query text into a vector representation. This query vector is compared against the indexed document vectors to identify the most semantically similar chunks from the knowledge base. The retrieved chunks provide additional context related to the user’s query. This contextual information is appended to the original user prompt before being passed to the FM to generate a response. By augmenting the prompt with relevant data pulled from the knowledge base, the model’s output is able to use and be informed by an organization’s proprietary information sources. This RAG process can also be orchestrated by agents, which use the FM to determine when to query the knowledge base and how to incorporate the retrieved context into the workflow.
The following diagram illustrates this workflow.
The following is a simplified example of a RAG pipeline using LlamaIndex:
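The following is a minimal sketch of such a pipeline. It assumes the llama-index Amazon Bedrock integrations (llama-index-llms-bedrock and llama-index-embeddings-bedrock) are installed and AWS credentials are configured; the model IDs are examples, and parameter names can vary between llama-index versions.

```python
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex, Settings
from llama_index.llms.bedrock import Bedrock
from llama_index.embeddings.bedrock import BedrockEmbedding

# Example Amazon Bedrock model IDs; substitute models you have access to.
Settings.llm = Bedrock(model="anthropic.claude-3-sonnet-20240229-v1:0")
Settings.embed_model = BedrockEmbedding(model_name="amazon.titan-embed-text-v1")

# 1. Load documents from the local "data/" directory.
documents = SimpleDirectoryReader("data/").load_data()

# 2. Build an in-memory vector index over the documents.
index = VectorStoreIndex.from_documents(documents)

# 3. Run a top-k vector search and generate an answer grounded in the retrieved chunks.
query_engine = index.as_query_engine()
response = query_engine.query("What is the capital of France?")
print(response)
```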
The pipeline includes the following steps:
- Use the SimpleDirectoryReader to load documents from the “data/” directory.
- Create a VectorStoreIndex from the loaded documents. This type of index converts documents into numerical representations (vectors) that capture their semantic meaning.
- Query the index with the question “What is the capital of France?” The index uses similarity measures to identify the documents most relevant to the query.
- The retrieved documents are then used to augment the prompt for the LLM, which generates a response based on the combined information.
LlamaIndex goes beyond simple RAG and enables the implementation of more sophisticated patterns, which we discuss in the following sections.
Router query
RouterQueryEngine allows you to route queries to different indexes or query engines based on the nature of the query. For example, you could route summarization questions to a summary index and factual questions to a vector store index.
The following is a code snippet from the example notebooks demonstrating RouterQueryEngine:
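The following is a hedged sketch of such a router rather than the exact notebook code. It reuses the Bedrock Settings from the simple pipeline above; the tool descriptions and the selector choice are illustrative assumptions.

```python
from llama_index.core import SimpleDirectoryReader, SummaryIndex, VectorStoreIndex
from llama_index.core.query_engine import RouterQueryEngine
from llama_index.core.selectors import LLMSingleSelector
from llama_index.core.tools import QueryEngineTool

documents = SimpleDirectoryReader("data/").load_data()

# One index suited to summarization, one suited to fact lookup.
summary_index = SummaryIndex.from_documents(documents)
vector_index = VectorStoreIndex.from_documents(documents)

summary_tool = QueryEngineTool.from_defaults(
    query_engine=summary_index.as_query_engine(response_mode="tree_summarize"),
    description="Useful for summarization questions over the documents.",
)
vector_tool = QueryEngineTool.from_defaults(
    query_engine=vector_index.as_query_engine(),
    description="Useful for retrieving specific facts from the documents.",
)

# The LLM-based selector routes each query to the most appropriate tool.
router_engine = RouterQueryEngine(
    selector=LLMSingleSelector.from_defaults(),
    query_engine_tools=[summary_tool, vector_tool],
)

print(router_engine.query("Give me a summary of the document collection."))
```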
Sub-question query
SubQuestionQueryEngine breaks down complex queries into simpler sub-queries and then combines the answers from each sub-query to generate a comprehensive response. This is particularly useful for queries that span across multiple documents. It first breaks down the complex query into sub-questions for each relevant data source, then gathers the intermediate responses and synthesizes a final response that integrates the relevant information from each sub-query. For example, if the original query was “What is the population of the capital city of the country with the highest GDP in Europe,” the engine would first break it down into sub-queries like “What is the highest GDP country in Europe,” “What is the capital city of that country,” and “What is the population of that capital city,” and then combine the answers to those sub-queries into a final comprehensive response.
The following is an example of using SubQuestionQueryEngine:
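A sketch along these lines is shown below. The document folders, tool names, and descriptions are hypothetical assumptions for illustration, and the Bedrock models configured in Settings earlier are assumed to be in place.

```python
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex
from llama_index.core.query_engine import SubQuestionQueryEngine
from llama_index.core.tools import QueryEngineTool, ToolMetadata

# Hypothetical document folders, one per data source.
gdp_docs = SimpleDirectoryReader("data/europe_gdp/").load_data()
city_docs = SimpleDirectoryReader("data/city_facts/").load_data()

query_engine_tools = [
    QueryEngineTool(
        query_engine=VectorStoreIndex.from_documents(gdp_docs).as_query_engine(),
        metadata=ToolMetadata(
            name="europe_gdp",
            description="Answers questions about European countries' GDP.",
        ),
    ),
    QueryEngineTool(
        query_engine=VectorStoreIndex.from_documents(city_docs).as_query_engine(),
        metadata=ToolMetadata(
            name="city_facts",
            description="Answers questions about capital cities and their populations.",
        ),
    ),
]

# The engine decomposes the query into sub-questions, one per relevant tool, then synthesizes a final answer.
sub_question_engine = SubQuestionQueryEngine.from_defaults(
    query_engine_tools=query_engine_tools,
)
response = sub_question_engine.query(
    "What is the population of the capital city of the country with the highest GDP in Europe?"
)
print(response)
```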
Agentic RAG
An agentic approach to RAG uses an LLM to reason about the query and determine which tools (such as indexes or query engines) to use and in what sequence. This allows for a more dynamic and adaptive RAG pipeline. The following architecture diagram shows how agentic RAG works on Amazon Bedrock.
Agentic RAG in Amazon Bedrock combines the capabilities of agents and knowledge bases to enable RAG workflows. Agents act as intelligent orchestrators that can query knowledge bases during their workflow to retrieve relevant information and context to augment the responses generated by the FM.
After the initial preprocessing of the user input, the agent enters an orchestration loop. In this loop, the agent invokes the FM, which generates a rationale outlining the next step the agent should take. One potential step is to query an attached knowledge base to retrieve supplemental context from the indexed documents and data sources.
If a knowledge base query is deemed beneficial, the agent invokes an InvokeModel call specifically for knowledge base response generation. This fetches relevant document chunks from the knowledge base based on semantic similarity to the current context. These retrieved chunks provide additional information that is included in the prompt sent back to the FM. The model then generates an observation response that is parsed and can invoke further orchestration steps, like invoking external APIs (through action group AWS Lambda functions) or provide a final response to the user. This agentic orchestration augmented by knowledge base retrieval continues until the request is fully handled.
One example of an agent orchestration loop is the ReAct agent, which was initially introduced by Yao et al. ReAct interleaves chain-of-thought and tool use. At every stage, the agent takes in the input task along with the previous conversation history and decides whether to invoke a tool (such as querying a knowledge base) with the appropriate input or not.
The following is an example of using the ReAct agent with the LlamaIndex SDK:
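The sketch below is an illustration rather than the exact notebook code. It assumes a hypothetical folder containing Lyft’s 10-K filing and the Amazon Bedrock LLM already configured in Settings; the tool name and description are assumptions.

```python
from llama_index.core import SimpleDirectoryReader, Settings, VectorStoreIndex
from llama_index.core.agent import ReActAgent
from llama_index.core.tools import QueryEngineTool, ToolMetadata

# Hypothetical folder with Lyft's 10-K filing; index it as in the earlier examples.
lyft_docs = SimpleDirectoryReader("data/lyft_10k/").load_data()
lyft_10k_tool = QueryEngineTool(
    query_engine=VectorStoreIndex.from_documents(lyft_docs).as_query_engine(),
    metadata=ToolMetadata(
        name="lyft_10k",
        description="Provides information from Lyft's 10-K filing.",
    ),
)

# The ReAct agent interleaves chain-of-thought reasoning with tool calls.
agent = ReActAgent.from_tools(
    tools=[lyft_10k_tool],
    llm=Settings.llm,  # the Amazon Bedrock LLM configured earlier
    verbose=True,      # print the interleaved reasoning and tool calls
)
print(agent.chat("What was Lyft's revenue growth, according to its 10-K?"))
```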
The ReAct agent will analyze the query and decide whether to use the Lyft 10K tool or another tool to answer the question. To try out agentic RAG, refer to the GitHub repo.
LlamaCloud and LlamaParse
LlamaCloud represents a significant advancement in the LlamaIndex landscape, offering a comprehensive suite of managed services tailored for enterprise-grade context augmentation within LLM and RAG applications. This service empowers AI engineers to concentrate on developing core business logic by streamlining the intricate process of data wrangling.
One key component is LlamaParse, a proprietary parsing engine adept at handling complex, semi-structured documents replete with embedded objects like tables and figures, seamlessly integrating with LlamaIndex’s ingestion and retrieval pipelines. Another key component is the Managed Ingestion and Retrieval API, which facilitates effortless loading, processing, and storage of data from diverse sources, including LlamaParse outputs and LlamaHub’s centralized data repository, while accommodating various data storage integrations.
Collectively, these features enable the processing of vast production data volumes, culminating in enhanced response quality and unlocking unprecedented capabilities in context-aware question answering for RAG applications. To learn more about these features, refer to Introducing LlamaCloud and LlamaParse.
For this post, we use LlamaParse to showcase the integration with Amazon Bedrock. LlamaParse is an API created by LlamaIndex to efficiently parse and represent files for efficient retrieval and context augmentation using LlamaIndex frameworks. What is unique about LlamaParse is that it is the world’s first generative AI native document parsing service, which allows users to submit documents along with parsing instructions. The key insight behind parsing instructions is that you know what kind of documents you have, so you already know what kind of output you want. The following figure shows a comparison of parsing a complex PDF with LlamaParse vs. two popular open source PDF parsers.
A green highlight in a cell means that the RAG pipeline correctly returned the cell value as the answer to a question over that cell. A red highlight means that the question was answered incorrectly.
Integrate Amazon Bedrock and LlamaIndex to build an advanced RAG pipeline
In this section, we show you how to build an advanced RAG stack that combines LlamaParse and LlamaIndex with Amazon Bedrock services: LLMs, embedding models, and Amazon Bedrock Knowledge Bases.
To use LlamaParse with Amazon Bedrock, you can follow these high-level steps:
- Download your source documents.
- Send the documents to LlamaParse using the Python SDK (a combined sketch of these steps follows this list).
- Wait for the parsing job to finish and upload the resulting Markdown documents to Amazon Simple Storage Service (Amazon S3).
- Create an Amazon Bedrock knowledge base using the source documents.
- Choose your preferred embedding and generation model from Amazon Bedrock using the LlamaIndex SDK.
- Implement an advanced RAG pattern using LlamaIndex. In the following example, we use SubQuestionQueryEngine and a retriever specially created for Amazon Bedrock knowledge bases.
- Finally, query the index with your question, as shown in the combined sketch below.
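The following is a combined sketch of these steps under several assumptions: a LLAMA_CLOUD_API_KEY set in the environment, an example S3 bucket name, a placeholder knowledge base ID, and the llama-index Bedrock and knowledge base retriever integrations (llama-index-llms-bedrock, llama-index-retrievers-bedrock) installed. File names, model IDs, and tool metadata are illustrative, and the knowledge base itself is created separately (console or infrastructure as code) from the uploaded documents.

```python
import boto3
from llama_parse import LlamaParse
from llama_index.core import Settings
from llama_index.core.query_engine import RetrieverQueryEngine, SubQuestionQueryEngine
from llama_index.core.tools import QueryEngineTool, ToolMetadata
from llama_index.llms.bedrock import Bedrock
from llama_index.retrievers.bedrock import AmazonKnowledgeBasesRetriever

# Parse the downloaded source document into Markdown with LlamaParse.
parser = LlamaParse(result_type="markdown")
parsed_docs = parser.load_data("bank-of-america-q3-2023-results.pdf")  # hypothetical file name

# Upload the parsed Markdown to Amazon S3 so it can back a Bedrock knowledge base.
s3 = boto3.client("s3")
for i, doc in enumerate(parsed_docs):
    s3.put_object(Bucket="my-kb-source-bucket",           # example bucket name
                  Key=f"parsed/doc-{i}.md",
                  Body=doc.text.encode("utf-8"))

# Choose a Bedrock generation model through the LlamaIndex SDK (example model ID).
Settings.llm = Bedrock(model="anthropic.claude-3-sonnet-20240229-v1:0")

# Wrap a knowledge base retriever in a query engine and expose it as a tool.
retriever = AmazonKnowledgeBasesRetriever(
    knowledge_base_id="XXXXXXXXXX",  # replace with your knowledge base ID
    retrieval_config={"vectorSearchConfiguration": {"numberOfResults": 4}},
)
kb_engine = RetrieverQueryEngine.from_args(retriever=retriever, llm=Settings.llm)
kb_tool = QueryEngineTool(
    query_engine=kb_engine,
    metadata=ToolMetadata(
        name="financial_results_kb",
        description="Answers questions about the parsed financial results documents.",
    ),
)

# Advanced RAG: decompose complex questions into sub-questions over the knowledge base tool.
sub_question_engine = SubQuestionQueryEngine.from_defaults(query_engine_tools=[kb_tool])
response = sub_question_engine.query(
    "What is the trend in digital households/relationships from 3Q20 to 3Q23?"
)
print(response)
```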
We tested LlamaParse on a real-world, challenging example of asking questions about a document containing Bank of America Q3 2023 financial results. An example slide from the full slide deck (48 complex slides!) is shown below.
Using the procedure outlined above, we asked “What is the trend in digital households/relationships from 3Q20 to 3Q23?”; take a look at the answer generated using LlamaIndex tools vs. the reference answer from human annotation.
LlamaIndex + LlamaParse answer | Reference answer |
The trend in digital households/relationships shows a steady increase from 3Q20 to 3Q23. In 3Q20, the number of digital households/relationships was 550K, which increased to 645K in 3Q21, then to 672K in 3Q22, and further to 716K in 3Q23. This indicates consistent growth in the adoption of digital services among households and relationships over the reported quarters. | The trend shows a steady increase in digital households/relationships from 645,000 in 3Q20 to 716,000 in 3Q23. The digital adoption percentage also increased from 76% to 83% over the same period. |
The following are example notebooks to try out these steps on your own examples. Note the prerequisite steps and cleanup resources after testing them.
- Ingest with LlamaParse into S3 for KB
- Agentic RAG with Bedrock KB and LlamaIndex SubQuestionQueryEngine
Conclusion
In this post, we explored various advanced RAG patterns with LlamaIndex and Amazon Bedrock. To delve deeper into the capabilities of LlamaIndex and its integration with Amazon Bedrock, check out the following resources:
By combining the power of LlamaIndex and Amazon Bedrock, you can build robust and sophisticated RAG pipelines that unlock the full potential of LLMs for knowledge-intensive tasks.
About the Authors
Shreyas Subramanian is a Principal Data Scientist who helps customers solve their business challenges on the AWS platform using machine learning. Shreyas has a background in large-scale optimization and machine learning, and in the use of machine learning and reinforcement learning for accelerating optimization tasks.
Jerry Liu is the co-founder/CEO of LlamaIndex, a data framework for building LLM applications. Before this, he spent his career at the intersection of ML, research, and startups. He led the ML monitoring team at Robust Intelligence, did self-driving AI research at Uber ATG, and worked on recommendation systems at Quora.
Evaluating prompts at scale with Prompt Management and Prompt Flows for Amazon Bedrock
As generative artificial intelligence (AI) continues to revolutionize every industry, the importance of effective prompt optimization through prompt engineering techniques has become key to efficiently balancing the quality of outputs, response time, and costs. Prompt engineering refers to the practice of crafting and optimizing inputs to the models by selecting appropriate words, phrases, sentences, punctuation, and separator characters to effectively use foundation models (FMs) or large language models (LLMs) for a wide variety of applications. A high-quality prompt maximizes the chances of having a good response from the generative AI models.
A fundamental part of the optimization process is evaluation, and there are multiple elements involved in the evaluation of a generative AI application. Beyond the most common evaluation of FMs, prompt evaluation is a critical, yet often challenging, aspect of developing high-quality AI-powered solutions. Many organizations struggle to consistently create and effectively evaluate their prompts across their various applications, leading to inconsistent performance, uneven user experiences, and undesired responses from the models.
In this post, we demonstrate how to implement an automated prompt evaluation system using Amazon Bedrock so you can streamline your prompt development process and improve the overall quality of your AI-generated content. For this, we use Amazon Bedrock Prompt Management and Amazon Bedrock Prompt Flows to systematically evaluate prompts for your generative AI applications at scale.
The importance of prompt evaluation
Before we explain the technical implementation, let’s briefly discuss why prompt evaluation is crucial. The key aspects to consider when building and optimizing a prompt are typically:
- Quality assurance – Evaluating prompts helps make sure that your AI applications consistently produce high-quality, relevant outputs for the selected model.
- Performance optimization – By identifying and refining effective prompts, you can improve the overall performance of your generative AI models in terms of lower latency and ultimately higher throughput.
- Cost efficiency – Better prompts can lead to more efficient use of AI resources, potentially reducing costs associated with model inference. A good prompt allows for the use of smaller and lower-cost models, which wouldn’t give good results with a bad quality prompt.
- User experience – Improved prompts result in more accurate, personalized, and helpful AI-generated content, enhancing the end user experience in your applications.
Optimizing prompts for these aspects is an iterative process that requires an evaluation for driving the adjustments in the prompts. It is, in other words, a way to understand how good a given prompt and model combination are for achieving the desired answers.
In our example, we implement a method known as LLM-as-a-judge, where an LLM is used for evaluating the prompts based on the answers it produced with a certain model, according to predefined criteria. The evaluation of prompts and their answers for a given LLM is a subjective task by nature, but a systematic prompt evaluation using LLM-as-a-judge allows you to quantify it with an evaluation metric in a numerical score. This helps to standardize and automate the prompting lifecycle in your organization and is one of the reasons why this method is one of the most common approaches for prompt evaluation in the industry.
Let’s explore a sample solution for evaluating prompts with LLM-as-a-judge with Amazon Bedrock. You can also find the complete code example in amazon-bedrock-samples.
Prerequisites
For this example, you need the following:
- An AWS account and a user with an AWS Identity and Access Management (IAM) role authorized to use Amazon Bedrock. For guidance, refer to the instructions in Getting started with Amazon Bedrock. Make sure the role includes the permissions for using Prompt Flows for Amazon Bedrock, as explained in Prerequisites for Prompt flows for Amazon Bedrock.
- Access provided to the models you use for invocation and evaluation. For guidance, follow the instructions in Manage access to Amazon Bedrock foundation models.
Set up the evaluation prompt
To create an evaluation prompt using Amazon Bedrock Prompt Management, follow these steps:
- On the Amazon Bedrock console, in the navigation pane, choose Prompt management and then choose Create prompt.
- Enter a Name for your prompt such as prompt-evaluator and a Description such as “Prompt template for evaluating prompt responses with LLM-as-a-judge.” Choose Create.
- In the Prompt field, write your prompt evaluation template. You can use a template along the lines of the sketch provided after this list, or adjust it according to your specific evaluation requirements.
- Under Configurations, select a model to use for running evaluations with the prompt. In our example we selected Anthropic Claude Sonnet. The quality of the evaluation will depend on the model you select in this step. Make sure you balance the quality, response time, and cost accordingly in your decision.
- Set the Inference parameters for the model. We recommend keeping Temperature at 0 to make the evaluation factual and avoid hallucinations.
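The exact template from the example notebook is not reproduced here; the following is a hedged sketch of what an LLM-as-a-judge evaluation template along these lines could look like. It uses the {{input}} and {{output}} variables that the flow configured below maps into this prompt, and asks for the scores, justification, and recommendations discussed later in this post.

```
You are an expert prompt evaluator. You will be given an input prompt and the answer
that a foundation model generated for it.

Evaluate both on a scale from 0 (very poor) to 10 (excellent):
1. The quality and clarity of the input prompt.
2. The quality, correctness, and relevance of the generated answer for that prompt.

Respond only with a JSON object containing the keys "promptScore", "answerScore",
"justification", and "recommendations" (recommendations should suggest how to
improve the prompt).

Input prompt:
{{input}}

Generated answer:
{{output}}
```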
You can test your evaluation prompt with sample inputs and outputs using the Test variables and Test window panels.
- Now that you have a draft of your prompt, you can also create versions of it. Versions allow you to quickly switch between different configurations for your prompt and update your application with the most appropriate version for your use case. To create a version, choose Create version at the top.
The following screenshot shows the Prompt builder page.
Set up the evaluation flow
Next, you need to build an evaluation flow using Amazon Bedrock Prompt Flows. In our example, we use prompt nodes. For more information on the types of nodes supported, check the Node types in prompt flow documentation. To build an evaluation flow, follow these steps:
- On the Amazon Bedrock console, under Prompt flows, choose Create prompt flow.
- Enter a Name such as
prompt-eval-flow
. Enter a Description such as “Prompt Flow for evaluating prompts with LLM-as-a-judge.” Choose Use an existing service role to select a role from the dropdown. Choose Create. - This will open the Prompt flow builder. Drag two Prompts nodes to the canvas and configure the nodes as per the following parameters:
- Flow input
  - Output:
    - Name: document, Type: String
- Invoke (Prompts)
  - Node name: Invoke
  - Define in node
  - Select model: A preferred model to be evaluated with your prompts
  - Message: {{input}}
  - Inference configurations: As per your preferences
  - Input:
    - Name: input, Type: String, Expression: $.data
  - Output:
    - Name: modelCompletion, Type: String
- Evaluate (Prompts)
  - Node name: Evaluate
  - Use a prompt from your Prompt Management
  - Prompt: prompt-evaluator
  - Version: Version 1 (or your preferred version)
  - Select model: Your preferred model to evaluate your prompts with
  - Inference configurations: As set in your prompt
  - Input:
    - Name: input, Type: String, Expression: $.data
    - Name: output, Type: String, Expression: $.data
  - Output:
    - Name: modelCompletion, Type: String
- Flow output
  - Node name: End
  - Input:
    - Name: document, Type: String, Expression: $.data
- To connect the nodes, drag the connecting dots, as shown in the following diagram.
- Choose Save.
You can test your prompt evaluation flow by using the Test prompt flow panel. Pass an input, such as the question, “What is cloud computing in a single paragraph?” It should return a JSON with the result of the evaluation similar to the following example. In the code example notebook, amazon-bedrock-samples, we also included the information about the models used for invocation and evaluation to our result JSON.
As the example shows, we asked the FM to evaluate with separate scores the prompt and the answer the FM generated from that prompt. We asked it to provide a justification for the score and some recommendations to further improve the prompts. All this information is valuable for a prompt engineer because it helps guide the optimization experiments and helps them make more informed decisions during the prompt life cycle.
Implementing prompt evaluation at scale
To this point, we’ve explored how to evaluate a single prompt. Often, medium to large organizations work with tens, hundreds, and even thousands of prompt variations for their multiple applications, making it a perfect opportunity for automation at scale. For this, you can run the flow in full datasets of prompts stored in files, as shown in the example notebook.
Alternatively, you can also rely on other node types in Amazon Bedrock Prompt Flows for reading files from and storing files in Amazon Simple Storage Service (Amazon S3), and for implementing iterator- and collector-based flows. The following diagram shows this type of flow. Once you have established a file-based mechanism for running the prompt evaluation flow on datasets at scale, you can also automate the whole process by connecting it to your preferred continuous integration and continuous delivery (CI/CD) tools. The details for these are out of the scope of this post.
Best practices and recommendations
Based on our evaluation process, here are some best practices for prompt refinement:
- Iterative improvement – Use the evaluation feedback to continuously refine your prompts. The prompt optimization is ultimately an iterative process.
- Context is key – Make sure your prompts provide sufficient context for the AI model to generate accurate responses. Depending on the complexity of the tasks or questions that your prompt will answer, you might need to use different prompt engineering techniques. You can check the Prompt engineering guidelines in the Amazon Bedrock documentation and other resources on the topic provided by the model providers.
- Specificity matters – Be as specific as possible in your prompts and evaluation criteria. Specificity guides the models towards desired outputs.
- Test edge cases – Evaluate your prompts with a variety of inputs to verify robustness. You might also want to run multiple evaluations on the same prompt for comparing and testing output consistency, which might be important depending on your use case.
Conclusion and next steps
By using the LLM-as-a-judge method with Amazon Bedrock Prompt Management and Amazon Bedrock Prompt Flows, you can implement a systematic approach to prompt evaluation and optimization. This not only improves the quality and consistency of your AI-generated content but also streamlines your development process, potentially reducing costs and improving user experiences.
We encourage you to explore these features further and adapt the evaluation process to your specific use cases. As you continue to refine your prompts, you’ll be able to unlock the full potential of generative AI in your applications. To get started, check out the full code example used in this post in amazon-bedrock-samples. We’re excited to see how you’ll use these tools to enhance your AI-powered solutions!
For more information on Amazon Bedrock and its features, visit the Amazon Bedrock documentation.
About the Author
Antonio Rodriguez is a Sr. Generative AI Specialist Solutions Architect at Amazon Web Services. He helps companies of all sizes solve their challenges, embrace innovation, and create new business opportunities with Amazon Bedrock. Apart from work, he loves to spend time with his family and play sports with his friends.
Quantifying images’ “conceptual similarity”
New method leverages vision-language models to formalize a comparison that had previously required human judgment.
Collaborators: Silica in space with Richard Black and Dexter Greene
Transforming research ideas into meaningful impact is no small feat. It often requires the knowledge and experience of individuals from across disciplines and institutions. Collaborators, a Microsoft Research Podcast series, explores the relationships—both expected and unexpected—behind the projects, products, and services being pursued and delivered by researchers at Microsoft and the diverse range of people they’re teaming up with.
Nearly 50 years ago, Voyager 1 and 2 took off for space, each with a record comprising a sampling of earthly sounds and sights. The records’ purpose? To give extraterrestrials a sense of humanity. Thanks to students at Avenues: The World School, the universe might be receiving an update. In this episode, college freshman and Avenues alum Dexter Greene and Microsoft research manager Richard Black talk about how Project Silica, a technology that uses tiny laser pulses to store data in small glass “platters,” is supporting the Avenues Golden Record 2.0 project; what it means for data storage more broadly; and why the students’ efforts are valuable even if the information never gets to its intended recipients.
Learn more:
- Avenues: The World School — Golden Record 2.0 (project homepage)
- Golden Record: Overview (NASA Science)
- Project Silica (project homepage)
- Sealed in glass (Microsoft Unlocked innovation story, 2023)
- Optics for the cloud: storage in the zettabyte era with Dr. Ant Rowstron and Mark Russinovich (Microsoft Research Podcast, November 2019)
- Project Silica proof of concept stores Warner Bros. ‘Superman’ movie on quartz glass (Microsoft Source blog, November 2019)
Subscribe to the Microsoft Research Podcast:
Transcript
[TEASER] [MUSIC PLAYS UNDER DIALOGUE]
DEXTER GREENE: So the original Golden Record is … I like to think of it as, sort of, a time capsule of humanity that was designed to represent us—who we are as a species, what we love, why we love it, what we do, and, sort of, our diversity, why we’re all different, why we do different things—to possible extraterrestrials. And so the Golden Record was produced in 1977 by a relatively small team led by Carl Sagan. What we’re doing, my team, is we’re working on creating an updated Golden Record. And I began researching different storage methods, and I began to realize that we hadn’t made that much headway in storage since then. Of course, we’ve made progress but nothing really spectacular until I found 5D storage. And I noticed that there were only two real places that I could find information about this. One was the University of Southampton, and one was Project Silica at Microsoft. I reached out to the University of Southampton and Dr. Black, and somehow, kind of, to my surprise, Dr. Black actually responded!
RICHARD BLACK: I was particularly intrigued by the Avenues Golden Record application because I could see it was an application not just where Silica was a better media than what people use today but really where Silica was the only media that would work because none of the standard media really work over the kind of time scales that are involved in space travel, and none of them really work in the harsh environments that are involved in space and outer space and space travel. So in some ways for me, it was an easy way to communicate just what a transformative digital media technology Silica is, and that’s why as an application, it really grabbed my interest.
[TEASER ENDS]
GRETCHEN HUIZINGA: You’re listening to Collaborators, a Microsoft Research Podcast showcasing the range of expertise that goes into transforming mind-blowing ideas into world-changing technologies. I’m Dr. Gretchen Huizinga.
[MUSIC FADES] Today I’m talking to Dr. Richard Black, a senior principal research manager and the research director of Project Silica at Microsoft Research. And with him is Dexter Greene, a rising freshman at the University of Michigan and a recent graduate of Avenues: The World School in New York City. Richard and Dexter are involved in a unique multidisciplinary, multi-institutional, and multigenerational collaboration called Avenues Golden Record, a current effort to communicate with extraterrestrial intelligence. We’ll get into that in a lot more detail shortly, but first, let’s meet our collaborators.
Richard, let’s start with you. As I’ve just noted, you’re a research manager at the Cambridge UK lab of Microsoft Research and the research director of a really cool technology called Silica. In a second, I want you to talk about that more specifically, but right now, tell us about yourself. What’s your background? What are your research interests writ large? And what excites you about the broad remit of your work at Cambridge?
RICHARD BLACK: So my background is a computer scientist. I’ve been at Microsoft Research for 24 years, and before that, I had a faculty position at a university here in the UK. So I also have an interest in education, and it’s been a delight to interact with Dexter and the other students at Avenues. My research interests really cover all aspects of computer systems, which means operating systems, networking, and computer architecture. And the exciting thing for me about being at Microsoft Research is that this is really a period of rapid change with the cloud, digital transformation of society. It gives really a huge motivation to research better underlying technologies for everything that we do. And for me in the last few years, that’s been in archival storage with Project Silica.
HUIZINGA: Hmm. Richard, I’m interested to know a little bit more about your background. Where did you go to school, what led you to this kind of research, and what university were you teaching at?
BLACK: Yeah, I went to university and did my PhD here in Cambridge. I was teaching at the University of Glasgow, which is in Scotland in the UK, and teaching again computer systems, so those operating systems, computer architecture, and computer networking.
HUIZINGA: Well, Dexter, you’re the first student collaborator we’ve featured on this show, which is super fun. Tell us about yourself and about Avenues: The World School, where this particular collaboration was born.
DEXTER GREENE: Thanks for having me. I’m super excited to be here. And like you said, it’s very cool to be the first student collaborator that you featured on the show. So I’m 18. I just graduated high school a few months ago, and I will be attending the University of Michigan’s College of Engineering in the fall. If you know me personally, you know that I love robotics. I competed in the FIRST Tech Challenge all throughout high school. The FIRST Tech Challenge is a student robotics competition. There is the FIRST Tech Challenge, FIRST Robotics Competition, and FIRST LEGO League. So it’s, like, three different levels of robotics competition, which is run all around the world. And every year, there’s, like, a championship at the end to declare a winner. And I plan to major in either robotics or mechanical engineering. So more about Avenues. Avenues is a K-through-12 international immersion school, which is very interesting. So younger students might do a day in Spanish and a day in English or a day in Mandarin and then a day in English, going through all their classes in that language. So I actually attended Avenues since second grade, so when I was younger, I would do a full day in Spanish and then I would switch to a full day in English, doing my courses like math, history, English, all in my language, Spanish for me. And Avenues is a very interesting school and very different in many ways. They like to, sort of, think outside the box. There’s a lot of very unique classes, unique programs. A great example is what they call J-Term, or June and January Term, which is where students will have one course every day for the entire month where they can really dive deep into that subject. And I was actually lucky enough to do the Golden Record for a full month in 11th grade, which I’ll talk about this more, but that’s actually when I first made contact with Dr. Black and found this amazing technology, which is, I guess why we’re all here today.
HUIZINGA: Right.
GREENE: So, yeah, there’s many really cool parts about Avenues. There’s travel programs that you can do where you can go all around the world. You can go between different campuses. There’s online classes that you can take. The list goes on …
HUIZINGA: Well, it’s funny that you say “when I first made contact with Dr. Black” because it sounds like something that you’re working on! So let’s talk about that for a second. So the project we’re talking about today is Avenues Golden Record, but it’s not the first Golden Record to exist. So for those of our listeners who don’t know what Golden Record even is, Dexter, give us a little history lesson and chronicle the story from the original Golden Record way back in 1977 all the way to what you’re doing today with the project.
GREENE: Yeah. So I guess let me start with, what is the Golden Record? So the original Golden Record is … I like to think of it as, sort of, a time capsule of humanity that was designed to represent us—who we are as a species, what we love, why we love it, what we do, and, sort of, our diversity, why we’re all different, why we do different things—to possible extraterrestrials. And so the Golden Record was produced in 1977 by a relatively small team led by Carl Sagan[1], an American astronomer who was a professor at, I believe, Cornell. And so it’s basically a series of meticulously curated content. So that could be images, audios, sounds of nature, music, the list goes on. Really anything you can think of. That’s, sort of, the beauty of it. Anything can go on it. So it’s just a compilation of what we are, who we are, and why we are—what’s important to us. A great example, one of my favorite parts of the Golden Record, is one of the first audios on it is a greeting in 55 languages. It’s, sort of, meant to be, like, a welcome … I guess less of a welcome, but more like a hello because we’re not welcoming anyone to Earth, [LAUGHTER] but it’s, like, a hello, nice to meet you, in 55 languages to show that we’re very diverse, very different. And, yeah, you can actually … if you’re interested and if you’d like to learn more, you can actually go see all the content that’s on the Golden Records. NASA has a webpage for that. I definitely recommend if you have a chance to check it out.
HUIZINGA: Yeah.
GREENE: And I guess moving on to future attempts … so what we’re doing, my team, is we’re working on creating an updated Golden Record. So it’s been 47 years now since the original Golden Record—kind of a long time. And of course a lot’s changed. Some for the better, some for the worse. And we think that it’s about time we update that. Update who we are, what we are, and what we care about, what we love.
HUIZINGA: Right.
GREENE: So our team has begun working on that. One project that I’m familiar with, other than our own, that’s, sort of, a similar attempt is known as Humanity’s Message to the Stars, which is led by Dr. Jonathan Jiang, who is a researcher at NASA’s Jet Propulsion Laboratory.[2] Very cool. That’s the only project that’s similar that I’m aware of, but I’m sure there have been other attempts in the past.
HUIZINGA: Yeah … just to make a note right now, we’re using the term “record,” and the original medium was actually a record, like an LP. But excitingly, we’ll get to why Dr. Black is on the show today [LAUGHS] and talk about the new media. Before we do that, as I was preparing this episode, it began to feel like a story of contrasting couplets, like earthlings and aliens, content and media, veteran researcher and high school student. … So let’s talk about the last pairing for a second, the two of you, and how you got together on this project. It’s a fun story. I like to call this question “how I met your mother.” So how did a high school kid from New York come to be a research collaborator with a seasoned scientist from Cambridge? Dexter, tell your side of the story. It’s cool. And then Richard can fill in the blanks from across the pond!
GREENE: Yeah, so let me actually rewind a little bit further than that, about how I got into the project myself, …
HUIZINGA: Good!
GREENE: … which, I think, is a pretty fun story. So one of my teachers—my design and engineering teacher at the time, Mr. Cavalier—gave a presentation at one of our gradewide assemblies. And the first slide was something along the lines of “the most challenging project in human history,” which immediately caught my eye. I was like, I have to do this! There’s no way I’m not doing this project! [LAUGHTER] And the slides to come of course made me want to partake in the project even more. But that first slide … really, I was sold. It was a done deal! So I applied to the project. I got in. And then we began working and researching, and I’ll talk about this more later, as well, but we, sort of, split up into two teams at the beginning: content and media. Media being the form, or medium, that we send it on. And so that was the team that I was on. And I began researching different storage methods and, sort of, advancements in storage methods since the original Golden Record in 1977. And I began to realize that we hadn’t made that much headway in storage since then. Of course we’ve made progress but nothing really spectacular until I found 5D storage. And I was immediately, just, amazed by the longevity, durability, capacity—so many things. I mean, there’s just so many reasons to be amazed. But … so I began researching and I noticed that there were only two real places that I could find information about this. One was the University of Southampton, I believe, and one was Project Silica at Microsoft. And so I actually reached out to both. I reached out to the University of Southampton and Dr. Black, and somehow, [LAUGHS] kind of, to my surprise, Dr. Black actually responded! And I was, kind of, stunned when he responded because I was like, there’s no way this researcher at Microsoft is going to respond to this high school student that he’s never met in the middle of nowhere. So when Dr. Black did respond, I was just amazed and so excited. And, yeah, it went from there. We began communicating back and forth. And then, I believe, we met once over the following summer, and now we’re here!
HUIZINGA: OK, there’s so many parallels right now between this communication contact and what you’re doing with potential extraterrestrial intelligence. It’s like, I contacted him, he contacted me back, and then we started having a conversation. … Yeah, so, Richard, you were the guy who received the cold email from this high school student. What was your reaction, and how did you get interested in pursuing a relationship in terms of the science of this?
BLACK: Yeah, so let me say I was really intrigued by the Avenues Golden Record application. I do get quite a lot of cold emails, [LAUGHTER] and I try to reply to most of them. I do have a few canned answers because I don’t have time to interact with everybody who reaches out to me. But I was particularly intrigued by the Avenues Golden Record application because I could see it was an application not just where Silica was a better media than what people use today but really where Silica was the only media that would work because none of the standard media really work over the kind of time scales that are involved in space travel, and none of them really work in the harsh environments that are involved in space and outer space and space travel. So in some ways for me, it was an easy way to communicate just what a transformative digital media technology Silica is, and that’s why as an application it really grabbed my interest.
HUIZINGA: So did you have any idea when the initial exchange happened that this would turn into a full-blown project?
BLACK: I didn’t know how much time Dexter and his fellow students would have to invest in it. So for me, at the beginning, I was just quite happy to answer a few questions that they have, to point them in the right direction, to fill in a few blanks, and things like that. And it was only much later, I think, after perhaps we’d had our first meeting, that I realized that Dexter and his team were actually serious, [LAUGHTER] and they had some time, and they were going to actually invest in this and think it through. And so I was happy to work with them and to continue to answer questions that they had and to work towards actually, you know, writing a couple of Silica platters with the output that they were creating and providing it for them.
HUIZINGA: Well, let’s dig in there. Richard, let’s talk about digital data and the storage mediums that love it. I want to break this into two parts because I’m interested in it from two angles. And the first one is purely technical. I’ll take a second to note that we did an episode on Project Silica way back in 2019. I say way back, like … but in technical years right now, [LAUGHS] that seems like a long time! And on that episode, your colleague Ant Rowstron talked with me and Mark Russinovich, the CTO of Microsoft’s Azure. So we’ll put a link in the show notes for that super-fun, interesting show. But right now, Richard, would you give our listeners an overview of the current science of data on glass? What is Silica? How is it different from other storage media? And what’s changed in the five years since I talked to Ant and Mark?
BLACK: Sure. So Silica is an archival storage technology that stores data inside fused silica glass. And it does that using ultrashort laser pulses that make a permanent, detectable, and yet transparent modification to the glass crystal, so the data ends up as durable as the piece of glass itself.
HUIZINGA: Wow.
BLACK: And being transparent means that we can get hundreds of layers of data inside a block of glass that’s only two millimeters thin, making for really incredibly high densities. And since this new physics was discovered at the University of Southampton in the UK, we’ve been working to tame that, and we’ve improved density, energy over a hundred-fold in the time period that we’ve been working on it, and the speed over ten thousand-fold. And we continue to, in our research, to make Silica better and faster. And, yes, you’re right, five years might seem like quite a long time. A comparison that you might think of here is the history of the hard drive. In the history of the hard drive, there was a point in history at which humans discovered the physical effect of magnetism. And it took us actually quite a long time as a species to go from magnetism to hard drives. In this case, this new physical effect that was discovered at Southampton, this new physical effect, you can think of it a bit like discovering magnetism, and taking it all the way from there to actually a real operating storage system actually takes quite a lot of research and effort and development, and that’s the path that we’ve been on doing that, taming and improving densities and speeds and energies and so on during the years of the project.
HUIZINGA: Well, talk a little bit more about the reading and writing of this medium. What’s involved technically on how you get the data on and how you retrieve it?
BLACK: Yeah, and so interestingly the writing of the data and the reading of the data are actually completely different. So writing the data is done with an ultrashort laser pulse. It’s actually a femtosecond-length pulse, and a femtosecond is one-thousandth of one-millionth of one-millionth of a second. And if you take even quite a small amount of energy and you compress it in time into a pulse that short and then you use a lens to focus it in space into just a tiny point, then the intensity of the light at that point during that pulse is just so mind-bogglingly high that you actually get something called a plasma-induced nano-explosion. [LAUGHTER] And I’m not an appropriate physicist of the right sort by background, but I can tell you that what that does is it really transforms the glass crystal at that point but in a way in which it’s, just, it’s so short—the time pulse is so short—it doesn’t really get to damage the crystal around that point. And that’s what enables the data to be incredibly durable because you’ve made this permanent, detectable, and yet transparent change to the glass crystal.
HUIZINGA: So that’s writing. What about reading?
BLACK: Reading you do with a microscope!
HUIZINGA: Oh, my gosh.
BLACK: So it’s a much more straightforward process. A reader is basically a computer-controlled, high-speed, high-quality microscope. And you focus the microscope at an appropriate depth inside the glass, and then you just photograph it. And you get to, if it’s an appropriate sort of microscope, you get to see the changes that you’ve made to the glass crystal. And then we process those images, in fact, using machine learning neural networks to turn it back into the data that we’d originally put into the glass platter. So reading and writing quite different. And on the reading, we’re just using regular light, so the reading process can’t possibly damage the data that’s been stored inside the glass.
HUIZINGA: I imagine you wouldn’t want to get your eye in the path of a femtosecond laser …
BLACK: Yes, femtosecond lasers are not for use at home! That’s quite true. In fact, your joke comment about the eye is … eye surgery is also actually done with femtosecond lasers. That’s one of the other applications.
HUIZINGA: Oh, OK! So maybe you would!
BLACK: But, yes, no, this is definitely something that, for many reasons, Silica is something that’s related to cloud technology, the writing process. And I think we’ll get back to that perhaps later in our discussion.
HUIZINGA: Yeah, yeah.
BLACK: But, yeah, definitely not something for the home.
HUIZINGA: How powerful is the microscope that you have to use to read this incredibly small written data?
BLACK: It’s fairly straightforward from a power point of view, but it has been engineered to be high-speed, high-quality, and under complete computer control that enables us to move rapidly around the piece of glass to wherever the data is of interest and then image at high speed to get the data back out.
HUIZINGA: Yeah. Well, so as you describe it, these amazingly tiny laser pulses store zettabytes of data. Talk for one second, still technically, about how you find and extract the data. You know, I’ve used this analogy before, but at the end of the movie Indiana Jones, the Ark of the Covenant is stored in an army warehouse. And the camera pulls back and there’s just box after box after crate after crate. … It’s like, you’ll never find it. Once you’ve written and stored the data, how do you go about finding it?
BLACK: So like all storage media, whether it be hard drive, tape, flash that might be in your phone in your pocket, there are standard indexing methods. You know, there’s an addressing system, you know, blocks and sectors and tracks. And, you know, we use all of these, kind of, standard terminology in terms of the way we lay the data out on the glass, and then each piece of glass is uniquely identified, and the glass is stored in the library. And actually, we’ve done some quite interesting work and novel work on the robotics that we use for handling and moving the pieces of glass in Silica. It’s interesting Dexter is talking about being interested in robotics. We’ve done a whole bunch of new interesting robotics in Silica because we wanted the shelving or the library system that we keep the glass on to last as long as the glass. And so we wanted it to be completely passive. And we wanted all of the, kind of, the active components to be in the robotics. So we have these new robots that we call shuttles that can, kind of, climb around the library and retrieve the bits of glass that are needed and take them to a reader whenever reading is needed, and that enables us really to scale out a library to enormous scale over many decades or centuries and to just keep growing a passive, completely passive, library.
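The addressing scheme Richard sketches here, with unique platter IDs, layers, tracks, and sectors, plus a passive shelf that robotic shuttles visit, can be pictured with a toy catalogue. The sketch below is purely illustrative: every class name, field, and value is hypothetical and is not drawn from Project Silica’s actual design; it only shows how a logical on-glass address combined with a shelf location is enough to route a retrieval request.

```python
# Illustrative only: a toy index mapping stored objects to glass platters and
# library shelf slots. All names, fields, and numbers are hypothetical and do
# not reflect Project Silica's real addressing scheme.
from dataclasses import dataclass
from typing import Dict, Tuple


@dataclass(frozen=True)
class PlatterAddress:
    platter_id: str   # unique ID assigned to each piece of glass
    layer: int        # depth layer inside the thin platter
    track: int        # track within the layer
    sector: int       # sector within the track


@dataclass
class ShelfSlot:
    rack: int
    column: int
    row: int


class GlassLibraryIndex:
    """Toy catalogue: where each platter lives and where each object starts."""

    def __init__(self) -> None:
        self._slots: Dict[str, ShelfSlot] = {}         # platter_id -> shelf slot
        self._objects: Dict[str, PlatterAddress] = {}  # object name -> start address

    def shelve(self, platter_id: str, slot: ShelfSlot) -> None:
        self._slots[platter_id] = slot

    def record_object(self, name: str, address: PlatterAddress) -> None:
        self._objects[name] = address

    def locate(self, name: str) -> Tuple[ShelfSlot, PlatterAddress]:
        """Return the shelf slot a shuttle must visit and the on-platter address to image."""
        address = self._objects[name]
        return self._slots[address.platter_id], address


# Usage: the shelving stays completely passive; the lookup stands in for the shuttle's job.
index = GlassLibraryIndex()
index.shelve("platter-0001", ShelfSlot(rack=3, column=12, row=7))
index.record_object("golden-record-images",
                    PlatterAddress("platter-0001", layer=42, track=5, sector=18))
slot, addr = index.locate("golden-record-images")
print(slot, addr)
```

The real system layers robotics scheduling, redundancy, and error correction on top of this, but the lookup structure is the part the conversation touches on.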
HUIZINGA: Yeah, I saw a video of the retrieval and it reminded me of those old-fashioned ladders in libraries where you scoot along and you’re on the wall of books and this is, sort of, like the wall of glass. … So, Richard, part two. Let’s talk about Silica from a practical point of view because apparently not all data is equal, and Silica isn’t for everyone’s data all the time. So who are you making this for generally speaking and why? And did you have aliens on your bingo card when you first started?!
BLACK: So, no, I didn’t have aliens [LAUGHTER] on the bingo card when I first started, definitely not. But as I mentioned, yeah, Project Silica is really about archival data. So that’s data that needs to be kept for many years—or longer—where it’s going to be accessed infrequently, and when you do need to access it, you don’t need it back instantaneously. And there’s actually a huge and increasing amount of data that fits those criteria and growing really very rapidly. Of course it’s not the kind of data that you keep in your pocket, but there is a huge amount of it. A lot of archival records that in the past might have been generated and kept on paper, they’re now, in the modern world, they’re all born digital. And we want to look for a low-cost- and low-environment-footprint way of really keeping it in that digital format for the length of time that it needs to be kept. And so Silica is really for data that’s kept in the cloud, not the pocket or the home or the business. Today most organizations already use the cloud for their digital data to get advantages of cost, sustainability, efficiency, reliability, availability, geographic redundancy, and so on. And Silica is definitely designed for that use case. So archival data in the cloud, data that needs to be kept for a long time period, and there’s huge quantities of it and it’s pouring in every day.
HUIZINGA: So concrete example. Financial data, medical data, I mean, what kinds of verticals or sectors would find this most useful?
BLACK: Yeah, so the financial industry, there’s a lot of regulatory requirements to keep data. Obviously in the healthcare situation, there’s a lot of general record keeping, any archives, museums, and so on that exist today. We see a lot of growth in things like the extractive industries, any kind of mining. You want to keep really good records of what it was that you did to, you know, did underground or did to the earth. The media and entertainment industry is one where they create a lot of content that needs to be kept for long time periods. We see scientific research studies where they measure and accumulate a large quantity of data that they want to keep for future analysis, possibly, you know, use it later in training ML models or just for future analysis. Sometimes that data can’t be reproduced. You know, it represents a measurement of the earth at some point and then, you know, things have changed and it wouldn’t be possible to go back and recapture that data.
HUIZINGA: Right.
BLACK: We see stuff in government and local government. One example is we see some local governments who want, essentially, to create a digital twin of their city. And so when new buildings are being built, they want to keep the blueprints, the photographs of the construction site, all of the data about what was built from floor plans and everything else that would help not only emergency services but just help the city in general to understand what’s in its environment, and they want all of that to be kept while that building exists in their city. So there’s lots and lots and lots of growing data that needs to be kept—sometimes for legal reasons, sometimes for practical reasons—lots of it a really fast-growing tier within the data universe.
HUIZINGA: Yeah. Dexter, let’s go back to you. On the Avenues website, it says the purpose of the Golden Record is to, as you mentioned before, “represent humanity and Earth to potential extraterrestrial beings, encapsulating our existence through a collection of visuals and sounds.” That’s pretty similar to the first Golden Record’s mission. But yours is also different in many ways. So talk about what’s new with this version, not just the medium but how you’re going about putting things together, both conceptually and technically.
GREENE: Yeah. So that’s a great question. I can take it in a million different directions. I’ll start by just saying of course the new technology that Dr. Black is working on is, like, the biggest change, at least in my view, because I like this kind of stuff. [LAUGHTER] But that’s like really the huge thing—durability, longevity, and capacity, capacity being one of the main aspects. We could just fit so much more content than was possible 50 years ago. But there’s a lot more. So on the original Golden Record, they only had weeks to work on the project before it had to be ready to go, to put on the Voyager 1 and 2 spacecrafts. So they had a huge time constraint, which of course we don’t have now. We’ve got as much time as we need. And then … I’ll talk about how we’ve been working on the project. So we split up into two main teams, content and form. Form being media, which I, like I said earlier, is the team that I work on. And our content team has been going through loads of websites and online databases, which is another huge difference. When they created the original Golden Record 50 years ago, they actually had to look through books and, like, photocopy each image they wanted. Of course now we don’t have to do that. We just find them online and drag and drop them into a folder. So there’s that aspect, which makes it so much easier to compile so much content and good-quality content that is ethically sourced. So we can find big databases that are OK with giving us their data. Diversity is another big aspect that we’ve been thinking about. The original Golden Record team didn’t have a lot of time to really focus on diversity and capturing everything, the whole image of what we are, which is something that we’ve really been working on. We’re trying to get a lot of different perspectives and cover really everything there is to cover, which is why we actually have an online submission platform on our website where any random person can take an image of their cat that they like [LAUGHTER] or an image of their house or whatever it may be and they can submit that and it will make its way into the content and actually be part of the Golden Record that we hopefully send to space.
HUIZINGA: Right. So, you know, originally, like you say, there’s a sense of curation that has to happen. I know that originally, they chose not to include war or conflict or anything that might potentially scare or frighten any intelligence that found it, saying, hey, we’re not those people. But I know you’ve had a little bit different thinking about that. Tell us about it.
GREENE: Yeah, so that’s something that we’ve talked about a lot, whether or not we should include good and bad. It’s funny. I actually wrote some of my college essays about that, so I have a lot to say about it. I’ll just give you my point of view, and I think most of my team shares the same point of view. We should really capture who we are with the fullest picture that we can without leaving anything out. One of the main reasons that I feel that way is what might be good to us could be bad to extraterrestrials. So I just don’t think it’s worth it to exclude something if we don’t even know how it’s perceived to someone else.
HUIZINGA: Mm-hmm. So back to the space limitations, are you having to make choices for limiting your data, or are you just sort of saying, let’s put everything on?
GREENE: So on the original Golden Record, of course they really meticulously curated everything that went on the record because there wasn’t that much space.
HUIZINGA: Yeah …
GREENE: So they had to be very careful with what they thought was worth it or not. Now that we have so much space, it seems worth it just to include everything that we can include because maybe they see something that we don’t see from an image.
HUIZINGA: Right.
GREENE: The one thing that we … at the very beginning, during my J-Term in 11th grade, we were actually lucky enough to have Jon Lomberg[3], one of the members of the original team, come in to talk to us a bit. And he gave us a, sort of, a lesson about how to choose images, and he was actually the one that chose a lot of the images for the original record. So it was really insightful. One thing we talked a lot about was, like, shadows. A shadow could be very confusing and, sort of, mess up how they perceive the image, but it also might just be worth including because, why not? We can include it, and maybe they get something … they learn about shadows from it even though it’s confusing. So that’s, sort of, how we have thought about it.
HUIZINGA: Well, that’s an interesting segue, because, Richard, at this point, I usually ask what could possibly go wrong if you got everything right. And there are some things that you think, OK, we don’t know. Even on Earth, we have different opinions about different things. And who knows what any other intelligence might think or see or interpret? But, I want to steer away from that question because when we talked earlier, Richard, I was intrigued by something you said, and I want you to talk about it here. I’ll, kind of, paraphrase, but you basically said, even if there’s no intelligent life outside our planet, this is a worthwhile exercise for us as humans. Why’d you say that?
BLACK: Well, I had two answers to that, one, kind of, one selfish and one altruistic! [LAUGHTER] I talk to a lot of archival data users, and those who are serious about keeping their data for many hundreds of years, they think about the problem in, kind of, three buckets. So one is the keeping of the bits themselves. And of course that’s what we are working on in Project Silica and what Silica is really excellent at. One is the metadata, or index, that records what is stored, where it’s stored, and so on. And that’s really the provenance or the remit of the archivist as curator. And then the third is really ensuring that there’s an understanding of how to read the media that persists to those future generations who’ll want to read it. And this is sometimes called the Rosetta Stone problem, and that isn’t the core expertise of me or my team. But the Golden Record, kind of, proves that it can be solved. You know, obviously, humanity isn’t going to give up on microscopes, but if we can explain to extraterrestrials how they would go about reading a Silica platter, then it should be pretty obvious that we can explain to our human descendants how to do so.
HUIZINGA: Hmmm.
BLACK: The altruistic reason is that I think encouraging humanity to reflect on itself—where we are, the challenges ahead for us as a species here on planet Earth—you know, this is a good time to think those thoughts. And any time capsule—and the Golden Record, you can, kind of, view it a bit like a time capsule—it’s a good time to step back and think those philosophical thoughts.
HUIZINGA: Dexter, do you have any thoughts? I know that Dr. Black has, kind of, taken the lead on that, but I wonder if you’ve given any thought to that yourself.
GREENE: Yeah, we’ve given a lot of thought to that: even if the record doesn’t reach extraterrestrials, is it worth it? Why are we doing this? And we feel the exact same as Dr. Black. It’s so worth it just for us to reflect on where we are and how we can improve what we’ve done in the past and what we can do in the future. It’s a … like Dr. Black said, it’s a great exercise for us to do. And it’s exciting. One of the beautiful parts about this project is that there’s no, like, right or wrong answer. Everyone has a different perspective on it.
HUIZINGA: Yeah …
GREENE: And I think this is a great way to think about that.
HUIZINGA: Yeah. So, Dexter, I always ask my collaborators where their project is on the spectrum from lab to life. But this research is a bit different from some of the other projects we featured. What is the, sort of, remit of your timeline? Is there one for completing the record in any way? Who, if anyone, are you accountable to? And what are your options for getting it up into space once it’s ready to go? Because there is no Voyager just imminently leaving right now, as I understand it. So talk a little bit about the scope from lab to life on this.
GREENE: Yeah. So, like you said, we don’t really have an exact timeline. This is, sort of, one of those projects where we could compile content forever. [LAUGHTER] There’s always more content to get. There’s always more perspectives to include. So I could do this forever. But I think the goal is to try and get all the content and get everything ready within the next couple years. As for who we’re accountable to, we’re, sort of, just accountable to ourselves. The way we’ve been working on this is not really like a club, I wouldn’t say, more just like a passion project that a few students and a few teachers have taken a liking to, I guess. So we’re just accountable to ourselves. We of course, like, we have meetings every week, and my teacher was the one that, like, organized the meetings. So I was, sort of, accountable to my teacher but really just doing it for ourselves.
HUIZINGA: Mm-hmm.
GREENE: As for getting it up into space, we have been talking a bit with the team led by Dr. Jiang. So ideally, in the future, we would collaborate more with them and [LAUGHS] go find our ticket to space on a NASA spaceship! But there are of course other options that we’ve been looking at. There’s a bunch of space agencies all around the world. So we’re not just looking at the United States.
HUIZINGA: Well, there’s also private space exploration companies …
GREENE: Yeah, and there’s also private space like SpaceX and etc. So we’ve thought about all of that, and we’ve been reaching out to other space agencies.
HUIZINGA: I love that “ticket to outer space” metaphor but true because there are constraints on what people can put on, although glass of this size would be pretty light.
GREENE: I feel the same way. You do have to get, like, approved. Like, for the original Golden Record, they had to get everything approved to make it to space. But I would think that it would be pretty reasonable—given the technology is just a piece of glass, essentially, and it’s quite small, the smallest it could be, really—I would think that there wouldn’t be too much trouble with that.
HUIZINGA: So, so … but that does lead to a question, kind of, about then extracting, and you’ve addressed this before by kind of saying, if the intelligence that it gets to is sophisticated enough, they’ll probably have a microscope, but I’m assuming you won’t include a microscope? You just send the glass?
GREENE: Yeah. So on the original record, they actually included a … I’m not sure what it’s called, but the device that you need to …
HUIZINGA: A phonograph?
GREENE: … play a rec … yeah, a phonograph, yes. [LAUGHTER] So they include—sorry! [LAUGHS]—they included a phonograph [cartridge and stylus] on the original Voyagers. And we’ve thought about that. It would probably be too difficult to include an actual microscope, but something that I’ve been working on is instructions on not exactly how to make the microscope that you would need but just to explain, “You’re going to need a microscope, and you’re going to need to play around with it.” One of the assumptions that we’ve made is that they will be curious and advanced. I mean, to actually retrieve the data, they would need to catch a spaceship out of the sky as it flies past them …
HUIZINGA: Right!
GREENE: … which we can’t do at the moment. So we’re assuming that they’re more advanced than us, curious, and would put a lot of time into it. Time and effort.
HUIZINGA: I always find it interesting that we always assume they’re smarter than us or more advanced than us. Maybe they’re not. Maybe it’s The Gods Must Be Crazy, and they find a computer and they start banging it on a rock. Who knows? Richard, setting aside any assumptions that this Golden Record on glass makes it into space and assuming that they could catch it and figure it out, Silica’s main mission is much more terrestrial in nature. And part of that, as I understand it, is informing the next generation of cloud infrastructure. So if you could, talk for a minute about the vision for the future of digital storage, particularly in terms of sustainability, and what role Silica may play in helping huge datacenters on this planet be more efficient and maybe even environmentally friendly.
BLACK: Yes, absolutely. So Microsoft is passionate about improving the sustainability of our operations, including data storage. So today archival data uses tape or hard drives, but those have a lifetime of only a few years, and they need to be continually replaced over the lifetime of the data. And that contributes to the costs both in manufacturing and it contributes to e-waste. And of course, those media also can consume electricity during their lifetime, either keeping them spinning or in the careful air-conditioning that’s required to preserve tape. So the transformative advantage of Silica is really in the durability of the data permanently stored in the glass. And this allows us to move from costs—whatever way you think about cost, either money or energy or a sustainability cost—move from costs that are based on the lifetime of the data to costs that are based on the operations that are done to the data. Because the glass doesn’t really need any cost while it’s just sitting there, while it’s doing nothing. And that’s a standout change in the way we can think about keeping archival data because it moves from, you know, a continual, as it were, monthly cost associated with keeping the thing over and over and over to, yeah, you have to pay to write. If you need to read the data, you have to pay the cost to read the data. But in the meantime, there’s no cost to just keeping it around in case you need it. And that’s a big change. And so actually, analysis suggests that Silica should be about a factor of 10 better for sustainability over archival time periods for archival data.
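To make the shift from lifetime-based to operation-based costs concrete, here is a back-of-the-envelope sketch. The numbers are invented for illustration and are not Microsoft’s figures; the point is only the structure of the comparison: media that must be replaced every few years accumulate cost over an archival horizon, while a write-once medium pays mostly at write time and per read.

```python
# Back-of-the-envelope comparison with illustrative numbers only (not Microsoft's figures).
# Replaceable media incur a recurring cost per media generation plus ongoing power/cooling;
# write-once glass pays mostly when data is written and a little when it is read.

def replaceable_media_cost(retention_years: float, media_lifetime_years: float,
                           cost_per_generation: float, yearly_power_cost: float) -> float:
    """Total cost of keeping data on media that must be rewritten every few years."""
    generations = -(-retention_years // media_lifetime_years)  # ceiling division
    return generations * cost_per_generation + retention_years * yearly_power_cost


def write_once_glass_cost(write_cost: float, reads: int, cost_per_read: float) -> float:
    """Total cost when the medium itself outlasts the whole retention period."""
    return write_cost + reads * cost_per_read


retention = 100  # hypothetical archival horizon in years
tape_like = replaceable_media_cost(retention, media_lifetime_years=5,
                                   cost_per_generation=10.0, yearly_power_cost=0.5)
glass_like = write_once_glass_cost(write_cost=20.0, reads=3, cost_per_read=1.0)
print(f"replaceable media: {tape_like:.0f} cost units, write-once glass: {glass_like:.0f}")
```

With these made-up inputs the recurring-replacement total comes out roughly an order of magnitude higher, which is the shape of the comparison Richard describes; the actual factor depends on real hardware, energy, and operational figures.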
HUIZINGA: And I would imagine “space” is a good proof of concept for how durable and how long you expect it to be able to last and be retrieved. Well …
BLACK: Absolutely. You know, Dexter mentioned the original Golden Record had to get a, kind of, approval to be considered space-worthy. In fact, the windows on spacecraft that we use today are made of fused silica glass. So the fused silica glass is already considered space-worthy! You know, that’s a problem that’s already solved. And, you know, it is known to be very robust and to survive the rigors of outer space.
HUIZINGA: Yeah, and the large datacenter! Well, Dexter, you’re embarking on the next journey in your life, heading off to university this fall. What are you going to be studying, and how are you going to keep going with Avenues’ Golden Record once you’re at college because you don’t have any teachers or groups or whatever?
GREENE: Yeah, that’s a great question. So, like I said, I plan to major in robotics engineering. That’s still, I guess, like, TBD. I might do mechanical engineering, but I’m definitely leaning more towards robotics. And as for the project, I definitely want to continue work on the project. That’s something I’ve made very clear to my team. Like you said, like, I won’t have a teacher there with me, but one of the teachers that works on the project was my physics teacher last year, and I’ve developed a very good relationship with him. I can say for sure that I’ll continue to stay in touch with him, the rest of the team, and this project, which I’m super excited to be working on. And I think we’re really … we, sort of, got past the big first hump, which was like the, I guess, the hardest part, and I feel like it will be smooth sailing from here!
HUIZINGA: Do you think any self-imposed deadlines will help you close off the process? Because I mean, I could see this going … well, I should ask another question. Are there other students at Avenues, or any place else, that are involved in this that haven’t graduated yet?
GREENE: Yes, there are a few of us. Last year when we were working on the project, there were only a handful of us. So it was me and my best friend, Arthur Wilson, who also graduated. There were three other students. One was a ninth grader, and two were 10th graders. So they’re all still working on the project. And there’s one student from another campus that’s still working very closely on the project. And we’ve actually been working on expanding our team within our community. So at the end of last year, we were working on finding other students that we thought would be a great fit for the project and trying to rope them into it! [LAUGHTER] So we definitely want to continue to work on the project. And to answer your question from before about the deadlines, we like to set, sort of, smaller internal deadlines. That’s something that we’ve gotten very used to. As for a long-term deadline, we haven’t set one yet. It could be helpful to set a long-term deadline because if we don’t, we could just do the project forever.
HUIZINGA: [LAUGHS] Right …
GREENE: We might never end because there’s always more to add. But yeah, we do set smaller internal deadlines, so like get x amount of content done by this time, reach out to x number of space agencies, reach out to x number of whatever.
HUIZINGA: Mm-hmm. Yeah, it feels like there should be some kind of, you know, “enough is enough” for this round.
GREENE: Yeah.
HUIZINGA: Otherwise, you’re the artist who never puts enough paint on the canvas and …
GREENE: I also really like what you said just now with, like, “this round” and “next round.” That’s a very good way to look at it. Like Dr. Black said, he produced two platters for us already towards the end of my last school year. And I think that was a very good, like, first round and a good way to continue doing the project where we work on the project and we get a lot of content done and then we can say, let’s let this be a great first draft or a great second draft for now, and we have that draft ready to go, but we can continue to work on it if we want to.
HUIZINGA: Well, you know the famous computer science tagline “Shipping is a feature.” [LAUGHS] So there’s some element of “let’s get it out there” and then we can do the next iteration of upgrades and launch then.
GREENE: Exactly.
HUIZINGA: Well, Richard, while most people don’t put scientists and rock stars in the same bucket, Dexter isn’t the first young person to admit being a little intimidated—and even starstruck—by an accomplished and well-known researcher, but some students aren’t bold enough to cold email someone like you and ask for words of wisdom. So now that we’ve got you on the show, as we close, perhaps you could voluntarily share some encouraging words or direction to the next generation of students who are interested in making the next generation of technologies. So I’ll let you have the last word.
BLACK: Oh, I have a couple of small things to say. First of all, researchers are just people, too. [LAUGHTER] And, you know, they like others to talk to them occasionally. And usually, they like opportunities to be passionate about their research and to communicate the exciting things that they’re doing. So don’t be put off; it’s quite reasonable to talk. You know, I’m really excited by, you know, the, kind of, the passion and imagination that I see in some of the young people around today, and Dexter and his colleagues are an example of that. You know, advice to them would be, you know, work on a technology that excites you and in particular something that, if you were successful, it would have a big impact on our world and, you know, that should give you a kind of motivation and a path to having impact.
HUIZINGA: Hmm. What you just said reminded me of a Saturday Night Live skit with Christopher Walken—it’s the “More Cowbell” skit—but he says, we’re just like other people; we put our pants on one leg at a time, but once our pants are on, we make gold records! I think that’s funny right there!
[MUSIC] Richard and Dexter, thank you so much for coming on and sharing this project with us today on Collaborators. Really had fun!
GREENE: Yeah, thank you so much for having us.
BLACK: Thank you.
[MUSIC FADES]
[1] It was later noted that the original Golden Record team was also led by astrophysicist Frank Drake, whose efforts to search for extraterrestrial intelligence (SETI) inspired continued work in the area.
[2] While Dr. Jiang leads the Humanity’s Message to the Stars project, it is independent of NASA at this stage.
[3] In his capacity as Design Director for the original Golden Record, Lomberg chose and arranged the images included.
The post Collaborators: Silica in space with Richard Black and Dexter Greene appeared first on Microsoft Research.