Improving the factual accuracy of language models through web browsing

We’ve fine-tuned GPT-3 to more accurately answer open-ended questions using a text-based web browser. Our prototype copies how humans research answers to questions online – it submits search queries, follows links, and scrolls up and down web pages. It is trained to cite its sources, which makes it easier to give feedback to improve factual accuracy. We’re excited about developing more truthful AI, but challenges remain, such as coping with unfamiliar types of questions.

Read paper
Browse samples

Language models like GPT-3 are useful for many different tasks, but have a tendency to “hallucinate” information when performing tasks requiring obscure real-world knowledge. To address this, we taught GPT-3 to use a text-based web browser. The model is provided with an open-ended question and a summary of the browser state, and must issue commands such as “Search …”, “Find in page: …”, or “Quote: …”. In this way, the model collects passages from web pages, and then uses these to compose an answer.
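For illustration, here is a minimal sketch of how such a command interface could be dispatched. The command strings follow the description above, but the class, its methods, and the stubbed search backend are invented for this sketch and are not the actual WebGPT environment.

    # Illustrative only: a toy dispatcher for the browsing commands described above.
    class TextBrowser:
        def __init__(self):
            self.page = ""      # text of the currently open page
            self.quotes = []    # passages collected for composing the final answer

        def run(self, command: str) -> str:
            if command.startswith("Search "):
                self.page = self.search(command[len("Search "):])
                return self.page[:500]                  # truncated view of the browser state
            if command.startswith("Find in page: "):
                term = command[len("Find in page: "):]
                idx = self.page.find(term)
                return self.page[idx:idx + 500] if idx >= 0 else "Not found."
            if command.startswith("Quote: "):
                self.quotes.append(command[len("Quote: "):])
                return f"{len(self.quotes)} quote(s) collected."
            return "Unknown command."

        def search(self, query: str) -> str:
            # Placeholder: a real implementation would call a web search API.
            return f"Results for {query!r} ..."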

The model is fine-tuned from GPT-3 using the same general methods we’ve used previously. We begin by training the model to copy human demonstrations, which gives it the ability to use the text-based browser to answer questions. Then we improve the helpfulness and accuracy of the model’s answers, by training a reward model to predict human preferences, and optimizing against it using either reinforcement learning or rejection sampling.
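For intuition, here is a minimal sketch of the rejection-sampling (best-of-n) step. The functions generate_answer and reward_model are placeholders standing in for the fine-tuned policy and the learned reward model, not actual APIs.

    # Sketch of best-of-n rejection sampling against a reward model.
    import random

    def generate_answer(question: str) -> str:
        # Stand-in for sampling one answer from the fine-tuned policy.
        return f"sampled answer {random.random():.3f} to: {question}"

    def reward_model(question: str, answer: str) -> float:
        # Stand-in for the learned model of human preferences.
        return random.random()

    def best_of_n(question: str, n: int = 64) -> str:
        candidates = [generate_answer(question) for _ in range(n)]
        return max(candidates, key=lambda a: reward_model(question, a))

    print(best_of_n("Why is the sky blue?", n=4))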

Cherry-picked samples from our best-performing model (175B with best-of-64 against a reward model).

Explore more samples

ELI5 results

Our system is trained to answer questions from ELI5, a dataset of open-ended questions scraped from the “Explain Like I’m Five” subreddit. We trained three different models, corresponding to three different inference-time compute budgets. Our best-performing model produces answers that are preferred 56% of the time to answers written by our human demonstrators, with a similar level of factual accuracy. Even though these were the same kind of demonstrations used to train the model, we were able to outperform them by using human feedback to improve the model’s answers.

Results of human evaluations on the ELI5 test set, comparing our model with human demonstrators. The amount of rejection sampling (the n in best-of-n) was chosen to be compute-efficient. Error bars show ±1 standard error.

TruthfulQA results

For questions taken from the training distribution, our best model’s answers are about as factually accurate as those written by our human demonstrators, on average. However, out-of-distribution robustness is a challenge. To probe this, we evaluated our models on TruthfulQA, an adversarially-constructed dataset of short-form questions designed to test whether models fall prey to things like common misconceptions. Answers are scored on both truthfulness and informativeness, which trade off against one another (for example, “I have no comment” is considered truthful but not informative).

Our models outperform GPT-3 on TruthfulQA and exhibit more favorable scaling properties. However, our models lag behind human performance, partly because they sometimes quote from unreliable sources (as shown in the question about ghosts above). We hope to reduce the frequency of these failures using techniques like adversarial training.

TruthfulQA results. For GPT-3, we used the prompts and automated metric from the TruthfulQA paper. For the web-browsing model, we truncated the long-form answers and used human evaluation, since the answers are out-of-distribution for the automated metric. Error bars show ±1 standard error.

Evaluating factual accuracy

In order to provide feedback to improve factual accuracy, humans must be able to evaluate the factual accuracy of claims produced by models. This can be extremely challenging, since claims can be technical, subjective or vague. For this reason, we require the model to cite its sources. This allows humans to evaluate factual accuracy by checking whether a claim is supported by a reliable source. As well as making the task more manageable, it also makes it less ambiguous, which is important for reducing label noise.

However, this approach raises a number of questions. What makes a source reliable? What claims are obvious enough to not require support? What trade-off should be made between evaluations of factual accuracy and other criteria such as coherence? All of these were difficult judgment calls. We do not think that our model picked up on much of this nuance, since it still makes basic errors. But we expect these kinds of decisions to become more important as AI systems improve, and cross-disciplinary research is needed to develop criteria that are both practical and epistemically sound. We also expect further considerations such as transparency to be important.

Eventually, having models cite their sources will not be enough to evaluate factual accuracy. A sufficiently capable model would cherry-pick sources it expects humans to find convincing, even if they do not reflect a fair assessment of the evidence. There are already signs of this happening (see the questions about boats above). We hope to mitigate this using methods like debate.

Risks of deployment and training

Although our model is generally more truthful than GPT-3 (in that it generates false statements less frequently), it still poses risks. Answers with citations are often perceived as having an air of authority, which can obscure the fact that our model still makes basic errors. The model also tends to reinforce the existing beliefs of users. We are researching how best to address these and other concerns.

In addition to these deployment risks, our approach introduces new risks at train time by giving the model access to the web. Our browsing environment does not allow full web access, but allows the model to send queries to the Microsoft Bing Web Search API and follow links that already exist on the web, which can have side-effects. From our experience with GPT-3, the model does not appear to be anywhere near capable enough to dangerously exploit these side-effects. However, these risks increase with model capability, and we are working on establishing internal safeguards against them.

Conclusion

Human feedback and tools such as web browsers offer a promising path towards robustly truthful, general-purpose AI systems. Our current system struggles with challenging or unfamiliar circumstances, but still represents significant progress in this direction.

If you’d like to help us build more helpful and truthful AI systems, we’re hiring!


References
  1. O. Evans, O. Cotton-Barratt, L. Finnveden, A. Bales, A. Balwit, P. Wills, L. Righetti, and W. Saunders. Truthful AI: Developing and governing AI that does not lie. arXiv preprint arXiv:2110.06674, 2021.
  2. J. Maynez, S. Narayan, B. Bohnet, and R. McDonald. On faithfulness and factuality in abstractive summarization. arXiv preprint arXiv:2005.00661, 2020.
  3. K. Shuster, S. Poff, M. Chen, D. Kiela, and J. Weston. Retrieval augmentation reduces hallucination in conversation. arXiv preprint arXiv:2104.07567, 2021.
  4. A. Fan, Y. Jernite, E. Perez, D. Grangier, J. Weston, and M. Auli. ELI5: Long form question answering. arXiv preprint arXiv:1907.09190, 2019.
  5. S. Lin, J. Hilton, and O. Evans. TruthfulQA: Measuring how models mimic human falsehoods. arXiv preprint arXiv:2109.07958, 2021.
  6. D. Metzler, Y. Tay, D. Bahri, and M. Najork. Rethinking search: Making experts out of dilettantes. arXiv preprint arXiv:2105.02274, 2021.


Acknowledgments

Thanks to our paper co-authors: Jeff Wu, Long Ouyang, Christina Kim, Christopher Hesse, Shantanu Jain, Vineet Kosaraju, William Saunders, Roger Jiang, Karl Cobbe, Tyna Eloundou, Gretchen Krueger, Kevin Button, Matthew Knight and Benjamin Chess.

Thanks to those who helped with and provided feedback on this release: Steven Adler, Sam Altman, Beth Barnes, Miles Brundage, Kevin Button, Steve Dowling, Alper Ercetin, Matthew Knight, Gretchen Krueger, Ryan Lowe, Andrew Mayne, Bob McGrew, Mira Murati, Richard Ngo, Jared Salzano, Natalie Summers and Hannah Wong.

Thanks to the team at Surge AI for helping us with data collection, and to all of our contractors for providing demonstrations and comparisons, without which this project would not have been possible.


Customizing GPT-3 for Your Application

Developers can now fine-tune GPT-3 on their own data, creating a custom version tailored to their application. Customizing makes GPT-3 reliable for a wider variety of use cases and makes running the model cheaper and faster.

You can use an existing dataset of virtually any shape and size, or incrementally add data based on user feedback. With fine-tuning, one API customer was able to increase correct outputs from 83% to 95%. By adding new data from their product each week, another reduced error rates by 50%.

To get started, just run a single command in the OpenAI command line tool with a file you provide. Your custom version will start training and then be available immediately in our API.

Read documentation


Last year we trained GPT-3 and made it available in our API. With only a few examples, GPT-3 can perform a wide variety of natural language tasks, a concept called few-shot learning or prompt design. Customizing GPT-3 can yield even better results because you can provide many more examples than what’s possible with prompt design.

You can customize GPT-3 for your application with one command and use it immediately in our API:

openai api fine_tunes.create -t <train_file>
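The training file is a JSONL file of prompt/completion pairs; see the fine-tuning documentation for the exact requirements of your use case. A minimal Python sketch for producing one, with made-up records, might look like this:

    # Writes a small JSONL training file of prompt/completion pairs.
    # The records below are invented examples, not real customer data.
    import json

    examples = [
        {"prompt": "Producto dañado al llegar ->", "completion": " Product damaged on arrival"},
        {"prompt": "No puedo iniciar sesión ->", "completion": " I can't log in"},
    ]

    with open("train_file.jsonl", "w", encoding="utf-8") as f:
        for record in examples:
            f.write(json.dumps(record, ensure_ascii=False) + "\n")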

It takes fewer than 100 examples to start seeing the benefits of fine-tuning GPT-3, and performance continues to improve as you add more data. In research published last June, we showed how fine-tuning with fewer than 100 examples can improve GPT-3’s performance on certain tasks. We’ve also found that each doubling of the number of examples tends to improve quality linearly.

With one of our most challenging research datasets, Grade School Math problems, fine-tuning GPT-3 improves accuracy by 2 to 4x over what’s possible with prompt design.

Two sizes of GPT-3 models, Curie and Davinci, were fine-tuned on 8,000 examples from one of our most challenging research datasets, Grade School Math problems. We compare the models’ ability to solve problems when 10 completions are created.

Customizing GPT-3 improves the reliability of output, offering more consistent results that you can count on for production use cases. One customer found that customizing GPT-3 reduced the frequency of unreliable outputs from 17% to 5%. Since custom versions of GPT-3 are tailored to your application, the prompt can be much shorter, reducing costs and improving latency.

Whether the task is text generation, summarization, classification, or any other natural language task GPT-3 is capable of performing, customizing GPT-3 will improve performance.

Apps Powered by Customized Versions of GPT-3

Keeper Tax helps independent contractors and freelancers with their taxes. After a customer links their financial accounts, Keeper Tax uses various models to extract text and classify transactions. Using the classified data, Keeper Tax identifies easy-to-miss tax write-offs and helps customers file their taxes directly from the app. By customizing GPT-3, Keeper Tax is able to continuously improve results. Once a week, Keeper Tax adds around 500 new training examples to fine-tune their model, which is leading to about a 1% accuracy improvement each week, increasing accuracy from 85% to 93%.

Viable helps companies get insights from their customer feedback. By customizing GPT-3, Viable is able to transform massive amounts of unstructured data into readable natural language reports, highlighting top customer complaints, compliments, requests, and questions. Customizing GPT-3 has increased the reliability of Viable’s reports. By using a customized version of GPT-3, accuracy in summarizing customer feedback has improved from 66% to 90%. The result is tangible, intuitive information that customers need to inform their product decisions.

Sana Labs is a global leader in the development and application of AI to learning. The Sana learning platform powers personalized learning experiences for businesses by leveraging the latest ML breakthroughs to tailor the content for each individual. By customizing GPT-3 with their data, Sana’s question and content generation went from grammatically correct but general responses to highly accurate outputs. This yielded a 60% improvement, enabling fundamentally more personalized and effective experiences for their learners.

Elicit is an AI research assistant that helps people directly answer research questions using findings from academic papers. The tool finds the most relevant abstracts from a large corpus of research papers, then applies a customized version of GPT-3 to generate the claim (if any) that the paper makes about the question. A custom version of GPT-3 outperformed prompt design across three important measures: results were easier to understand (a 24% improvement), more accurate (a 17% improvement), and better overall (a 33% improvement).

All API customers can customize GPT-3 today. Sign up and get started with the fine-tuning documentation.

How to customize GPT-3 for your application


Set up

  • Install the openai python-based client from your terminal:
    pip install --upgrade openai
  • Set your API key as an environment variable:
    export OPENAI_API_KEY=<api_key>

Train a custom model

  • Fine-tune the Ada model on a demo dataset for translating help messages from Spanish to English.
    openai api fine_tunes.create -m ada --n_epochs 2 \
      -t https://cdn.openai.com/API/train-demo.jsonl


    (Ctrl-C will interrupt the stream, but not cancel the fine-tune)
    [2021-12-08 12:11:30] Created fine-tune: ft-gK9R3N3lDQYQJD0SXqlF8Fnc
    [2021-12-08 12:11:40] Fine-tune costs $0.01
    [2021-12-08 12:11:40] Fine-tune enqueued. Queue number: 0
    [2021-12-08 12:11:45] Fine-tune started
    [2021-12-08 12:12:58] Completed epoch 1/2
    [2021-12-08 12:13:56] Completed epoch 2/2
    [2021-12-08 12:14:26] Uploaded model: ada:ft-org-2021-12-08-20-14-25
    [2021-12-08 12:14:29] Uploaded result file: file-QvY81nzrOhXMenjMS5OlPeBW
    [2021-12-08 12:14:30] Fine-tune succeeded
    Job complete! Status: succeeded 🎉
    Try out your fine-tuned model:
    openai api completions.create -m ada:ft-org-2021-12-08-20-14-25 -p <YOUR_PROMPT>

Use the custom model

  • Ask your customized model for a translation.
    openai api completions.create -m <model_ID> \
      --max-tokens 30 --temperature 0 --stop "###" \
      -p $'Conecte la PS3 y vaya a Configuración>Configuraciones de Red, seleccione la red y escriba sus credenciales.\nEnglish translation:'


    Conecte la PS3 y vaya a Configuración>Configuraciones de Red, seleccione la red y escriba sus credenciales.
    English translation: Connect the PS3 and go to Settings> Accounts Settings, select the network and write your credentials.


OpenAI Residency

As part of our effort to support and develop AI talent, we’re excited to announce the OpenAI Residency. This new program offers a pathway to a full-time role at OpenAI for researchers and engineers who don’t currently focus on artificial intelligence. We are excited to get applications from everyone, and will make a special effort to hear from underrepresented groups in technology.

The program is an iteration of our former Scholars and Fellows programs. The Residency shifts the focus away from curriculum-based learning, instead giving Residents an opportunity to work collaboratively alongside OpenAI teams on active projects.

The first cohort of the six-month program begins in April 2022 and Residents will be compensated as fully salaried employees for the duration of the program.

“There are many talented people who want to contribute to AI but cannot find an easy way to do so,” said Ilya Sutskever, OpenAI’s Chief Scientist. “The Residency aims to address that, by teaching participants the most important practical AI skills in a hands-on way as quickly as possible. We’ve welcomed incredible new talent to OpenAI through our Fellows and Scholars programs, who have made major research contributions and helped advance OpenAI’s goal of building beneficial AGI.”

Over the last three years we’ve made more than 20 full-time hires through our mentorship programs, representing one in six members of our technical staff, and our new iteration will broaden the range of candidates we are considering.

Excellent work and experience can come from both inside and outside of the traditional education and work settings. OpenAI has long been home to many self-taught researchers and engineers. If you have an unconventional educational background, we encourage you to apply. Our goal is for this program to be as inclusive and diverse as possible, and we will provide immigration and relocation support to high-potential talent globally.

“We’re going to need the best, most diverse talent and innovative minds out there to achieve our mission,” said Sam Altman, OpenAI’s CEO. “This type of thinker might be at a university, they might be fresh out of high school, working at a cutting-edge tech company or building something on their own. This program is an excellent way for people who are curious, passionate, and skilled to sharpen their focus on AI and machine learning — and to help us invent the future.”

The AI Software Engineering track is a great match for people who have an engineering background and would like to advance to a Software Engineering position in an AI company. We are looking for candidates with engineering experience in fast-paced environments.

“What’s unique about being a software engineer at OpenAI is the scale and novelty of the problems you’re working on every day,” said Christina Kim, a 2021 Scholar who is now working full-time as a Software Engineer on our Applied AI team. “OpenAI is at the cutting edge of engineering problems. If you’re excited about AI research, you can easily get involved in cross-functional work with a machine learning component by leveraging your engineering skills to further the company’s research.”

The AI Research track is ideal for people with a research background in a scientific non-ML field who would like to transition into a Research Scientist or Research Engineering position. We are looking for a record of achievement in another field such as mathematics, physics or neuroscience.

“This was my foot in the door to get into AI research,” said Christine McLeavey, a 2018 Scholar and Fellow who now manages OpenAI’s Multimodal team. “I had been studying on my own through online courses like deeplearning.ai and fast.ai for a year, and the support from OpenAI gave me the confidence to jump into the field full-time. I learned so much from being around such amazing researchers, and their mentorship had a huge influence on my project (MuseNet).”

“Having a mentor, gaining access to key infrastructure and tooling, and being part of the broader community at OpenAI helped me acquire a research taste, and to get to the interesting research questions a lot faster,” said Jonathan Ward, former Scholar and Fellow who will be returning to OpenAI in 2022 as a full-time Researcher on the Alignment team.

Applications for the Spring cohort in 2022 are open now through January 14, 2022 12AM (PST). Join us for a discussion with a panel of former OpenAI Scholars and Fellows on December 8 to learn more about the program. Add this event to your calendar.

Apply now

OpenAI’s API Now Available with No Waitlist

OpenAI is committed to the safe deployment of AI. Since the launch of our API, we’ve made deploying applications faster and more streamlined while adding new safety features. Our progress with safeguards makes it possible to remove the waitlist for GPT-3. Starting today, developers in supported countries can sign up and start experimenting with our API right away.

Improvements to our API over the past year include the Instruct Series models that adhere better to human instructions, specialized endpoints for more truthful question-answering, and a free content filter to help developers mitigate abuse. Our work also allows us to review applications before they go live, monitor for misuse, support developers as their product scales, and better understand the effects of this technology.

Other changes include an improved Playground, which makes it easy to prototype with our models, an example library with dozens of prompts to get developers started, and Codex, a new model that translates natural language into code.

Tens of thousands of developers are already taking advantage of powerful AI models through our platform. We believe that by opening access to these models via an easy-to-use API, more developers will find creative ways to apply AI to a large number of useful applications and open problems.

To ensure API-backed applications are built responsibly, we provide tools and help developers use best practices so they can bring their applications to production quickly and safely. As our systems evolve and we work to improve the capabilities of our safeguards, we expect to continue streamlining the process for developers, refining our Usage Guidelines, and allowing even more use cases over time.

As another step in this direction, we are also updating our Content Guidelines to clarify what kind of content our API can be used to generate. Our policies have always prohibited the use of our API in ways that do not adhere to the principles described in our charter, and content like hate speech remains prohibited.

To help developers ensure their applications are used for their intended purpose, prevent potential misuse, and adhere to our content guidelines, we offer developers a free content filter. We are currently testing targeted filters for specific content categories with some customers.

We are also prohibiting certain types of content on our API, like adult content, where our system is not currently able to reliably discern harmful from acceptable use. We are continually working to make our content filters more robust and we intend to allow acceptable use within some categories as our system improves.

We’re excited to have the safeguards in place to open up GPT-3 for more developers. As our safeguards continue to improve, we will expand how the API can be used while further improving the experience for our users. Sign up today and try it out.

Solving Math Word Problems

We’ve trained a system that solves grade school math problems with nearly twice the accuracy of a fine-tuned GPT-3 model. It solves about 90% as many problems as real kids: a small sample of 9-12 year olds scored 60% on a test from our dataset, while our system scored 55% on those same problems. This is important because today’s AI is still quite weak at commonsense multistep reasoning, which is easy even for grade school kids. We achieved these results by training our model to recognize its mistakes, so that it can try repeatedly until it finds a solution that works.

Read paper
Browse samples
Download dataset

Introduction

Large language models like GPT-3 have many impressive skills, including their ability to imitate many writing styles, and their extensive factual knowledge. However, they struggle to perform tasks that require accurate multistep reasoning, like solving grade school math word problems. Although the model can mimic the cadence of correct solutions, it regularly produces critical errors in logic.

To match human performance in complex logical domains, our models must learn to recognize their mistakes and to choose their steps carefully. To that end, we train verifiers to evaluate whether or not a proposed solution is correct. To solve a new problem, we use verifiers to select the best among many proposed solutions. We collected the new GSM8K dataset to evaluate our methods, and we are releasing this dataset to facilitate research.

In the ten examples below, we show solutions generated by our new method, verification, and our baseline method, fine-tuning.

GSM8K Dataset

GSM8K consists of 8.5K high-quality grade school math word problems. Each problem takes between 2 and 8 steps to solve, and solutions primarily involve performing a sequence of elementary calculations using basic arithmetic operations (+ − × ÷) to reach the final answer. Fine-tuned state-of-the-art language models perform poorly on this dataset, primarily due to the high diversity of problems. At the same time, GSM8K solutions depend only on elementary concepts, so achieving high test performance is a tractable goal.

Solutions in GSM8K are written as natural language rather than as pure math expressions. By sticking to natural language, model-generated solutions are more readily interpretable by humans, and our methods remain relatively domain agnostic.
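As a toy illustration of what checking a natural-language solution can look like, the sketch below pulls the last number out of a free-text solution and compares it to a reference answer. The example problem and the extraction rule are made up for this sketch rather than taken from GSM8K.

    # Toy illustration: grade a natural-language solution by its final number.
    import re

    def final_number(solution: str) -> float:
        numbers = re.findall(r"-?\d+(?:\.\d+)?", solution.replace(",", ""))
        return float(numbers[-1]) if numbers else float("nan")

    solution = ("Each box holds 6 pencils and there are 4 boxes, "
                "so there are 6 * 4 = 24 pencils in total.")
    print(final_number(solution) == 24)  # True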

Training Verifiers: Models that Learn from their Mistakes

One significant challenge in mathematical reasoning is the high sensitivity to individual mistakes. Autoregressive models, which generate each solution token by token, have no mechanism to correct their own errors. Solutions that veer off-course quickly become unrecoverable, as can be seen in the examples provided.

We address this problem by training verifiers to evaluate the correctness of model-generated solutions. Verifiers are given many possible solutions, all written by the model itself, and they are trained to decide which ones, if any, are correct.

To solve a new problem at test time, we generate 100 candidate solutions and then select the solution that is ranked highest by the verifier. Verifiers benefit from this inherent optionality, as well as from the fact that verification is often a simpler task than generation.
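Here is a minimal sketch of this test-time procedure, with the sampler and verifier stubbed out; they stand in for the models described in the paper rather than reproducing them.

    # Sketch of verifier-based reranking: sample many candidate solutions,
    # score each with a verifier, and return the highest-ranked one.
    import random

    def sample_solution(problem: str) -> str:
        # Stand-in for sampling one solution from the generator model.
        return f"candidate solution {random.random():.3f} for: {problem}"

    def verifier_score(problem: str, solution: str) -> float:
        # Stand-in for a model trained to judge whether a solution is correct.
        return random.random()

    def solve(problem: str, num_candidates: int = 100) -> str:
        candidates = [sample_solution(problem) for _ in range(num_candidates)]
        return max(candidates, key=lambda s: verifier_score(problem, s))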

We find that we get a strong boost in performance from verification, as long as the dataset is large enough. With datasets that are too small, we believe that the verifiers overfit by memorizing the final answers in the training set, rather than learning any more useful properties of mathematical reasoning.

On the full training set, a 6B parameter model using verification slightly outperforms a fine-tuned 175B parameter model, giving a performance boost approximately equivalent to a 30x increase in model size. Moreover, verification appears to scale more effectively with additional data, if we extrapolate based on current results.

Conclusion

Producing correct arguments and recognizing incorrect ones are key challenges in developing more general AI. Grade school math is an ideal testbed for these capabilities. The problems in GSM8K are conceptually simple, yet one subtle mistake is enough to derail an entire solution. Identifying and avoiding such mistakes is a crucial skill for our models to develop. By training verifiers, we teach our models to separate the good solutions from the ones that didn’t quite work out. We expect these skills to become increasingly relevant as we attempt to apply our models to more logically complex domains.


Acknowledgments

Thanks to the team at Surge AI for performing the GSM8K data collection.

Thanks to our paper co-authors: Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, and Christopher Hesse.

Thanks to those who provided feedback on this release: Dan Hendrycks, Leo Gao, Alec Radford, Giambattista Parascandolo, Harri Edwards, Yura Burda, Nick Ryder, Ilya Sutskever, Mira Murati, Sam Altman, Aris Konstantinidis, Andrew Mayne, Hannah Wong, and Steve Dowling.

Thank you to the students who volunteered to take our test!

Summarizing Books with Human Feedback

Read paper
Browse samples

To safely deploy powerful, general-purpose artificial intelligence in the future, we need to ensure that machine learning models act in accordance with human intentions. This challenge has become known as the alignment problem.

A scalable solution to the alignment problem needs to work on tasks where model outputs are difficult or time-consuming for humans to evaluate. To test scalable alignment techniques, we trained a model to summarize entire books, as shown in the following samples.[1] Our model works by first summarizing small sections of a book, then summarizing those summaries into a higher-level summary, and so on.

Explore more samples

Our best model is fine-tuned from GPT-3 and generates sensible summaries of entire books, sometimes even matching the average quality of human-written summaries: it achieves a 6/7 rating (similar to the average human-written summary) from humans who have read the book 5% of the time and a 5/7 rating 15% of the time. Our model also achieves state-of-the-art results on the BookSum dataset for book-length summarization. A zero-shot question-answering model can use our model’s summaries to obtain competitive results on the NarrativeQA dataset for book-length question answering.[2]

Our Approach: Combining Reinforcement Learning from Human Feedback and Recursive Task Decomposition

Consider the task of summarizing a piece of text. Large pretrained models aren’t very good at summarization. In the past we found that training a model with reinforcement learning from human feedback helped align model summaries with human preferences on short posts and articles. But judging summaries of entire books takes a lot of effort to do directly since a human would need to read the entire book, which takes many hours.

To address this problem, we additionally make use of recursive task decomposition: we procedurally break up a difficult task into easier ones. In this case we break up summarizing a long piece of text into summarizing several shorter pieces. Compared to an end-to-end training procedure, recursive task decomposition has the following advantages:

  1. Decomposition allows humans to evaluate model summaries more quickly by using summaries of smaller parts of the book rather than reading the source text.
  2. It is easier to trace the summary-writing process. For example, you can trace back to where in the original text certain events from the summary happen. See for yourself on our summary explorer!
  3. Our method can be used to summarize books of unbounded length, unrestricted by the context length of the transformer models we use.
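Here is a minimal sketch of this recursive decomposition, assuming a generic summarize(text) model call (stubbed out below so the sketch runs on its own): split the text into sections, summarize each, join the summaries, and repeat until the result fits in one pass.

    # Sketch of recursive summarization: summarize sections, then summarize
    # the concatenated summaries, repeating until the text fits in one pass.

    def summarize(text: str) -> str:
        # Stand-in for a model call; truncation keeps the sketch self-contained.
        return text[: max(1, len(text) // 4)]

    def split_into_sections(text: str, size: int = 2000) -> list:
        return [text[i:i + size] for i in range(0, len(text), size)]

    def summarize_book(text: str, max_len: int = 2000) -> str:
        while len(text) > max_len:
            sections = split_into_sections(text, max_len)
            text = "\n".join(summarize(s) for s in sections)
        return summarize(text)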

Why We Are Working on This

This work is part of our ongoing research into aligning advanced AI systems, which is key to our mission. As we train our models to do increasingly complex tasks, making informed evaluations of the models’ outputs will become increasingly difficult for humans. This makes it harder to detect subtle problems in model outputs that could lead to negative consequences when these models are deployed. Therefore we want our ability to evaluate our models to increase as their capabilities increase.

Our current approach to this problem is to empower humans to evaluate machine learning model outputs using assistance from other models. In this case, to evaluate book summaries we empower humans with individual chapter summaries written by our model, which saves them time when evaluating these summaries relative to reading the source text. Our progress on book summarization is the first large-scale empirical work on scaling alignment techniques.

Going forward, we are researching better ways to assist humans in evaluating model behavior, with the goal of finding techniques that scale to aligning artificial general intelligence.

We’re always looking for more talented people to join us; so if this work interests you, please apply to join our team!


Acknowledgments

We’d like to acknowledge our paper co-authors: Long Ouyang, Daniel Ziegler, Nisan Stiennon, and Paul Christiano.

Thanks to the following for feedback on this release: Steve Dowling, Hannah Wong, Miles Brundage, Gretchen Krueger, Ilya Sutskever, and Sam Altman.


Design: Justin Jay Wang


Footnotes

  1. These samples were selected from works in the public domain, and are part of GPT-3’s pretraining data. To control for this effect, and purely for research purposes, our paper evaluates summaries of books the model has never seen before. ↩︎

  2. We’ve amended our original claim about results on NarrativeQA after being made aware of prior work with better results than ours. ↩︎

Helen Toner Joins OpenAI’s Board of Directors

Today, we’re excited to announce the appointment of Helen Toner to our Board of Directors. As the Director of Strategy at Georgetown’s Center for Security and Emerging Technology (CSET), Helen has deep expertise in AI policy and global AI strategy research. This appointment advances our dedication to the safe and responsible deployment of technology as a part of our mission to ensure general-purpose AI benefits all of humanity.

“I greatly value Helen’s deep thinking around the long-term risks and effects of AI,” said Greg Brockman, OpenAI’s chairman and Chief Technology Officer. “I’m looking forward to the impact she will have on our progress towards achieving our mission.”

“Helen brings an understanding of the global AI landscape with an emphasis on safety, which is critical for our efforts and mission,” said Sam Altman, OpenAI’s CEO. “We are delighted to add her leadership to our board.”

“OpenAI is a unique organization in the AI research space, and has produced some of the advances, publications, and products I’m most excited about,” said Helen Toner. “I strongly believe in the organization’s aim of building AI for the benefit of all, and am honored to have this opportunity to contribute to that mission.”

Helen currently oversees CSET’s data-driven AI policy research, which provides nonpartisan analysis to the policy community. She previously advised policymakers and grantmakers on AI strategy while at Open Philanthropy. Helen also studied the AI landscape in China and is a trusted voice on the national security implications of AI and ML for China and the United States. In a recent paper Helen co-authored for CSET, she stressed the importance of finding new methods to test AI models, and advocated for information sharing on AI accidents and collaboration across borders to minimize risk.

OpenAI Codex

We’ve created an improved version of OpenAI Codex, our AI system that translates natural language to code, and we are releasing it through our API in private beta starting today. Codex is the model that powers GitHub Copilot, which we built and launched in partnership with GitHub a month ago. Proficient in more than a dozen programming languages, Codex can now interpret simple commands in natural language and execute them on the user’s behalf—making it possible to build a natural language interface to existing applications. We are now inviting businesses and developers to build on top of OpenAI Codex through our API.

Rewatch Live Demo
View the Codex Challenge
Read Paper

Video walkthroughs:
  • Creating a Space Game with OpenAI Codex
  • “Hello World” with OpenAI Codex
  • Data Science with OpenAI Codex
  • Talking to Your Computer with OpenAI Codex
  • Converting Python to Ruby with OpenAI Codex
  • Giving OpenAI Codex a First Grade Math Test

OpenAI Codex is a descendant of GPT-3; its training data contains both natural language and billions of lines of source code from publicly available sources, including code in public GitHub repositories. OpenAI Codex is most capable in Python, but it is also proficient in over a dozen languages including JavaScript, Go, Perl, PHP, Ruby, Swift and TypeScript, and even Shell. It has a memory of 14KB for Python code, compared to GPT-3 which has only 4KB—so it can take into account over 3x as much contextual information while performing any task.

GPT-3’s main skill is generating natural language in response to a natural language prompt, meaning the only way it affects the world is through the mind of the reader. OpenAI Codex has much of the natural language understanding of GPT-3, but it produces working code—meaning you can issue commands in English to any piece of software with an API. OpenAI Codex empowers computers to better understand people’s intent, which can empower everyone to do more with computers.
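For example, an English instruction can be sent through the standard completions call in the API. The engine name and parameters below are assumptions for illustration rather than a guaranteed configuration; check the Codex documentation for what is available to your account.

    # Illustrative sketch: ask for code from an English instruction via the API.
    # Requires the openai Python package and the OPENAI_API_KEY environment variable.
    import openai

    prompt = '"""\nCreate a list of the first ten square numbers and print it.\n"""\n'

    response = openai.Completion.create(
        engine="davinci-codex",   # assumed beta engine name; may differ for your account
        prompt=prompt,
        max_tokens=64,
        temperature=0,
        stop='"""',
    )
    print(response["choices"][0]["text"])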

Once a programmer knows what to build, the act of writing code can be thought of as (1) breaking a problem down into simpler problems, and (2) mapping those simple problems to existing code (libraries, APIs, or functions). The latter activity is probably the least fun part of programming (and the highest barrier to entry), and it’s where OpenAI Codex excels most.

OpenAI Codex is a general-purpose programming model, meaning that it can be applied to essentially any programming task (though results may vary). We’ve successfully used it for transpilation, explaining code, and refactoring code. But we know we’ve only scratched the surface of what can be done.

We’re now making OpenAI Codex available in private beta via our API, and we are aiming to scale up as quickly as we can safely. During the initial period, OpenAI Codex will be offered for free. OpenAI will continue building on the safety groundwork we laid with GPT-3—reviewing applications and incrementally scaling them up while working closely with developers to understand the effect of our technologies in the world.
