We are introducing a new-and-improved content moderation tool: The Moderation endpoint improves upon our previous content filter, and is available for free today to OpenAI API developers.
To help developers protect their applications against possible misuse, we are introducing the faster and more accurate Moderation endpoint. This endpoint provides OpenAI API developers with free access to GPT-based classifiers that detect undesired content — an instance of using AI systems to assist with human supervision of these systems. We have also released both a technical paper describing our methodology and the dataset used for evaluation.
When given a text input, the Moderation endpoint assesses whether the content is sexual, hateful, violent, or promotes self-harm — content prohibited by our content policy. The endpoint has been trained to be quick, accurate, and robust across a range of applications. Importantly, this reduces the chances of products “saying” the wrong thing, even when deployed to users at scale. As a consequence, AI can unlock benefits in sensitive settings, like education, where it could not otherwise be used with confidence.
[Interactive example: input text is sent to the Moderation endpoint, which reports whether it is flagged under the Violence, Self-harm, Hate, and Sexual categories.]
The Moderation endpoint helps developers to benefit from our infrastructure investments. Rather than build and maintain their own classifiers—an extensive process, as we document in our paper—they can instead access accurate classifiers through a single API call.
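For illustration, here is a minimal sketch of what that single API call can look like with the openai Python package; the category names mirror those described above, and the exact request and response fields are in the endpoint documentation.

```python
# Minimal sketch of a Moderation endpoint call using the openai Python package.
# Treat this as an illustration of the flow rather than a drop-in integration.
import openai

openai.api_key = "sk-..."  # your API key

def moderate(text: str) -> dict:
    """Check a single piece of text against the moderation classifiers."""
    response = openai.Moderation.create(input=text)
    result = response["results"][0]
    return {
        "flagged": result["flagged"],          # True if any category is triggered
        "categories": result["categories"],    # e.g. hate, self-harm, sexual, violence
        "scores": result["category_scores"],   # per-category scores
    }

verdict = moderate("text generated by your application or submitted by a user")
if verdict["flagged"]:
    triggered = [name for name, hit in verdict["categories"].items() if hit]
    print("Content flagged for:", ", ".join(triggered))
else:
    print("Content passed moderation.")
```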
As part of OpenAI’s commitment to making the AI ecosystem safer, we are providing this endpoint to allow free moderation of all OpenAI API-generated content. For instance, Inworld, an OpenAI API customer, uses the Moderation endpoint to help their AI-based virtual characters “stay on-script”. By leveraging OpenAI’s technology, Inworld can focus on their core product – creating memorable characters.
Additionally, we welcome the use of the endpoint to moderate content not generated with the OpenAI API. For example, NGL, an anonymous messaging platform with a focus on safety, uses the Moderation endpoint to detect hateful language and bullying in their application. NGL finds that these classifiers are capable of generalizing to the latest slang, allowing them to remain confident over time. Use of the Moderation endpoint to monitor non-API traffic is in private beta and will be subject to a fee. If you are interested, please reach out to us at support@openai.com.
Get started with the Moderation endpoint by checking out the documentation. More details of the training process and model performance are available in our paper. We have also released an evaluation dataset, featuring Common Crawl data labeled within these categories, which we hope will spur further research in this area.
We’ll invite 1 million people from our waitlist over the coming weeks. Users can create with DALL·E using free credits that refill every month, and buy additional credits in 115-generation increments for $15.
DALL·E, the AI system that creates realistic images and art from a description in natural language, is now available in beta. Today we’re beginning the process of inviting 1 million people from our waitlist over the coming weeks.
Every DALL·E user will receive 50 free credits during their first month of use and 15 free credits every subsequent month. Each credit can be used for one original DALL·E prompt generation — returning four images — or an edit or variation prompt, which returns three images.
A powerful creative tool
DALL·E allows users to create quickly and easily, and artists and creative professionals are using DALL·E to inspire and accelerate their creative processes. We’ve already seen people use DALL·E to make music videos for young cancer patients, create magazine covers, and bring novel concepts to life.
Other features include:
Edit allows users to make realistic and context-aware edits to images they generate with DALL·E or images they upload using a natural language description.
Variations can take an image generated by DALL·E or an image uploaded by a user and create different variations of it inspired by the original.
My Collection allows users to save generations right in the DALL·E platform.
Pricing
In this first phase of the beta, users can buy additional DALL·E credits in 115-credit increments (460 images[1]) for $15 on top of their free monthly credits. One credit is applied each time a prompt is entered and a user hits “generate” or “variations.”
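To spell out the arithmetic behind those numbers, here is a quick calculation using only the figures in this post (note that edit and variation prompts return three images per credit, so their per-image cost is slightly higher):

```python
# Back-of-the-envelope cost arithmetic for a paid credit pack,
# using only the numbers stated in this post.
CREDITS_PER_PACK = 115
PRICE_PER_PACK_USD = 15
IMAGES_PER_GENERATION = 4   # an original prompt generation returns four images

images_per_pack = CREDITS_PER_PACK * IMAGES_PER_GENERATION   # 460 images
cost_per_credit = PRICE_PER_PACK_USD / CREDITS_PER_PACK      # about $0.13 per prompt
cost_per_image = PRICE_PER_PACK_USD / images_per_pack        # about $0.033 per image

print(images_per_pack, round(cost_per_credit, 3), round(cost_per_image, 3))
```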
As we learn more and gather user feedback, we plan to explore other options that will align with users’ creative processes.
Using DALL·E for commercial projects
Starting today, users get full usage rights to commercialize the images they create with DALL·E, including the right to reprint, sell, and merchandise. This includes images they generated during the research preview.
Users have told us that they are planning to use DALL·E images for commercial projects, like illustrations for children’s books, art for newsletters, concept art and characters for games, moodboards for design consulting, and storyboards for movies.
Safety
Prior to making DALL·E available in beta, we worked with researchers, artists, developers, and other users to learn about risks, and we have taken steps to improve our safety systems based on learnings from the research preview, including:
Curbing misuse: To minimize the risk of DALL·E being misused to create deceptive content, we reject image uploads containing realistic faces and attempts to create the likeness of public figures, including celebrities and prominent political figures. We also used advanced techniques to prevent photorealistic generations of real individuals’ faces.
Preventing harmful images: We’ve made our content filters more accurate so that they are more effective at blocking images that violate our content policy — which does not allow users to generate violent, adult, or political content, among other categories — while still allowing creative expression. We also limited DALL·E’s exposure to these concepts by removing the most explicit content from its training data.
Reducing bias: We implemented a new technique so that DALL·E generates images of people that more accurately reflect the diversity of the world’s population. This technique is applied at the system level when DALL·E is given a prompt about an individual that does not specify race or gender, like “CEO.”
Monitoring: We will continue to have automated and human monitoring systems to help guard against misuse.
Subsidized access for qualifying artists
We hope to make DALL·E as accessible as possible. Artists who are in need of financial assistance will be able to apply for subsidized access. Please fill out this interest form if you’d like to be notified once more details are available.
We are excited to see what people create with DALL·E and look forward to users’ feedback during this beta period.
Today, we are implementing a new technique so that DALL·E generates images of people that more accurately reflect the diversity of the world’s population. This technique is applied at the system level when DALL·E is given a prompt describing a person that does not specify race or gender, like “firefighter.”
Based on our internal evaluation, users were 12× more likely to say that DALL·E images included people of diverse backgrounds after the technique was applied. We plan to improve this technique over time as we gather more data and feedback.
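This post does not spell out how the technique works internally. Purely as a hypothetical illustration of what a system-level mitigation of this kind could look like (not a description of OpenAI's actual method), the sketch below rewrites an underspecified prompt about a person by sampling an attribute before the prompt reaches the image model; the helper names and word lists are invented for the example.

```python
import random

# Hypothetical sketch of a system-level prompt mitigation. This is NOT
# OpenAI's published method; it only illustrates the idea of diversifying
# prompts about people that leave race and gender unspecified.

PERSON_WORDS = {"person", "ceo", "firefighter", "doctor", "nurse", "teacher"}
IDENTITY_WORDS = {"man", "woman", "male", "female", "boy", "girl",
                  "black", "white", "asian", "hispanic", "latino", "latina"}
# Toy attribute pool; a real system would sample from a carefully designed distribution.
SAMPLED_ATTRIBUTES = ["woman", "man", "Black woman", "Asian man", "Hispanic woman"]

def is_underspecified_person_prompt(prompt: str) -> bool:
    """True if the prompt mentions a person without specifying race or gender."""
    words = set(prompt.lower().replace(",", " ").split())
    return bool(words & PERSON_WORDS) and not (words & IDENTITY_WORDS)

def mitigate(prompt: str) -> str:
    """Append a sampled attribute to underspecified person prompts."""
    if is_underspecified_person_prompt(prompt):
        return f"{prompt}, {random.choice(SAMPLED_ATTRIBUTES)}"
    return prompt

print(mitigate("A photo of a firefighter"))  # e.g. "A photo of a firefighter, Asian man"
print(mitigate("A photo of a female CEO"))   # unchanged: gender already specified
```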
[Interactive example: image generations for the prompt “A photo of a CEO”.]
In April, we started previewing the DALL·E 2 research to a limited number of people, which has allowed us to better understand the system’s capabilities and limitations and improve our safety systems.
During this preview phase, early users have flagged sensitive and biased images, which has helped inform and evaluate this new mitigation.
We are continuing to research how AI systems, like DALL·E, might reflect biases in their training data and the different ways we can address them.
During the research preview we have taken other steps to improve our safety systems, including:
Minimizing the risk of DALL·E being misused to create deceptive content by rejecting image uploads containing realistic faces and attempts to create the likeness of public figures, including celebrities and prominent political figures.
Making our content filters more accurate so that they are more effective at blocking prompts and image uploads that violate our content policy while still allowing creative expression.
Refining automated and human monitoring systems to guard against misuse.
These improvements have helped us gain confidence in the ability to invite more users to experience DALL·E.
Expanding access is an important part of deploying AI systems responsibly because it allows us to learn more about real-world use and continue to iterate on our safety systems.
As part of our DALL·E 2 research preview, more than 3,000 artists from more than 118 countries have incorporated DALL·E into their creative workflows. The artists in our early access group have helped us discover new uses for DALL·E and have served as key voices as we’ve made decisions about DALL·E’s features.
Creative professionals using DALL·E today range from illustrators, AR designers, and authors to chefs, landscape architects, tattoo artists, and clothing designers, to directors, sound designers, dancers, and many more. The list expands every day.
Below are just a few examples of how artists are making use of this new technology:
The Orrigos
James and his wife Kristin Orrigo created the Big Dreams Virtual Tour, which focuses on creating special memories and a positive distraction for pediatric cancer patients around the world. The Orrigos have worked in top children’s hospitals around the country and now virtually meet up with families, bringing children’s ideas to life through personalized cartoons, music videos, and mobility-friendly video games. Orrigo says children and teens light up when they see their DALL·E-generated creations and are ready to be the stars of stories brought to life from their imaginations.
Most recently, Orrigo and his team have been working with a young cancer survivor named Gianna to create a music video featuring herself as Wonder Woman fighting her enemy — the cancer cells.
“We didn’t know what an osteosarcoma villain would look like so we turned to DALL·E as our creative outlet. DALL·E gave us a huge amount of inspiration,” Orrigo said. “Unfortunately, Gianna knows this battle all too well. But we are celebrating her victory by bringing her cartoon music video to real life to spread awareness about pediatric cancer and to give Gianna an unforgettable memory.”
Stefan Kutzenberger and Clara Blume
In a project conceived by Austrian artist Stefan Kutzenberger and Clara Blume, Head of the Open Austria Art + Tech Lab in San Francisco, DALL·E was used to bring the poetry of revolutionary painter Egon Schiele into the visual world. Schiele died at 28, but Kutzenberger — a curator at the Leopold Museum in Vienna, which houses the world’s largest collection of Schiele’s works — believes that DALL·E gives the world a glimpse of what Schiele’s later work might have been like if he had had a chance to keep painting. The DALL·E works will be exhibited alongside Schiele’s collection in the Leopold Museum in the coming months.
Karen X Cheng
Karen X Cheng, a director known for sharing her creative experiments on Instagram, created the latest cover of Cosmopolitan Magazine using DALL·E. In her post unveiling the process, Karen compared working with DALL·E to a musician playing an instrument.
“Like any musical instrument, you get better with practice…and knowing what words to use to communicate? That’s a community effort — it’s come from the past few months of me talking to other DALL·E artists on Twitter / Discord / DM. I learned from other artists that you could ask for specific camera angles. Lens types. Lighting conditions. We’re all figuring it out together, how to play this beautiful new instrument.”
Tom Aviv
Israeli chef and MasterChef winner Tom Aviv is debuting his first U.S. restaurant in Miami in a few months and has used DALL·E for menu, decor, and ambiance inspiration — and his team has also used DALL·E in designing the way they plate dishes.
It was Tom’s sister and business partner Kim’s idea to run a family recipe for chocolate mousse through DALL·E.
“It’s called Picasso chocolate mousse, and it’s a tribute to my parents,” she explained. “DALL·E elevates it to another level — it is just phenomenal. It changed the dish from your usual chocolate mousse to something that does service to the name and to our parents. It blew our minds.”
Don Allen Stevenson III
XR creator Don Allen Stevenson III has used DALL·E to paint physical paintings, design wearable sneakers, and create characters to transform into 3D renders for AR filters. “It feels like having a genie in a bottle that I can collaborate with,” he said.
Stevenson’s real passion is education — specifically making technology accessible to more people. He hosts a weekly Instagram Live teaching people about DALL·E and other tools for creative innovation.
“Digital tools freed me up to have a life that I am proud of and love,” Stevenson says. “I want to help other people to see creative technology like DALL·E the way that I see it — so they can become free as well.”
Danielle Baskin
Danielle Baskin, a multimedia artist, says she plans to incorporate DALL·E generations across a number of different art forms: product design, illustration, theater, and alternative realities.
“It’s a mood board, vibe generator, illustrator, art curator, and museum docent,” Baskin says. “It’s an infinite museum where I can choose which private collections I want to visit. Sometimes I need to repair the private collections (tweak my prompt writing). Sometimes the collection isn’t quite there. But sometimes the docent (DALL·E 2) shows me a surprising new collection I didn’t know existed.”
August Kamp
August Kamp, a multimedia artist and musician, says she views DALL·E as a sort of imagination interpreter.
“Conceptualizing one’s ideas is one of the most gatekept processes in the modern world,” Kamp says. “Everyone has ideas — not everyone has access to training or encouragement enough to confidently render them. I feel empowered by the ability to creatively iterate on a feeling or idea, and I deeply believe that all people deserve that sense of empowerment.”
Chad Nelson
Chad Nelson has been using DALL·E to create highly detailed creatures — and he’s made more than 100 of them.
“I had a vision for a cast of charming woodland critters, each oozing with personality and emotional nuance,” Nelson said. His characters range from “a red furry monster looks in wonder at a burning candle” to “a striped hairy monster shakes its hips dancing underneath a disco ball” — each crafted to capture the most human thing of all — feelings.
“DALL·E is the most advanced paint brush I’ve ever used,” Nelson says. “As mind-blowing and amazing as DALL·E is, like the paint brush, it too must be guided by the artist. It still needs that creative spark, that lightbulb in the mind to innovate — to create that something from nothing.”