Prompt Engineering Tutorial

OpenAI Playground Usage

If you want a better understanding of OpenAI's APIs, or if you find that ChatGPT is unavailable, I suggest using OpenAI's Playground, which is more stable.

However, be aware that using the Playground will consume your free credits.

On the right side of the interface, you will see the following parameters:

  • Mode: There are four modes available:
    • Complete: This is the default mode. You write a prompt and the model generates text that continues it.
    • Chat: This mode lets you hold a multi-turn conversation with the model.
    • Insert: In this mode you mark a gap in your text with [insert], and the model fills the gap in.
    • Edit: In this mode you provide some text plus an instruction, and the model rewrites the text according to the instruction.
  • Model: You can switch models here. Different models are better at different things, so choosing the right model for the task can save you a lot of money.
    • Ada: This is the fastest and cheapest model. It is best suited for simple tasks, such as parsing text or correcting addresses.
    • Babbage: This model is slightly more capable, and slightly more expensive, than Ada. It is best suited for moderately complex tasks, such as classification or semantic search.
    • Curie: This model is very good at text tasks, such as writing articles, translating languages, or writing summaries.
    • Davinci: This is the most capable model in the GPT-3 series. It can generate high-quality, long answers and process up to 4000 tokens per request. It is best suited for complex tasks with causal relationships, such as idea generation, search, and paragraph summarization.
  • Temperature: This controls the randomness of the results generated by the model. A lower temperature will produce more certain results, but they may be more mundane or boring. A higher temperature will produce more unexpected results, but they may be less accurate.
  • Maximum length: This sets the maximum number of tokens the model may generate in a single response.
  • Stop sequence: This is a specific string sequence that stops the model from generating text. If the generated text contains this sequence, the model will stop generating more text.
  • Top P: This controls nucleus sampling: the model only samples from the smallest set of most-likely tokens whose cumulative probability reaches the Top P value. This affects the diversity and determinism of the generated text. If you want a precise answer, set it to a lower value; if you want more varied responses, set it higher. OpenAI recommends adjusting either Top P or Temperature, but not both.
  • Presence penalty: This penalizes tokens that have already appeared in the text, making the model less likely to repeat itself and more likely to introduce new topics.
  • Best of: This allows you to set how many texts are generated and select the best text as the output. The default is 1, which means that only one text output is generated.
  • Inject start text: This text is automatically inserted after your input, before the model's completion, for example "AI:" in a chat-style prompt.
  • Inject restart text: This text is automatically appended after the model's completion, setting up your next turn, for example "Human:".
  • Show probabilities: This lets you see how likely each generated token was. When this option is turned on, each token in the output is highlighted, and hovering over it shows the probabilities the model assigned to it and its alternatives.
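To make the Top P setting concrete, here is a small pure-Python sketch of nucleus sampling. The token scores are invented for illustration; real models choose among tens of thousands of tokens.

```python
import math

def top_p_filter(logits, top_p):
    """Keep the smallest set of top-ranked tokens whose cumulative
    probability reaches top_p, then renormalise (nucleus sampling)."""
    # Softmax over the raw scores to get probabilities.
    total = sum(math.exp(s) for s in logits.values())
    probs = {tok: math.exp(s) / total for tok, s in logits.items()}
    # Walk the tokens from most to least likely until we pass top_p.
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    kept, cumulative = {}, 0.0
    for tok, p in ranked:
        kept[tok] = p
        cumulative += p
        if cumulative >= top_p:
            break
    # Renormalise the surviving tokens; the model samples from these.
    norm = sum(kept.values())
    return {tok: p / norm for tok, p in kept.items()}

# Hypothetical next-token scores for the prompt "The sky is".
logits = {"blue": 4.0, "clear": 2.0, "falling": 1.0, "cheese": -2.0}

narrow = top_p_filter(logits, top_p=0.5)   # low Top P: only the likeliest token survives
broad = top_p_filter(logits, top_p=0.99)   # high Top P: most tokens stay in play
```

With a low Top P only "blue" remains, so the output is deterministic; with a high Top P the unlikely options stay in the pool, so the output is more varied.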

Once you have configured the parameters, you can type a prompt in the left text box and click Submit to test it.
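Every setting in the Playground corresponds to a parameter of OpenAI's Completions API. The sketch below is a hypothetical helper (not part of any SDK) that builds the JSON body you would POST to the `/v1/completions` endpoint; the model names are examples only.

```python
import json

def playground_to_request(prompt,
                          model="text-davinci-003",
                          temperature=0.7,
                          maximum_length=256,
                          stop_sequence=None,
                          top_p=1.0,
                          presence_penalty=0.0,
                          best_of=1,
                          show_probabilities=False):
    """Translate Playground settings into a Completions API request body."""
    body = {
        "model": model,
        "prompt": prompt,
        "temperature": temperature,
        "max_tokens": maximum_length,      # "Maximum length" in the UI
        "top_p": top_p,
        "presence_penalty": presence_penalty,
        "best_of": best_of,
    }
    if stop_sequence:
        body["stop"] = stop_sequence       # "Stop sequence" in the UI
    if show_probabilities:
        body["logprobs"] = 5               # "Show probabilities" in the UI
    return json.dumps(body)

# A cheap model with temperature 0 suits a simple correction task.
payload = playground_to_request("Correct this address: 123 main stret",
                                model="text-ada-001", temperature=0)
```

Sending this payload (with your API key in the Authorization header) reproduces what the Playground does when you click Submit.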