Create Text Completion with OpenAI

Perform a text completion with OpenAI. Can be used for a variety of tasks.

by @pixies

How to Use

Use this brick to create text completions using the OpenAI Text Completion API.


Inputs

| Name | Required | Type | Description |
| --- | --- | --- | --- |
| openaiapi |  | @pixies/openai/openaiapi | The OpenAI API integration. |
| model |  | string | ID of the model to use. You can use the List models API to see all of your available models, or see the Model overview for descriptions of them: https://beta.openai.com/docs/models/overview |
| prompt |  | string | The prompt to generate completions for, encoded as a string. |
| suffix |  | string | The suffix that comes after a completion of inserted text. |
| maxTokens |  | integer | The maximum number of tokens to generate in the completion. The token count of your prompt plus maxTokens cannot exceed the model's context length. Most models have a context length of 2048 tokens (except for the newest models, which support 4096). |
| temperature |  | number | What sampling temperature to use. Higher values mean the model will take more risks. Try 0.9 for more creative applications, and 0 (argmax sampling) for ones with a well-defined answer. We generally recommend altering this or top_p but not both. |
| topP |  | number | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature but not both. |
| numCompletions |  | integer | How many completions to generate for each prompt. Note: because this parameter generates many completions, it can quickly consume your token quota. Use carefully and ensure that you have reasonable settings for max_tokens and stop. |
| stop |  | array | Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence. |
| presencePenalty |  | number | Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics. |
| frequencyPenalty |  | number | Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. |
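To illustrate how the brick's camelCase inputs correspond to the OpenAI Text Completion API's snake_case request fields, here is a minimal sketch in Python. The `build_completion_payload` helper and its defaults are illustrative, not part of the brick itself; the field names (`max_tokens`, `top_p`, `n`, etc.) follow the public `/v1/completions` request schema.

```python
# Sketch: map the brick's inputs onto the legacy /v1/completions request body.
# The helper below is a hypothetical illustration, not the brick's actual code.

def build_completion_payload(prompt, model, max_tokens=16, temperature=1.0,
                             top_p=1.0, num_completions=1, stop=None,
                             suffix=None, presence_penalty=0.0,
                             frequency_penalty=0.0):
    """Translate camelCase brick inputs into the API's snake_case fields."""
    payload = {
        "model": model,
        "prompt": prompt,
        "max_tokens": max_tokens,            # maxTokens
        "temperature": temperature,
        "top_p": top_p,                      # topP
        "n": num_completions,                # numCompletions
        "presence_penalty": presence_penalty,    # presencePenalty
        "frequency_penalty": frequency_penalty,  # frequencyPenalty
    }
    if stop:
        payload["stop"] = stop               # up to 4 stop sequences
    if suffix is not None:
        payload["suffix"] = suffix           # text appended after the completion
    return payload

payload = build_completion_payload(
    prompt="Write a haiku about autumn.",
    model="text-davinci-003",
    max_tokens=64,
    temperature=0.9,
    stop=["\n\n"],
)
```

The payload would then be POSTed to the completions endpoint with the API key supplied by the openaiapi integration.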

Outputs

| Name | Required | Type | Description |
| --- | --- | --- | --- |
| completions |  | array | The generated completions. |
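The `completions` output is derived from the `choices` array in the API's JSON response. The sketch below shows that extraction; the response values are illustrative examples, not real API output.

```python
# Illustrative response shape for the legacy completions endpoint: the body
# contains a "choices" array, and the brick's `completions` output corresponds
# to the text of each choice. The texts below are made-up examples.
response = {
    "choices": [
        {"text": "First completion", "index": 0, "finish_reason": "stop"},
        {"text": "Second completion", "index": 1, "finish_reason": "length"},
    ]
}

completions = [choice["text"] for choice in response["choices"]]
```

With numCompletions greater than 1, the array contains one entry per generated completion.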
