Calls GPT-3 and returns the result as a Stream.
Requires an OpenAI API key.
| Name | Type | Required | Description | Default |
|------|------|----------|-------------|---------|
| prompt | string | Yes | The prompt to pass to GPT-3 | |
| model | enum('text-davinci-003', 'text-curie-001', 'text-babbage-001', 'text-ada-001', 'code-davinci-002', 'code-cushman-002') | No | ID of the model to use | text-davinci-003 |
| max_tokens | number | No | The maximum number of tokens to generate in the completion. The token count of your prompt plus max_tokens cannot exceed the model's context length. Most models have a context length of 2048 tokens (except for the newest models, which support 4096). | 200 |
| temperature | number | No | What sampling temperature to use, between 0 and 2. Higher values like 0.8 make the output more random, while lower values like 0.2 make it more focused and deterministic. We generally recommend altering this or top_p but not both. | |
| top_p | number | No | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature but not both. | |
| frequency_penalty | number | No | Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. | 0 |
| presence_penalty | number | No | Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics. | 0 |
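
For concreteness, here is a minimal sketch of how these parameters map onto a streaming request against OpenAI's completions endpoint. The URL, headers, and body fields follow the public OpenAI REST API; the `OPENAI_API_KEY` environment variable and the `streamCompletion` helper name are assumptions made for this sketch, not part of this Node's actual implementation.

```typescript
// Sketch: a streaming completion request against the OpenAI REST API.
// Assumes the API key is exposed via the OPENAI_API_KEY env variable.
interface CompletionParams {
  prompt: string;
  model?: string;
  max_tokens?: number;
  temperature?: number;
  top_p?: number;
  frequency_penalty?: number;
  presence_penalty?: number;
}

async function streamCompletion(params: CompletionParams): Promise<Response> {
  const response = await fetch("https://api.openai.com/v1/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      prompt: params.prompt,
      model: params.model ?? "text-davinci-003", // defaults from the table above
      max_tokens: params.max_tokens ?? 200,
      temperature: params.temperature,
      top_p: params.top_p,
      frequency_penalty: params.frequency_penalty ?? 0,
      presence_penalty: params.presence_penalty ?? 0,
      stream: true, // ask the API to stream tokens instead of one JSON object
    }),
  });
  if (!response.ok) {
    throw new Error(`OpenAI request failed: ${response.status}`);
  }
  return response;
}
```

Setting `stream: true` is what turns the response into a token stream rather than a single completion object.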
The output of this Node is not an object, but rather a Stream.
Read more about Streaming here.
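
As a rough illustration of consuming that Stream: with `stream: true`, the completions endpoint emits server-sent events, one `data: {...}` line per token batch, terminated by `data: [DONE]`. The sketch below reuses the hypothetical `streamCompletion` helper from above and prints tokens as they arrive.

```typescript
// Sketch: consuming the SSE stream. A network chunk may contain several
// "data: ..." lines, and the stream ends with "data: [DONE]".
async function printStream(prompt: string): Promise<void> {
  const response = await streamCompletion({ prompt });
  const reader = response.body!.getReader();
  const decoder = new TextDecoder();
  let buffer = "";

  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });

    // Process complete lines; keep any partial trailing line buffered.
    const lines = buffer.split("\n");
    buffer = lines.pop() ?? "";

    for (const line of lines) {
      if (!line.startsWith("data: ")) continue; // skip blank/keep-alive lines
      const payload = line.slice("data: ".length);
      if (payload === "[DONE]") return; // end-of-stream sentinel
      const event = JSON.parse(payload);
      process.stdout.write(event.choices[0].text); // print tokens as they arrive
    }
  }
}
```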