Complete chat in OpenAI.
Application: OpenAI
Inputs (what you have)
| Name | Description | Data Type | Required? | Example |
| --- | --- | --- | --- | --- |
| Connected Account | The connected account to use for the request | Connected Account | Yes | |
| Model | The model that will generate the completion | Text (Short) | No | |
| Message Role 1 | The role of the first message (for example, system, user, or assistant) | Predefined Choice List | No | |
| Message Content 1 | The content of the first message | Text (Long) | No | |
| Message Role 2 | The role of the second message | Predefined Choice List | No | |
| Message Content 2 | The content of the second message | Text (Short) | No | |
| Message Role 3 | The role of the third message | Predefined Choice List | No | |
| Message Content 3 | The content of the third message | Text (Long) | No | |
| Simplify? | Whether to return a simplified version of the response instead of the raw data | True/False | No | |
| Frequency penalty | Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim | Number with decimals | No | |
| Maximum number of tokens | The maximum number of tokens to generate in the completion. Most models have a context length of 2,048 tokens, while the newest models support up to 32,768 | Number | No | |
| Number of completions | How many completions to generate for each prompt. Note: because this parameter can generate many completions, it can quickly consume your token quota. Use it carefully and make sure you have reasonable settings for max_tokens and stop | Number | No | |
| Presence penalty | Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics | Number with decimals | No | |
| Sampling temperature | Controls randomness: lower values produce less random completions. As the temperature approaches zero, the model becomes deterministic and repetitive | Number with decimals | No | |
| Top p | Controls diversity via nucleus sampling: 0.5 means half of all likelihood-weighted options are considered. We generally recommend altering this or the temperature, but not both | Number with decimals | No | |
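
The inputs above correspond to the request parameters of OpenAI's Chat Completions API. The sketch below is a minimal illustration of that mapping, assuming the official `openai` Python SDK (v1 client) and an `OPENAI_API_KEY` environment variable; the connector itself authenticates through the Connected Account, so this is only to show which API parameter each field feeds.

```python
# Minimal sketch: how the connector inputs map onto Chat Completions request
# parameters, using the openai Python SDK (v1 client). Assumes OPENAI_API_KEY
# is set in the environment; the connector normally supplies credentials via
# the Connected Account instead.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",   # Model
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},  # Message Role/Content 1
        {"role": "user", "content": "Explain nucleus sampling briefly."},  # Message Role/Content 2
    ],
    max_tokens=256,          # Maximum number of tokens
    n=1,                     # Number of completions
    temperature=0.7,         # Sampling temperature
    top_p=1.0,               # Top p (alter this or temperature, not both)
    frequency_penalty=0.0,   # Frequency penalty
    presence_penalty=0.0,    # Presence penalty
)

print(response.choices[0].message.content)
```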
Outputs (what you get)
| Name | Description | Data Type | Required? | Example |
| --- | --- | --- | --- | --- |
| JSON Output | JSON output returned by the API | Text (Long) | No | |
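
When Simplify? is off, the JSON Output is the raw API response. The sketch below shows one way a downstream step might pull the assistant's reply out of it, assuming the output follows the standard Chat Completions response shape (id, choices[].message, usage); the example payload values are illustrative only.

```python
# Minimal sketch: extracting the assistant's reply from the raw JSON Output,
# assuming the standard Chat Completions response shape. The payload below is
# an illustrative example, not actual output.
import json

raw = """
{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "choices": [
    {"index": 0,
     "message": {"role": "assistant", "content": "Hello! How can I help?"},
     "finish_reason": "stop"}
  ],
  "usage": {"prompt_tokens": 12, "completion_tokens": 8, "total_tokens": 20}
}
"""

data = json.loads(raw)
reply = data["choices"][0]["message"]["content"]
print(reply)  # -> "Hello! How can I help?"
```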
Outcomes
| Name | Description |
| --- | --- |
| Success | This status is selected if the job has completed successfully. |
| Unsuccessful | This status is selected if the job has completed unsuccessfully. |
Requirements
N/A