| Parameter | Type | Description |
| --- | --- | --- |
| `prompt` | string | The prompt(s) to generate completions for. |
| `max_tokens` | integer | The maximum number of tokens that can be generated. |
| `temperature` | number | Sampling temperature to use. |
| `top_p` | number | Nucleus sampling parameter. |
| `n` | integer | How many completions to generate for each prompt. |
| `stream` | boolean | Whether to stream back partial progress. |
| `stop` | string | Up to 4 sequences where the API will stop generating further tokens. |
| `presence_penalty` | number | Penalizes new tokens based on whether they appear in the text so far. |
| `frequency_penalty` | number | Penalizes new tokens based on their existing frequency in the text. |
| `user` | string | A unique identifier representing your end-user. |
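The parameters above can be combined into a single JSON request body. The sketch below builds such a payload in Python; the example values, the prompt text, and the `user` identifier are illustrative assumptions — only the parameter names come from the table.

```python
import json

# Hypothetical request payload for a completions-style endpoint.
# All values here are example settings, not recommended defaults.
payload = {
    "prompt": "Say hello in French.",  # text to complete
    "max_tokens": 16,                  # cap on generated tokens
    "temperature": 0.7,                # sampling temperature
    "top_p": 1.0,                      # nucleus sampling parameter
    "n": 1,                            # completions per prompt
    "stream": False,                   # no partial-progress streaming
    "stop": "\n",                      # stop generating at a newline
    "presence_penalty": 0.0,           # no presence-based penalty
    "frequency_penalty": 0.0,          # no frequency-based penalty
    "user": "user-1234",               # illustrative end-user identifier
}

# Serialize to the JSON body you would send with the request.
body = json.dumps(payload)
```

In a real client, `body` would be sent as the HTTP request body with a `Content-Type: application/json` header; the endpoint URL and authentication are omitted here.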