Snowflake · Schema
CompleteRequest
LLM text completion request.
Properties
| Name | Type | Description |
|---|---|---|
| model | string | The model name. See documentation for possible values. |
| messages | array | The conversation to complete: an array of message objects, each with a role and content. |
| temperature | number | Temperature controls the amount of randomness used in response generation. A higher temperature corresponds to more randomness. |
| top_p | number | Threshold probability for nucleus sampling. A higher top-p value increases the diversity of tokens that the model considers, while a lower value results in more predictable output. |
| max_tokens | integer | The maximum number of output tokens to produce. The default value is model-dependent. |
| max_output_tokens | integer | Deprecated in favor of "max_tokens", which has identical behavior. |
| response_format | object | An object describing response format config for structured-output mode. |
| tools | array | List of tools available to the model during tool calling. |
| provisioned_throughput_id | string | The provisioned throughput ID to be used with the request. |
| sf-ml-xp-inflight-prompt-action | string | Reserved |
| sf-ml-xp-inflight-prompt-client-id | string | Reserved |
| sf-ml-xp-inflight-prompt-public-key | string | Reserved |
| stream | boolean | Reserved |
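Putting the properties above together, a request body is a JSON object with a `model`, a `messages` array, and optional sampling parameters. A minimal sketch in Python (the model name and message content are illustrative placeholders, not values confirmed by this schema; see the documentation for valid model names):

```python
import json

# Hypothetical CompleteRequest payload built from the schema above.
request = {
    "model": "mistral-large2",   # placeholder; valid names are listed in the docs
    "messages": [
        {"role": "user", "content": "Summarize the benefits of data sharing."}
    ],
    "temperature": 0.2,   # low randomness for a focused answer
    "top_p": 0.9,         # nucleus-sampling threshold
    "max_tokens": 512,    # cap on generated output tokens
    "stream": False,
}

body = json.dumps(request, indent=2)
print(body)
```

Note that `max_tokens` is used rather than the deprecated `max_output_tokens`, and the reserved fields (`stream`, `sf-ml-xp-inflight-prompt-*`) are either omitted or left at safe defaults.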