ChatGPT · Schema
CreateResponseRequest
Properties
| Name | Type | Description |
|---|---|---|
| model | string | Model ID used to generate the response, like gpt-4o or o3. OpenAI offers a wide range of models with different capabilities, performance characteristics, and price points. |
| input | string or array | Text, image, or file inputs to the model, used to generate a response. Can be a string for simple text input, or an array of input items for multi-turn and multimodal inputs. |
| instructions | string | A system (or developer) message inserted at the beginning of the model's context. Use this for top-level guidance on the model's behavior, tone, or constraints. |
| previous_response_id | string | The unique ID of the previous response to the model. Use this to create multi-turn conversations. The model will use the previous response as context. |
| max_output_tokens | integer | An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens. |
| temperature | number | What sampling temperature to use, between 0 and 2. Higher values like 0.8 make the output more random, while lower values like 0.2 make it more focused and deterministic. |
| top_p | number | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. |
| tools | array | An array of tools the model may use to generate a response. Supports built-in tools (web_search_preview, file_search, code_interpreter, computer_use_preview) and custom function tools. |
| tool_choice | string or object | How the model should select which tool to use. auto allows the model to decide, required forces the model to use a tool, and none disables tool use; an object can be passed to force a specific tool. |
| truncation | string | The truncation strategy to use for the model context. auto will use the model-defined default truncation strategy. disabled will error if the context exceeds the model's context window. |
| text | object | Configuration options for the model's text output, such as requesting structured output via a format specification. |
| reasoning | object | Configuration for reasoning models (o-series). Controls how much reasoning effort the model uses. |
| store | boolean | Whether to store the generated response for later retrieval via the GET /responses endpoint. Defaults to true. |
| metadata | object | Set of up to 16 key-value pairs that can be attached to an object. Useful for storing additional information in a structured format. |
| stream | boolean | If set to true, the model response data is streamed to the client as server-sent events as it is generated. |
| parallel_tool_calls | boolean | Whether to allow the model to run tool calls in parallel. |
| user | string | A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. |
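To show how these properties combine, here is a minimal sketch of a request body for POST /v1/responses, built as a plain dict with no SDK. The `build_response_request` helper is hypothetical and exists only to validate property names against the schema above; the prompt and option values are made up for illustration.

```python
import json

# Hypothetical helper: assemble a request body, keeping only options
# that appear in the CreateResponseRequest schema above.
def build_response_request(model: str, input, **options) -> dict:
    allowed = {
        "instructions", "previous_response_id", "max_output_tokens",
        "temperature", "top_p", "tools", "tool_choice", "truncation",
        "text", "reasoning", "store", "metadata", "stream",
        "parallel_tool_calls", "user",
    }
    body = {"model": model, "input": input}
    for key, value in options.items():
        if key not in allowed:
            raise ValueError(f"unknown property: {key}")
        body[key] = value
    return body

request = build_response_request(
    "gpt-4o",
    "Summarize the plot of Hamlet in one sentence.",
    instructions="You are a terse literary assistant.",
    temperature=0.2,        # low temperature for focused, deterministic output
    max_output_tokens=200,  # upper bound covering visible and reasoning tokens
    store=False,            # do not retain the response for later retrieval
)
print(json.dumps(request, indent=2))
```

The resulting dict can be serialized to JSON and sent as the body of an authenticated POST to the responses endpoint.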