Claude · Schema

CreateMessageRequest


Properties

| Name | Type | Description |
| --- | --- | --- |
| model | string | The model that will complete your prompt. See the list of available models. |
| max_tokens | integer | The maximum number of tokens to generate before stopping. Different models support different maximum values. |
| messages | array | Input messages. Alternating user and assistant conversational turns. Maximum 100,000 messages per request. |
| system | string | System prompt providing context and instructions to Claude. Can be a string or array of content blocks. |
| temperature | number | Amount of randomness injected into the response. Ranges from 0.0 to 1.0. Use closer to 0.0 for analytical tasks, closer to 1.0 for creative tasks. |
| top_p | number | Use nucleus sampling. Recommended to use either temperature or top_p, but not both. |
| top_k | integer | Only sample from the top K options for each subsequent token. |
| stop_sequences | array | Custom text sequences that will cause the model to stop generating. |
| stream | boolean | Whether to incrementally stream the response using server-sent events. |
| tools | array | Definitions of tools that the model may use. |
| service_tier | string | Service tier to use for this request. |
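Putting these properties together, a request body might look like the following sketch. The model id and all values here are illustrative assumptions, not authoritative; consult the models list for valid ids.

```python
import json

# Sketch of a CreateMessageRequest body using the properties above.
# The model id below is an assumption; substitute a real one from the models list.
payload = {
    "model": "claude-example-model",      # assumed placeholder model id
    "max_tokens": 1024,
    "system": "You are a concise assistant.",
    "messages": [
        # Alternating user/assistant turns; this minimal request has one user turn.
        {"role": "user", "content": "Explain nucleus sampling in one sentence."}
    ],
    "temperature": 0.2,                   # closer to 0.0 for an analytical task
    "stop_sequences": ["\n\nHuman:"],
    "stream": False,                      # set True to receive server-sent events
}

# Serialize for the HTTP request body.
body = json.dumps(payload)
```

Note that the sketch sets `temperature` but omits `top_p`, following the recommendation to use one or the other rather than both.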
View JSON Schema on GitHub