ChatRequest - TypeScript SDK

ChatRequest type definition

The TypeScript SDK and docs are currently in beta. Report issues on GitHub.

Chat completion request parameters

Example Usage

```typescript
import { ChatRequest } from "@openrouter/sdk/models";

let value: ChatRequest = {
  messages: [
    {
      content: "You are a helpful assistant.",
      role: "system",
    },
    {
      content: "What is the capital of France?",
      role: "user",
    },
  ],
};
```
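As a sketch building on the example above, a request configured for streaming with sampling controls might look like the following. The object mirrors the `ChatRequest` shape described in the Fields table (the SDK import is omitted here so the snippet stands alone; `include_usage` follows the example value shown for `streamOptions`, and the exact typed property name may differ in the SDK):

```typescript
// Sketch: a streaming chat request with sampling controls.
// Field names follow the Fields table below; this mirrors the
// ChatRequest type rather than importing it from the SDK.
const streamingRequest = {
  model: "openai/gpt-4",
  messages: [
    { role: "system", content: "You are a helpful assistant." },
    { role: "user", content: "Summarize the plot of Hamlet." },
  ],
  stream: true,                            // enable streaming response
  streamOptions: { include_usage: true },  // per the streamOptions example
  temperature: 0.7,                        // sampling temperature (0-2)
  topP: 1,                                 // nucleus sampling (0-1)
  maxCompletionTokens: 256,                // cap on completion length
};
```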

Fields

| Field | Type | Required | Description | Example |
| --- | --- | --- | --- | --- |
| `cacheControl` | `models.AnthropicCacheControlDirective` | | N/A | `{"type": "ephemeral"}` |
| `debug` | `models.ChatDebugOptions` | | Debug options for inspecting request transformations (streaming only) | `{"echo_upstream_body": true}` |
| `frequencyPenalty` | `number` | | Frequency penalty (-2.0 to 2.0) | `0` |
| `imageConfig` | `Record<string, models.ChatRequestImageConfig>` | | Provider-specific image configuration options. Keys and values vary by model/provider. See https://openrouter.ai/docs/guides/overview/multimodal/image-generation for more details. | `{"aspect_ratio": "16:9"}` |
| `logitBias` | `Record<string, number>` | | Token logit bias adjustments | `{"50256": -100}` |
| `logprobs` | `boolean` | | Return log probabilities | `false` |
| `maxCompletionTokens` | `number` | | Maximum tokens in completion | `100` |
| `maxTokens` | `number` | | Maximum tokens (deprecated, use `max_completion_tokens`). Note: some providers enforce a minimum of 16. | `100` |
| `messages` | `models.ChatMessages[]` | ✔️ | List of messages for the conversation | `[{"content": "Hello!", "role": "user"}]` |
| `metadata` | `Record<string, string>` | | Key-value pairs for additional object information (max 16 pairs, 64-char keys, 512-char values) | `{"session_id": "session-456", "user_id": "user-123"}` |
| `modalities` | `models.Modality[]` | | Output modalities for the response. Supported values are "text", "image", and "audio". | `["text", "image"]` |
| `model` | `string` | | Model to use for completion | `openai/gpt-4` |
| `models` | `string[]` | | Models to use for completion | `["openai/gpt-4", "openai/gpt-4o"]` |
| `parallelToolCalls` | `boolean` | | Whether to enable parallel function calling during tool use. When true, the model may generate multiple tool calls in a single response. | `true` |
| `plugins` | `models.ChatRequestPlugin[]` | | Plugins you want to enable for this request, including their settings. | |
| `presencePenalty` | `number` | | Presence penalty (-2.0 to 2.0) | `0` |
| `provider` | `models.ProviderPreferences` | | When multiple model providers are available, optionally indicate your routing preference. | `{"allow_fallbacks": true}` |
| `reasoning` | `models.Reasoning` | | Configuration options for reasoning models | `{"effort": "medium", "summary": "concise"}` |
| `responseFormat` | `models.ResponseFormat` | | Response format configuration | `{"type": "json_object"}` |
| `seed` | `number` | | Random seed for deterministic outputs | `42` |
| `serviceTier` | `models.ChatRequestServiceTier` | | The service tier to use for processing this request. | `auto` |
| `sessionId` | `string` | | A unique identifier for grouping related requests (e.g., a conversation or agent workflow) for observability. If provided in both the request body and the `x-session-id` header, the body value takes precedence. Maximum of 256 characters. | |
| `stop` | `models.Stop` | | Stop sequences (up to 4) | `[""]` |
| `stream` | `boolean` | | Enable streaming response | `false` |
| `streamOptions` | `models.ChatStreamOptions` | | Streaming configuration options | `{"include_usage": true}` |
| `temperature` | `number` | | Sampling temperature (0-2) | `0.7` |
| `toolChoice` | `models.ChatToolChoice` | | Tool choice configuration | `auto` |
| `tools` | `models.ChatFunctionTool[]` | | Available tools for function calling | `[{"type": "function", "function": {"name": "get_weather", "description": "Get weather"}}]` |
| `topLogprobs` | `number` | | Number of top log probabilities to return (0-20) | `5` |
| `topP` | `number` | | Nucleus sampling parameter (0-1) | `1` |
| `trace` | `models.TraceConfig` | | Metadata for observability and tracing. Known keys (`trace_id`, `trace_name`, `span_name`, `generation_name`, `parent_span_id`) have special handling. Additional keys are passed through as custom metadata to configured broadcast destinations. | `{"trace_id": "trace-abc123", "trace_name": "my-app-trace"}` |
| `user` | `string` | | Unique user identifier | `user-123` |
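To illustrate how the tool-related fields fit together, here is a sketch of a tool-calling request. The object mirrors the `ChatRequest` shape (the SDK import is omitted so the snippet stands alone); the `get_weather` tool and the omitted `parameters` schema are illustrative placeholders, not part of the SDK:

```typescript
// Sketch: a tool-calling chat request using the tools, toolChoice,
// and parallelToolCalls fields from the table above.
const toolRequest = {
  model: "openai/gpt-4",
  messages: [{ role: "user", content: "What is the weather in Paris?" }],
  tools: [
    {
      type: "function",
      function: {
        name: "get_weather",          // hypothetical tool name
        description: "Get weather",
        // JSON Schema for the tool's arguments would go here
      },
    },
  ],
  toolChoice: "auto",       // let the model decide whether to call a tool
  parallelToolCalls: true,  // allow multiple tool calls in one response
};
```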