Chat - Python SDK

Chat method reference

The Python SDK and docs are currently in beta. Report issues on GitHub.

Overview

Available Operations

  • send - Create a chat completion

send

Requests a model response for the given chat conversation. Supports both streaming and non-streaming modes.

Example Usage

from openrouter import OpenRouter
import os

with OpenRouter(
    http_referer="<value>",
    x_open_router_title="<value>",
    x_open_router_categories="<value>",
    api_key=os.getenv("OPENROUTER_API_KEY", ""),
) as open_router:

    # stream=True returns an iterable event stream; with stream=False,
    # the call returns a completed response instead.
    res = open_router.chat.send(messages=[
        {
            "role": "system",
            "content": "You are a helpful assistant.",
        },
        {
            "role": "user",
            "content": "What is the capital of France?",
        },
    ], stream=True)

    with res as event_stream:
        for event in event_stream:
            # handle event
            print(event, flush=True)

Parameters

| Parameter | Type | Required | Description | Example |
| --- | --- | --- | --- | --- |
| messages | List[components.ChatMessages] | ✔️ | List of messages for the conversation | [{"role": "user", "content": "Hello!"}] |
| http_referer | Optional[str] | | The app identifier should be your app's URL and is used as the primary identifier for rankings. This is used to track API usage per application. | |
| x_open_router_title | Optional[str] | | The app display name allows you to customize how your app appears in OpenRouter's dashboard. | |
| x_open_router_categories | Optional[str] | | Comma-separated list of app categories (e.g. "cli-agent,cloud-agent"). Used for marketplace rankings. | |
| provider | OptionalNullable[components.ChatRequestProvider] | | When multiple model providers are available, optionally indicate your routing preference. | |
| plugins | List[components.ChatRequestPluginUnion] | | Plugins you want to enable for this request, including their settings. | |
| user | Optional[str] | | Unique user identifier | user-123 |
| session_id | Optional[str] | | A unique identifier for grouping related requests (e.g., a conversation or agent workflow) for observability. If provided in both the request body and the x-session-id header, the body value takes precedence. Maximum of 256 characters. | |
| trace | Optional[components.ChatRequestTrace] | | Metadata for observability and tracing. Known keys (trace_id, trace_name, span_name, generation_name, parent_span_id) have special handling. Additional keys are passed through as custom metadata to configured broadcast destinations. | |
| model | Optional[str] | | Model to use for completion | openai/gpt-4 |
| models | List[str] | | Models to use for completion | ["openai/gpt-4", "openai/gpt-4o"] |
| frequency_penalty | Optional[float] | | Frequency penalty (-2.0 to 2.0) | 0 |
| logit_bias | Dict[str, float] | | Token logit bias adjustments | {"50256": -100} |
| logprobs | OptionalNullable[bool] | | Return log probabilities | false |
| top_logprobs | Optional[int] | | Number of top log probabilities to return (0-20) | 5 |
| max_completion_tokens | Optional[int] | | Maximum tokens in completion | 100 |
| max_tokens | Optional[int] | | Maximum tokens (deprecated, use max_completion_tokens). Note: some providers enforce a minimum of 16. | 100 |
| metadata | Dict[str, str] | | Key-value pairs for additional object information (max 16 pairs, 64 char keys, 512 char values) | {"user_id": "user-123", "session_id": "session-456"} |
| presence_penalty | Optional[float] | | Presence penalty (-2.0 to 2.0) | 0 |
| reasoning | Optional[components.Reasoning] | | Configuration options for reasoning models | {"effort": "medium", "summary": "concise"} |
| response_format | Optional[components.ResponseFormat] | | Response format configuration | {"type": "json_object"} |
| seed | Optional[int] | | Random seed for deterministic outputs | 42 |
| stop | OptionalNullable[components.Stop] | | Stop sequences (up to 4) | [""] |
| stream | Optional[bool] | | Enable streaming response | false |
| stream_options | OptionalNullable[components.ChatStreamOptions] | | Streaming configuration options | {"include_usage": true} |
| temperature | Optional[float] | | Sampling temperature (0-2) | 0.7 |
| parallel_tool_calls | OptionalNullable[bool] | | N/A | |
| tool_choice | Optional[components.ChatToolChoice] | | Tool choice configuration | auto |
| tools | List[components.ChatFunctionTool] | | Available tools for function calling | [{"type": "function", "function": {"name": "get_weather", "description": "Get weather"}}] |
| top_p | Optional[float] | | Nucleus sampling parameter (0-1) | 1 |
| debug | Optional[components.ChatDebugOptions] | | Debug options for inspecting request transformations (streaming only) | {"echo_upstream_body": true} |
| image_config | Dict[str, components.ChatRequestImageConfig] | | Provider-specific image configuration options. Keys and values vary by model/provider. See https://openrouter.ai/docs/guides/overview/multimodal/image-generation for more details. | {"aspect_ratio": "16:9"} |
| modalities | List[components.Modality] | | Output modalities for the response. Supported values are "text", "image", and "audio". | ["text", "image"] |
| cache_control | Optional[components.CacheControl] | | Enable automatic prompt caching. When set, the system automatically applies cache breakpoints to the last cacheable block in the request. Currently supported for Anthropic Claude models. | |
| service_tier | OptionalNullable[components.ChatRequestServiceTier] | | The service tier to use for processing this request. | auto |
| retries | Optional[utils.RetryConfig] | | Configuration to override the default retry behavior of the client. | |
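The `tools` and `metadata` parameters are plain data structures, so they can be assembled and sanity-checked before a request is made. A minimal sketch, assuming the shapes shown in the table above; the `validate_metadata` helper is illustrative and not part of the SDK:

```python
def validate_metadata(metadata: dict) -> None:
    """Check the documented limits for the `metadata` parameter:
    at most 16 pairs, 64-character keys, 512-character values."""
    if len(metadata) > 16:
        raise ValueError("metadata supports at most 16 key-value pairs")
    for key, value in metadata.items():
        if len(key) > 64:
            raise ValueError(f"metadata key too long (max 64 chars): {key!r}")
        if len(value) > 512:
            raise ValueError(f"metadata value too long (max 512 chars) for key {key!r}")

# A function tool definition in the shape shown in the table above;
# the `parameters` schema is a hypothetical example.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get weather",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

metadata = {"user_id": "user-123", "session_id": "session-456"}
validate_metadata(metadata)  # raises ValueError if any limit is exceeded
```

Both values can then be passed directly as the `tools` and `metadata` keyword arguments to `chat.send`.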

Response

operations.SendChatCompletionRequestResponse

Errors

| Error Type | Status Code | Content Type |
| --- | --- | --- |
| errors.BadRequestResponseError | 400 | application/json |
| errors.UnauthorizedResponseError | 401 | application/json |
| errors.PaymentRequiredResponseError | 402 | application/json |
| errors.NotFoundResponseError | 404 | application/json |
| errors.RequestTimeoutResponseError | 408 | application/json |
| errors.PayloadTooLargeResponseError | 413 | application/json |
| errors.UnprocessableEntityResponseError | 422 | application/json |
| errors.TooManyRequestsResponseError | 429 | application/json |
| errors.InternalServerResponseError | 500 | application/json |
| errors.BadGatewayResponseError | 502 | application/json |
| errors.ServiceUnavailableResponseError | 503 | application/json |
| errors.EdgeNetworkTimeoutResponseError | 524 | application/json |
| errors.ProviderOverloadedResponseError | 529 | application/json |
| errors.OpenRouterDefaultError | 4XX, 5XX | */* |
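Which of these errors are worth retrying is an application-level policy decision. A minimal sketch, assuming the status codes in the table above; the helper and its retryable set are illustrative and not part of the SDK:

```python
# Status codes from the error table that usually indicate a transient
# condition: timeouts (408, 524), rate limiting (429), and upstream or
# server-side failures (500, 502, 503, 529).
RETRYABLE_STATUS_CODES = {408, 429, 500, 502, 503, 524, 529}

def should_retry(status_code: int) -> bool:
    """Return True when a later attempt may succeed; errors such as
    400 (bad request), 402 (payment required), and 422 (unprocessable
    entity) will fail the same way on retry and are excluded."""
    return status_code in RETRYABLE_STATUS_CODES
```

For retries handled by the client itself, the `retries` parameter (`utils.RetryConfig`) shown in the parameters table overrides the SDK's default retry behavior per request.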