Create a model response for the given chat conversation.
Overview
The Chat Completions API allows you to generate text responses using various AI models. It supports multi-turn conversations with system, user, and assistant messages.

Authentication
All requests require a Bearer token in the Authorization header.

Request Example
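A minimal request sketch, assuming the endpoint follows the standard `POST /v1/chat/completions` path; the base URL `https://api.example.com` and the `$API_KEY` variable are placeholders for your own endpoint and token:

```bash
# Placeholder base URL and key; substitute your provider's endpoint and token.
curl https://api.example.com/v1/chat/completions \
  -H "Authorization: Bearer $API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o",
    "messages": [
      {"role": "system", "content": "You are a helpful assistant."},
      {"role": "user", "content": "Hello!"}
    ],
    "temperature": 0.7
  }'
```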
Response Example
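An illustrative (non-streaming) response body; all values are examples, and the field layout follows the standard chat completion schema:

```json
{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "created": 1700000000,
  "model": "gpt-4o",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Hello! How can I help you today?"
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 19,
    "completion_tokens": 10,
    "total_tokens": 29
  }
}
```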
Parameters
| Parameter | Type | Required | Default | Description |
|---|---|---|---|---|
| model | string | Yes | - | ID of the model to use |
| messages | array | Yes | - | List of messages in the conversation |
| temperature | number | No | 1 | Sampling temperature (0-2), higher values make output more random, lower values make it more focused |
| max_tokens | integer | No | inf | Maximum tokens to generate |
| top_p | number | No | 1 | Nucleus sampling parameter; modifying both temperature and top_p is not recommended |
| n | integer | No | 1 | Number of chat completion choices to generate for each input message |
| stream | boolean | No | false | Enable streaming responses |
| stop | string/array | No | null | Up to 4 sequences where the API will stop generating further tokens |
| frequency_penalty | number | No | 0 | Frequency penalty (-2 to 2), positive values decrease the likelihood of repeating the same line verbatim |
| presence_penalty | number | No | 0 | Presence penalty (-2 to 2), positive values increase the likelihood of talking about new topics |
| logit_bias | object | No | null | Modify the likelihood of specified tokens appearing in the completion |
| user | string | No | - | Unique identifier representing your end-user |
| response_format | object | No | - | Specify output format, set {"type": "json_object"} to enable JSON mode |
| tools | array | No | - | List of tools the model may call (see the example after this table) |
| tool_choice | string/object | No | auto | Controls which (if any) function the model calls |
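To illustrate tools and tool_choice, here is a sketch of a request body assuming the API follows the OpenAI-compatible function-calling schema; the get_weather function and its parameters are hypothetical:

```json
{
  "model": "gpt-4o",
  "messages": [
    {"role": "user", "content": "What is the weather in Boston?"}
  ],
  "tools": [
    {
      "type": "function",
      "function": {
        "name": "get_weather",
        "description": "Get the current weather for a given city",
        "parameters": {
          "type": "object",
          "properties": {
            "city": {"type": "string", "description": "City name, e.g. Boston"}
          },
          "required": ["city"]
        }
      }
    }
  ],
  "tool_choice": "auto"
}
```

In OpenAI-compatible implementations, when the model elects to call a function, the response message carries a `tool_calls` array instead of `content`, and `finish_reason` is `tool_calls`.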
Available Models
- gpt-4
- gpt-4-turbo
- gpt-3.5-turbo
- gpt-4o
- gpt-4o-mini
- gpt-4-gizmo-* (GPTs models)