Official documentation: https://platform.openai.com/docs/api-reference/completions
Given a prompt, the model will return one or more predicted completions, and can also return the probabilities of alternative tokens at each position.
Request Parameters
model: ID of the model to use. You can use the List models API to see all available models.
prompt: The prompt(s) to generate completions for, encoded as a string, array of strings, array of tokens, or array of token arrays.
max_tokens: The maximum number of tokens to generate in the completion.
temperature: Sampling temperature between 0 and 2. Higher values make the output more random; lower values make it more focused and deterministic.
top_p: Nucleus sampling: the model considers only the tokens comprising the top_p probability mass. An alternative to temperature.
n: Number of completions to generate for each prompt.
stream: Whether to stream back partial progress as server-sent events.
logprobs: Include the log probabilities of the most likely tokens. The maximum value is 5.
echo: Echo back the prompt in addition to the completion.
stop: Up to 4 sequences where the API will stop generating further tokens.
presence_penalty: Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.
frequency_penalty: Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.
best_of: Generates best_of completions server-side and returns the “best” one (the one with the highest log probability per token).
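Putting the request parameters together, a request body might look like the sketch below. It is a plain Python dict serialized with the standard library; the parameter names follow the reference above, and all values are illustrative, not recommendations:

```python
import json

# Illustrative completions request body; every value here is an example.
payload = {
    "model": "gpt-3.5-turbo-instruct",
    "prompt": "Say hello in French:",
    "max_tokens": 16,        # cap on generated tokens
    "temperature": 0.7,      # 0-2; higher = more random
    "top_p": 1,              # nucleus sampling probability mass
    "n": 1,                  # completions per prompt
    "stream": False,         # set True to receive server-sent events
    "stop": ["\n"],          # up to 4 stop sequences
    "presence_penalty": 0,   # -2.0 to 2.0
    "frequency_penalty": 0,  # -2.0 to 2.0
}

body = json.dumps(payload)
print(body)
```

Note that temperature and top_p are alternatives: the docs suggest altering one or the other, not both.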
Response
id: Unique identifier for the completion.
object: The object type, which is always text_completion.
created: Unix timestamp (in seconds) of when the completion was created.
model: The model used for the completion.
choices: The list of completion choices the model generated for the input prompt.
usage: Usage statistics for the completion request (prompt, completion, and total token counts).
curl -X POST https://api.example.com/v1/completions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-3.5-turbo-instruct",
    "prompt": "Hello,",
    "max_tokens": 30,
    "temperature": 0
  }'
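The same request can be assembled in Python with only the standard library. This sketch builds the urllib.request.Request object but does not send it; the endpoint URL and API key are placeholders carried over from the curl example:

```python
import json
import urllib.request

API_KEY = "YOUR_API_KEY"  # placeholder, as in the curl example
URL = "https://api.example.com/v1/completions"  # placeholder endpoint

payload = {
    "model": "gpt-3.5-turbo-instruct",
    "prompt": "Hello,",
    "max_tokens": 30,
    "temperature": 0,
}

req = urllib.request.Request(
    URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    method="POST",
)

# To actually send it (requires a valid key and a reachable endpoint):
# with urllib.request.urlopen(req) as resp:
#     completion = json.load(resp)
```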
{
  "id": "cmpl-ByvHP6AWeB1L5vWZSPNHsB12sU9db",
  "object": "text_completion",
  "created": 1753859563,
  "model": "gpt-3.5-turbo-instruct",
  "choices": [
    {
      "index": 0,
      "logprobs": null,
      "finish_reason": "length",
      "text": "I am an AI assistant. How can I help you today?"
    }
  ],
  "usage": {
    "prompt_tokens": 3,
    "completion_tokens": 30,
    "total_tokens": 33
  }
}
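Reading the response is standard JSON handling. This sketch parses the sample payload above and pulls out the generated text and the token accounting:

```python
import json

# The sample response from above, embedded as a string.
raw = '''
{
  "id": "cmpl-ByvHP6AWeB1L5vWZSPNHsB12sU9db",
  "object": "text_completion",
  "created": 1753859563,
  "model": "gpt-3.5-turbo-instruct",
  "choices": [
    {
      "index": 0,
      "logprobs": null,
      "finish_reason": "length",
      "text": "I am an AI assistant. How can I help you today?"
    }
  ],
  "usage": {
    "prompt_tokens": 3,
    "completion_tokens": 30,
    "total_tokens": 33
  }
}
'''

resp = json.loads(raw)
text = resp["choices"][0]["text"]
finish = resp["choices"][0]["finish_reason"]  # "length" means max_tokens was hit
usage = resp["usage"]

# total_tokens is the sum of prompt and completion tokens
assert usage["total_tokens"] == usage["prompt_tokens"] + usage["completion_tokens"]
print(finish, "->", text)
```

A finish_reason of "length" (as here) signals the completion was cut off by max_tokens; "stop" would mean the model hit a natural end or a stop sequence.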