{
  "id": "<string>",
  "created": 123,
  "model": "<string>",
  "choices": [
    {
      "index": 123,
      "message": {
        "role": "<string>",
        "content": "<string>",
        "reasoning_content": "<string>",
        "tool_calls": [
          {
            "function": {
              "name": "<string>",
              "arguments": "<string>"
            },
            "id": "<string>",
            "type": "function"
          }
        ],
        "tool_call_id": "<string>"
      },
      "finish_reason": "<string>",
      "logprobs": {
        "tokens": ["<string>"],
        "token_logprobs": [123],
        "top_logprobs": [{}],
        "text_offset": [123],
        "token_ids": [123]
      },
      "raw_output": {
        "prompt_fragments": ["<string>"],
        "prompt_token_ids": [123],
        "completion": "<string>",
        "completion_token_ids": [123],
        "completion_logprobs": {
          "content": [
            {
              "token": "<string>",
              "logprob": 123,
              "sampling_logprob": 123,
              "bytes": [123],
              "token_id": 123,
              "text_offset": 123,
              "top_logprobs": [
                {
                  "token": "<string>",
                  "logprob": 123,
                  "token_id": 123,
                  "bytes": [123]
                }
              ],
              "last_activation": "<string>",
              "routing_matrix": "<string>",
              "extra_tokens": [123],
              "extra_logprobs": [123]
            }
          ]
        }
      },
      "token_ids": [123]
    }
  ],
  "object": "chat.completion",
  "usage": {
    "prompt_tokens": 123,
    "total_tokens": 123,
    "completion_tokens": 123,
    "prompt_tokens_details": {
      "cached_tokens": 123
    }
  },
  "perf_metrics": {},
  "prompt_token_ids": [123]
}
Create a completion for the provided prompt and parameters.
Bearer authentication using your Fireworks API key. Format: Bearer <API_KEY>
The name of the model to use.
Example: "accounts/fireworks/models/kimi-k2-instruct-0905"
A list of messages comprising the conversation so far.
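Putting the two required fields together, a minimal request body might look like the following sketch (field names come from this reference; the endpoint wiring and values are illustrative):

```python
# Minimal /v1/chat/completions request body (sketch; values are illustrative).
request_body = {
    "model": "accounts/fireworks/models/kimi-k2-instruct-0905",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"},
    ],
}

# Each message's role must be one of system, user, or assistant.
allowed_roles = {"system", "user", "assistant"}
assert all(m["role"] in allowed_roles for m in request_body["messages"])
```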
The role of the messages author. One of system, user, or assistant.
The contents of the message. content is required for all messages, and may be null for assistant messages with function calls.
The reasoning or thinking process generated by the model. This field is only available for certain reasoning models (GLM 4.5, GLM 4.5 Air, GPT OSS 120B, GPT OSS 20B) and contains the model's internal reasoning that would otherwise appear in <think></think> tags within the content field.
The tool calls generated by the model, such as function calls.
The function that the model called.
The name of the function to call.
The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function.
The ID of the tool call.
The type of the tool. Currently, only function is supported.
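Since the arguments string may not be valid JSON and may include hallucinated parameters, a defensive parse like the following sketch can help (parse_tool_arguments is a hypothetical helper, not part of the API):

```python
import json

def parse_tool_arguments(tool_call: dict, allowed_params: set):
    """Parse a tool_call's arguments string, dropping any hallucinated
    parameters not defined in the function schema.
    Returns None if the arguments are not valid JSON."""
    try:
        args = json.loads(tool_call["function"]["arguments"])
    except (json.JSONDecodeError, KeyError):
        return None
    if not isinstance(args, dict):
        return None
    # Keep only parameters the function schema actually defines.
    return {k: v for k, v in args.items() if k in allowed_params}

# Illustrative tool call shaped like the response schema above.
call = {
    "id": "call_1",
    "type": "function",
    "function": {"name": "get_weather", "arguments": '{"city": "Paris", "made_up": 1}'},
}
parsed = parse_tool_arguments(call, {"city", "unit"})  # {"city": "Paris"}
```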
A list of tools the model may call. Currently, only functions are supported as a tool.
Use this to provide a list of functions the model may generate JSON inputs for.
See our model library for the list of supported models.
The type of the tool. Currently, only function is supported.
function Required for function tools.
The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64.
A description of what the function does, used by the model to choose when and how to call the function.
The parameters the function accepts, described as a JSON Schema object.
The JSON Schema object should have the following structure:
{
"type": "object",
"required": ["param1", "param2"],
"properties": {
"param1": {
"type": "string",
"description": "..."
},
"param2": {
"type": "number",
"description": "..."
}
}
}
The type field must be "object". The required field is an array of strings indicating which parameters are required. The properties field is a map of property names to their definitions, where each property is an object with type (string) and description (string) fields.
To describe a function that accepts no parameters, provide the value:
{"type": "object", "properties": {}}
Controls which (if any) tool is called by the model.
none: the model will not call any tool and instead generates a message.
auto: the model can pick between generating a message or calling one or more tools.
required (alias: any): the model must call one or more tools.
To force a specific function, pass an object of the form { "type": "function", "name": "my_function" } or { "type": "function", "function": { "name": "my_function" } } for OpenAI compatibility.
auto, none, any, required
Whether to stream back partial progress. If set, tokens will be sent as data-only server-sent events as they become available, with the stream terminated by a data: [DONE] message.
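Since streamed tokens arrive as data-only server-sent events terminated by a data: [DONE] message, a minimal line-level parser might look like this sketch (it consumes an iterable of raw lines and is not a full SSE implementation):

```python
import json

def iter_stream_chunks(lines):
    """Yield parsed JSON chunks from data-only SSE lines,
    stopping at the 'data: [DONE]' terminator."""
    for line in lines:
        line = line.strip()
        if not line.startswith("data: "):
            continue  # skip blank separator / keep-alive lines
        payload = line[len("data: "):]
        if payload == "[DONE]":
            break
        yield json.loads(payload)

# Illustrative raw lines as they might appear on the wire.
raw = [
    'data: {"choices": [{"delta": {"content": "Hel"}}]}',
    '',
    'data: {"choices": [{"delta": {"content": "lo"}}]}',
    'data: [DONE]',
]
chunks = list(iter_stream_chunks(raw))  # two parsed chunks
```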
Allows forcing the model to produce a specific output format.
Setting to { "type": "json_object" } enables JSON mode, which guarantees the message the model generates is valid JSON.
If "type" is "json_schema", a JSON schema must be provided. E.g., response_format = {"type": "json_schema", "json_schema": <json_schema>}.
Important: when using JSON mode, it's crucial to also instruct the model to produce JSON via a system or user message. Without this, the model may generate an unending stream of whitespace until the generation reaches the token limit, resulting in a long-running and seemingly "stuck" request.
Also note that the message content may be partially cut off if finish_reason="length", which indicates the generation exceeded max_tokens or the conversation exceeded the max context length. In this case the return value might not be valid JSON.
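Putting the two cautions above together, a JSON-mode request sketch that also instructs the model via the system message (field names from this reference; the prompt values are illustrative):

```python
# JSON-mode request body (sketch).
request_body = {
    "model": "accounts/fireworks/models/kimi-k2-instruct-0905",
    "response_format": {"type": "json_object"},  # enables JSON mode
    "messages": [
        # Important: also tell the model to produce JSON, or it may
        # emit whitespace until it reaches the token limit.
        {"role": "system", "content": "Reply only with a JSON object."},
        {"role": "user", "content": "List three primary colors."},
    ],
    "max_tokens": 256,
}
```

Before parsing the returned content, check that finish_reason is not "length", since a truncated message may not be valid JSON.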
What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.
We generally recommend altering this or top_p but not both.
Required range: 0 <= x <= 2
Example: 1
Top-k sampling is another sampling method where the k most probable next tokens are filtered and the probability mass is redistributed among only those k next tokens. The value of k controls the number of candidates for the next token at each step during text generation. Must be between 0 and 100.
Required range: 0 <= x <= 100
Example: 50
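The filtering and redistribution described above can be sketched in pure Python (an illustration over a probability list, not the server's implementation):

```python
def top_k_filter(probs, k):
    """Keep the k most probable tokens and redistribute the
    probability mass over just those k candidates."""
    indexed = sorted(enumerate(probs), key=lambda p: p[1], reverse=True)[:k]
    total = sum(p for _, p in indexed)
    return {i: p / total for i, p in indexed}

# With k=2, only the two most probable tokens survive, renormalized.
filtered = top_k_filter([0.5, 0.3, 0.1, 0.1], k=2)  # {0: 0.625, 1: 0.375}
```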
A unique identifier representing your end-user, which can help monitor and detect abuse.
Maximum length of the prompt to cache.
Isolation key for prompt caching to separate cache entries.
Return raw output from the model.
Whether to include performance metrics in the response body.
Non-streaming requests: Performance metrics are always included in response headers (e.g., fireworks-prompt-tokens, fireworks-server-time-to-first-token). Setting this to true additionally includes the same metrics in the response body under the perf_metrics field.
Streaming requests: Performance metrics are only included in the response body under the perf_metrics field in the final chunk (when finish_reason is set). This is because headers may not be accessible during streaming.
The response body perf_metrics field contains the following metrics:
Basic Metrics (all deployments):
prompt-tokens: Number of tokens in the prompt
server-time-to-first-token: Time from request start to first token (in seconds)
server-processing-time: Total processing time (in seconds, only for completed requests)
Predicted Outputs Metrics:
speculation-prompt-tokens: Number of speculative prompt tokens
speculation-prompt-matched-tokens: Number of matched speculative prompt tokens (for completed requests)
Dedicated Deployment Only Metrics:
speculation-generated-tokens: Number of speculative generated tokens (for completed requests)
speculation-acceptance: Speculation acceptance rates by position
cached-prompt-tokens: Number of cached prompt tokens
backend-host: Hostname of the backend server
num-concurrent-requests: Number of concurrent requests
deployment: Deployment name
tokenizer-queue-duration: Time spent in tokenizer queue
tokenizer-duration: Time spent in tokenizer
prefill-queue-duration: Time spent in prefill queue
prefill-duration: Time spent in prefill
generation-queue-duration: Time spent in generation queue
How many completions to generate for each prompt.
Note: Because this parameter generates many completions, it can quickly consume your token quota. Use carefully and ensure that you have reasonable settings for max_tokens and stop.
Required range: 1 <= x <= 128
Example: 1
Up to 4 sequences where the API will stop generating further tokens. The returned text will NOT contain the stop sequence.
The maximum number of tokens to generate in the completion. If the token count of your prompt plus max_tokens exceeds the model's context length, the behavior depends on context_length_exceeded_behavior. By default, max_tokens will be lowered to fit in the context window instead of returning an error.
Alias for max_tokens. Cannot be specified together with max_tokens.
An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.
We generally recommend altering this or temperature but not both.
Required range: 0 <= x <= 1
Example: 1
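Nucleus sampling can be sketched the same way: keep the smallest set of most probable tokens whose cumulative probability reaches top_p, then renormalize (a pure-Python illustration, not the server's implementation):

```python
def top_p_filter(probs, top_p):
    """Keep the most probable tokens whose cumulative probability
    mass reaches top_p, then renormalize over that nucleus."""
    indexed = sorted(enumerate(probs), key=lambda p: p[1], reverse=True)
    nucleus, cumulative = [], 0.0
    for i, p in indexed:
        nucleus.append((i, p))
        cumulative += p
        if cumulative >= top_p:
            break  # the nucleus now covers top_p of the mass
    total = sum(p for _, p in nucleus)
    return {i: p / total for i, p in nucleus}

# With top_p=0.7, tokens 0 and 1 (0.5 + 0.3 = 0.8 >= 0.7) form the nucleus.
filtered = top_p_filter([0.5, 0.3, 0.1, 0.1], top_p=0.7)
```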
Minimum probability threshold for token selection. Only tokens with probability >= min_p are considered for selection. This is an alternative to top_p and top_k sampling.
Required range: 0 <= x <= 1
Typical-p sampling is an alternative to nucleus sampling. It considers the most typical tokens whose cumulative probability is at most typical_p.
Required range: 0 <= x <= 1
Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.
Reasonable value is around 0.1 to 1 if the aim is to just reduce repetitive samples somewhat. If the aim is to strongly suppress repetition, then one can increase the coefficients up to 2, but this can noticeably degrade the quality of samples. Negative values can be used to increase the likelihood of repetition.
See also presence_penalty for penalizing tokens that have at least one appearance at a fixed rate.
OpenAI compatible (follows OpenAI's conventions for handling token frequency and repetition penalties).
Required range: -2 <= x <= 2
Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.
Reasonable value is around 0.1 to 1 if the aim is to just reduce repetitive samples somewhat. If the aim is to strongly suppress repetition, then one can increase the coefficients up to 2, but this can noticeably degrade the quality of samples. Negative values can be used to increase the likelihood of repetition.
See also frequency_penalty for penalizing tokens at an increasing rate depending on how often they appear.
OpenAI compatible (follows OpenAI's conventions for handling token frequency and repetition penalties).
Required range: -2 <= x <= 2
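The two penalties differ only in whether the count matters. A sketch in the style of OpenAI's published penalty formula (logit - count * frequency_penalty - presence_penalty if the token has appeared; the server's exact math is assumed, not confirmed):

```python
def penalize_logits(logits, counts, frequency_penalty=0.0, presence_penalty=0.0):
    """Adjust per-token logits given how often each token has already
    appeared (counts). frequency_penalty scales with the count;
    presence_penalty is a flat penalty for any appearance."""
    return [
        logit
        - counts.get(tok, 0) * frequency_penalty
        - (1.0 if counts.get(tok, 0) > 0 else 0.0) * presence_penalty
        for tok, logit in enumerate(logits)
    ]

# Token 0 appeared 3 times, token 1 once, token 2 never.
adjusted = penalize_logits(
    [2.0, 2.0, 2.0], {0: 3, 1: 1}, frequency_penalty=0.5, presence_penalty=1.0
)  # [-0.5, 0.5, 2.0]
```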
Applies a penalty to repeated tokens to discourage or encourage repetition. A value of 1.0 means no penalty, allowing free repetition. Values above 1.0 penalize repetition, reducing the likelihood of repeating tokens. Values between 0.0 and 1.0 reward repetition, increasing the chance of repeated tokens. For a good balance, a value of 1.2 is often recommended. Note that the penalty is applied to both the generated output and the prompt in decoder-only models.
Required range: 0 <= x <= 2
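A common implementation of this multiplicative penalty (the convention popularized by the CTRL paper and used by Hugging Face; assumed, not confirmed, to match the server) divides positive logits and multiplies negative ones for previously seen tokens:

```python
def apply_repetition_penalty(logits, seen_tokens, penalty=1.2):
    """Penalize (penalty > 1) or reward (0 < penalty < 1) tokens that
    already appeared. penalty == 1.0 leaves logits unchanged."""
    out = list(logits)
    for tok in seen_tokens:
        # Dividing a positive logit and multiplying a negative one
        # both make the token less likely when penalty > 1.
        out[tok] = out[tok] / penalty if out[tok] > 0 else out[tok] * penalty
    return out

adjusted = apply_repetition_penalty([2.0, -2.0, 1.0], seen_tokens={0, 1})
```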
Defines the target perplexity for the Mirostat algorithm. Perplexity measures the unpredictability of the generated text, with higher values encouraging more diverse and creative outputs, while lower values prioritize predictability and coherence. The algorithm dynamically adjusts the token selection to maintain this target during text generation.
If not specified, Mirostat sampling is disabled.
Specifies the learning rate for the Mirostat sampling algorithm, which controls how quickly the model adjusts its token distribution to maintain the target perplexity. A smaller value slows down the adjustments, leading to more stable but gradual shifts, while higher values speed up corrections at the cost of potential instability.
Random seed for deterministic sampling.
Include log probabilities in the response. This accepts either a boolean or an integer:
If set to true, log probabilities are included and the number of alternatives can be controlled via top_logprobs (OpenAI-compatible behavior).
If set to an integer N (0-5), include log probabilities for up to N most likely tokens per position in the legacy format.
The API will always return the logprob of the sampled token, so there may be up to logprobs+1 elements in the response when an integer is used. The maximum value for the integer form is 5.
An integer between 0 and 5 specifying the number of most likely tokens to return at each token position, each with an associated log probability. The minimum value is 0 and the maximum value is 5.
When logprobs is set, top_logprobs can be used to modify how many top log probabilities are returned. If top_logprobs is not set, the API will return up to logprobs tokens per position.
Required range: 0 <= x <= 5
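Returned log probabilities are natural logarithms (following OpenAI's convention; treat that as an assumption), so converting a top_logprobs entry back to a probability is a one-liner. The entry shape below follows the response schema above; the values are illustrative:

```python
import math

entry = {"token": "Hello", "logprob": -0.105}  # illustrative values
probability = math.exp(entry["logprob"])  # about 0.90
```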
Echo back the prompt in addition to the completion.
Echo back the last N tokens of the prompt in addition to the completion. This is useful for obtaining logprobs of the prompt suffix without transferring too much data. Passing echo_last=len(prompt) is equivalent to echo=true.
This setting controls whether the model should ignore the end-of-sequence (EOS) token. When set to true, the model continues generating tokens even after the EOS token is produced. By default, generation stops when the EOS token is reached.
What to do if the token count of prompt plus max_tokens exceeds the model's context window.
Passing truncate limits the max_tokens to at most context_window_length - prompt_length. This is the default.
Passing error would trigger a request error.
The default of 'truncate' is selected because it allows asking for a high max_tokens value while respecting the context window length, without requiring client-side prompt tokenization.
Note that this differs from OpenAI's behavior, which matches that of 'error'.
error, truncate
Modify the likelihood of specified tokens appearing in the completion. Accepts a JSON object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling.
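The bias addition can be sketched as follows: add each bias to the corresponding raw logit, then softmax. This is a pure-Python illustration (the request uses token IDs as keys; a bias of -100 effectively bans a token):

```python
import math

def apply_logit_bias(logits, logit_bias):
    """Add per-token-ID biases (-100 to 100) to raw logits prior to
    sampling, then softmax into probabilities."""
    biased = [l + logit_bias.get(str(i), 0.0) for i, l in enumerate(logits)]
    m = max(biased)  # subtract the max for numerical stability
    exps = [math.exp(b - m) for b in biased]
    total = sum(exps)
    return [e / total for e in exps]

# Ban token 2 outright, nudge token 0 upward.
probs = apply_logit_bias([1.0, 1.0, 1.0], {"0": 2.0, "2": -100.0})
```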
Speculative decoding prompt or token IDs to speed up generation.
Applicable to reasoning models only, this option controls the reasoning token length. It can be set to 'none', 'low', 'medium', or 'high', or to an integer. 'low', 'medium', and 'high' correspond to progressively higher thinking effort and thus longer reasoning output; 'none' disables thinking. Alternatively, set the option to an integer to impose a hard cutoff on reasoning token length (this is not entirely OpenAI compatible; you may have to use the fireworks.ai client library to bypass the schema check). Note: for OpenAI GPT OSS models, only the string values ('low', 'medium', 'high') are supported; integer values will not work with these models.
low, medium, high, none
Return token IDs alongside text to avoid retokenization drift.
Deprecated in OpenAI. Use 'tools' instead. This will be automatically transformed to tools.
The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64.
A description of what the function does, used by the model to choose when and how to call the function.
The parameters the function accepts, described as a JSON Schema object.
The JSON Schema object should have the following structure:
{
"type": "object",
"required": ["param1", "param2"],
"properties": {
"param1": {
"type": "string",
"description": "..."
},
"param2": {
"type": "number",
"description": "..."
}
}
}
The type field must be "object". The required field is an array of strings indicating which parameters are required. The properties field is a map of property names to their definitions, where each property is an object with type (string) and description (string) fields.
To describe a function that accepts no parameters, provide the value:
{"type": "object", "properties": {}}
The size (in tokens) to which to truncate chat prompts. This includes the system prompt (if any), previous user/assistant messages, and the current user message. Earlier user/assistant messages will be evicted first to fit the prompt into this length. The system prompt is preserved whenever possible and only truncated as a last resort.
This should usually be set to a number much smaller than the model's maximum context size, to allow enough remaining tokens for generating a response.
If omitted, you may receive "prompt too long" errors in your responses as conversations grow. Note that even with this set, you may still receive "prompt too long" errors if individual messages (such as a very long system prompt or user message) exceed the model's context window on their own.
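The eviction order described above (oldest user/assistant turns dropped first, system prompt preserved whenever possible) can be sketched with a stand-in token counter (real truncation uses the model's tokenizer; the word-count lambda here is a placeholder):

```python
def truncate_chat(messages, max_tokens,
                  count_tokens=lambda m: len(m["content"].split())):
    """Drop the earliest non-system messages until the conversation
    fits within max_tokens. count_tokens is a stand-in for real
    tokenization (here: whitespace word count)."""
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    while rest and sum(map(count_tokens, system + rest)) > max_tokens:
        rest.pop(0)  # evict the oldest user/assistant message first
    return system + rest

msgs = [
    {"role": "system", "content": "be brief"},
    {"role": "user", "content": "first question here"},
    {"role": "assistant", "content": "first answer here"},
    {"role": "user", "content": "second question"},
]
kept = truncate_chat(msgs, max_tokens=7)  # oldest user turn is evicted
```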
Enable parallel function calling.
Deprecated in OpenAI. Use 'tool_choice' instead. This will be automatically transformed to tool_choice.
auto, none
Successful Response
The response message from a /v1/chat/completions call.
A unique identifier of the response
The Unix time in seconds when the response was generated
The model used for the chat completion
The list of chat completion choices
A chat completion message.
The role of the messages author. One of system, user, or assistant.
The contents of the message. content is required for all messages, and may be null for assistant messages with function calls.
The reasoning or thinking process generated by the model. This field is only available for certain reasoning models (GLM 4.5, GLM 4.5 Air, GPT OSS 120B, GPT OSS 20B) and contains the model's internal reasoning that would otherwise appear in <think></think> tags within the content field.
The tool calls generated by the model, such as function calls.
The function that the model called.
The name of the function to call.
The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function.
The ID of the tool call.
The type of the tool. Currently, only function is supported.
Legacy log probabilities format
Extension of OpenAI that returns low-level interaction of what the model sees, including the formatted prompt and function calls
Pieces of the prompt (like individual messages) before truncation and concatenation. Depending on prompt_truncate_len, some of the messages might be dropped. Contains a mix of strings to be tokenized and individual tokens (when dictated by the conversation template).
Fully processed prompt as seen by the model
Raw completion produced by the model before any tool calls are parsed
Token IDs for the raw completion
Log probabilities for the completion. Only populated if logprobs is specified in the request
The object type, which is always "chat.completion"
Usage statistics.
The number of tokens in the prompt
The total number of tokens used in the request (prompt + completion)
The number of tokens in the generated completion
See the perf_metrics_in_response parameter.
Token IDs for the prompt (when return_token_ids=true)