Example request:

curl --request POST \
  --url https://api.fireworks.ai/inference/v1/embeddings \
  --header 'Authorization: Bearer <token>' \
  --header 'Content-Type: application/json' \
  --data '
{
  "input": "The quick brown fox jumped over the lazy dog",
  "model": "nomic-ai/nomic-embed-text-v1.5",
  "prompt_template": "Embed this text: {text}",
  "dimensions": 768,
  "return_logits": [0, 1, 2],
  "normalize": false
}
'

Example response:

{
  "data": [
    {
      "index": 123,
      "embedding": [
        123
      ],
      "object": "embedding"
    }
  ],
  "model": "<string>",
  "object": "list",
  "usage": {
    "prompt_tokens": 123,
    "total_tokens": 123
  }
}

Bearer authentication header of the form Bearer <token>, where <token> is your auth token.
Input text to embed, encoded as a string. To embed multiple inputs in a single request, pass an array of strings. You can also pass structured object(s) for use with the prompt_template. The input must not exceed the model's maximum input length (8192 tokens for nomic-ai/nomic-embed-text-v1.5), cannot be an empty string, and any input array may contain at most 2048 entries.
"The quick brown fox jumped over the lazy dog"
The model to use for generating embeddings.
"nomic-ai/nomic-embed-text-v1.5"
Template string for processing input data before embedding. When provided, fields from the input object are substituted into the template using Jinja2; simple substitution uses the {field_name} syntax. The resulting string(s) are then embedded. For array inputs, each object generates a separate string.
Additionally, a truncate_tokens(string) function is exposed to the template, which truncates the string based on token length rather than character length.
"Embed this text: {text}"
The number of dimensions the resulting output embeddings should have. Only supported in nomic-ai/nomic-embed-text-v1.5 and later models.
x >= 1
768
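A sketch of what reduced dimensions imply, under the assumption that nomic-embed-text-v1.5 is trained Matryoshka-style, so a shorter embedding is roughly a truncation of the full vector, renormalized. Passing dimensions asks the server to do this for you; the helper below is purely illustrative:

```python
import math

def truncate_embedding(vec, dims):
    """Assumption (Matryoshka-style model): keep the first `dims` values,
    then renormalize to a unit vector. The API does this server-side
    when "dimensions" is set."""
    head = vec[:dims]
    norm = math.sqrt(sum(x * x for x in head))
    return [x / norm for x in head]

short = truncate_embedding([0.5, 0.5, 0.5, 0.5], 2)
```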
If provided, returns raw model logits (pre-softmax scores) for specified token or class indices. If an empty list is provided, returns logits for all available tokens/classes. Otherwise, only the specified indices are returned.
When used with normalize=true, softmax is applied to create probability distributions. Softmax is applied only to the selected tokens, so the output probabilities always sum to 1.
[0, 1, 2]
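The selected-logits softmax described above can be sketched in a few lines. An empty or omitted index list selects all logits, matching the documented empty-list behavior; the function name is illustrative:

```python
import math

def softmax_over(logits, indices=None):
    """Softmax over the selected logit indices only, so the resulting
    probabilities sum to 1 over that subset. Empty/None selects all."""
    selected = logits if not indices else [logits[i] for i in indices]
    m = max(selected)                        # subtract max for stability
    exps = [math.exp(x - m) for x in selected]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax_over([2.0, 1.0, 0.5, -3.0], indices=[0, 1, 2])
```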
Controls normalization of the output. When return_logits is not provided, embeddings are L2 normalized (unit vectors). When return_logits is provided, softmax is applied to the selected logits to create probability distributions.
false
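The default behavior (L2-normalized embeddings) means every returned vector has unit length. A minimal sketch of that normalization, for reference:

```python
import math

def l2_normalize(vec):
    """Scale a vector to unit L2 norm, as the API does by default."""
    norm = math.sqrt(sum(x * x for x in vec))
    return [x / norm for x in vec]

unit = l2_normalize([3.0, 4.0])
```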
OK
The list of embeddings generated by the model.
The index of the embedding in the list of embeddings.
The embedding vector, which is a list of floats. The length of the vector depends on the model, as listed in the embedding guide.
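A common downstream use of these vectors is cosine similarity; since the embeddings are L2-normalized by default, this reduces to a dot product. A minimal sketch:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors. For unit vectors
    this equals the plain dot product."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

sim = cosine_similarity([1.0, 0.0], [1.0, 1.0])
```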
The object type, which is always "embedding".
embedding
The name of the model used to generate the embedding.
The object type, which is always "list".
list
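Putting the response schema together: results arrive in the data list keyed by index, so sorting by index keeps vectors aligned with the original inputs. A sketch using a hypothetical stand-in payload with the shape shown above:

```python
# Hypothetical stand-in response matching the documented shape,
# not real API output.
response = {
    "data": [
        {"index": 1, "embedding": [0.4, 0.5, 0.6], "object": "embedding"},
        {"index": 0, "embedding": [0.1, 0.2, 0.3], "object": "embedding"},
    ],
    "model": "nomic-ai/nomic-embed-text-v1.5",
    "object": "list",
    "usage": {"prompt_tokens": 18, "total_tokens": 18},
}

# Sort by "index" before unpacking so vectors line up with the inputs.
vectors = [
    item["embedding"]
    for item in sorted(response["data"], key=lambda d: d["index"])
]
```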