Generate vector embeddings for text
The `/v1/embeddings` endpoint generates vector embeddings (numerical representations) for a provided set of input texts. These embeddings are optimized for semantic search, classification, clustering, and other machine learning tasks.
Authentication

Send an Authorization header containing your Heroku Inference API key. The key is stored in the `EMBEDDING_KEY` config variable, assuming you created the model resource with the `--as EMBEDDING` flag.
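As a minimal sketch, the auth header can be built from the config variable like this (the helper name and the use of a plain dict are illustrative, not part of the API):

```python
import os

def build_headers(env=os.environ):
    """Build request headers for the /v1/embeddings endpoint.

    Assumes the model resource was attached with `--as EMBEDDING`,
    so the API key lives in the EMBEDDING_KEY config variable.
    """
    return {
        "Authorization": f"Bearer {env['EMBEDDING_KEY']}",
        "Content-Type": "application/json",
    }
```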
"cohere-embed-multilingual"
96 strings2048 characters each512 tokens per stringsearch_document, search_query, classification, clustering
Example: "search_document" for indexing documents, "search_query" for search queries
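The batch limits above (96 strings per request, 2,048 characters per string) can be checked client-side before sending. A sketch of such a helper (illustrative only; the 512-tokens-per-string limit can't be checked exactly without the model's tokenizer, so it's omitted here):

```python
MAX_STRINGS = 96   # maximum input strings per request
MAX_CHARS = 2048   # maximum characters per string

def validate_input(texts):
    """Raise ValueError if the batch exceeds the documented limits.

    Note: only string count and character length are validated;
    the 512-token limit requires the server-side tokenizer.
    """
    if len(texts) > MAX_STRINGS:
        raise ValueError(f"too many strings: {len(texts)} > {MAX_STRINGS}")
    for i, text in enumerate(texts):
        if len(text) > MAX_CHARS:
            raise ValueError(f"string {i} too long: {len(text)} > {MAX_CHARS} chars")
    return texts
```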
"raw"
Determines the encoding format of the output.
Options: raw, base64
"float"
Specifies the type(s) of embeddings to return.
Options: float, int8, uint8, binary, ubinary
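When `encoding_format` is base64, each embedding arrives as a base64 string rather than a JSON array of numbers. A hedged decoding sketch (the byte layout is an assumption to verify against real API output: little-endian float32 for `embedding_type: "float"`, signed bytes for int8):

```python
import base64
import struct

def decode_embedding(data, embedding_type="float"):
    """Decode a base64-encoded embedding into a list of numbers.

    Assumption: "float" embeddings are packed little-endian float32,
    "int8" embeddings as signed bytes. Check against actual responses.
    """
    raw = base64.b64decode(data)
    if embedding_type == "float":
        return list(struct.unpack(f"<{len(raw) // 4}f", raw))
    if embedding_type == "int8":
        return list(struct.unpack(f"{len(raw)}b", raw))
    raise ValueError(f"unsupported embedding_type: {embedding_type}")
```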
- A boolean flag, optional, default false: when true, unsupported parameters in the request are ignored instead of causing an error.
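Putting the parameters together, an end-to-end sketch using only the standard library. The payload fields follow the parameter list above; the `EMBEDDING_URL` config variable and the split into a pure payload builder plus a sender are assumptions for illustration:

```python
import json
import os
import urllib.request

def build_request(texts, input_type="search_document",
                  model="cohere-embed-multilingual",
                  encoding_format="raw", embedding_type="float"):
    """Assemble the JSON payload for POST /v1/embeddings."""
    return {
        "model": model,
        "input": texts,
        "input_type": input_type,
        "encoding_format": encoding_format,
        "embedding_type": embedding_type,
    }

def embed(texts, **kwargs):
    """Send the request. Assumes EMBEDDING_URL / EMBEDDING_KEY config vars."""
    req = urllib.request.Request(
        os.environ["EMBEDDING_URL"] + "/v1/embeddings",
        data=json.dumps(build_request(texts, **kwargs)).encode(),
        headers={
            "Authorization": f"Bearer {os.environ['EMBEDDING_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```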
"list".
Embedding Object
"embedding"index (integer): Index of the input string this embedding corresponds to (starting from 0)embedding (array or string): The embedding vector of the specified embedding_typeprompt_tokens (integer): Tokens in the inputtotal_tokens (integer): Total tokens usedinput_type: "search_document" when embedding documents for your search index, and input_type: "search_query" when embedding user queries.
input_type: "classification" when creating embeddings for text classification tasks.
input_type: "clustering" when grouping similar texts together.
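To act on the search guidance above: embed documents with search_document, embed the user's query with search_query, then rank by cosine similarity. A self-contained sketch over a response shaped like the fields described above (the name of the array field, `data`, is an assumption):

```python
import math

def extract_vectors(response):
    """Return embeddings ordered by their `index` field.

    Assumes embedding objects live in a `data` array; verify the
    field name against an actual response.
    """
    items = sorted(response["data"], key=lambda e: e["index"])
    return [e["embedding"] for e in items]

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)
```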
Pass the key as a Bearer token. If you attached the model resource without an alias, the key is in the default `INFERENCE_KEY` config variable instead.