Discussions
How to get a 2048-dimensional embedding from the voyage-3.5 model?
Hi, I'm using the TypeScript library to get voyage-3.5 embeddings. Although the documentation describes the output_dimension parameter of a request, the TypeScript library does not support it. The embed() method takes a VoyageAI.EmbedRequest object, which contains only the input, model, inputType, truncation, and encodingFormat properties; there is neither output_dimension nor output_dtype. Even if I cast the request to any and include output_dimension: 2048 in it, the embed() method still returns a 1024-dimensional vector. How can I request a different embedding size?
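One workaround, until the TypeScript client exposes the parameter, is to call the REST endpoint directly with fetch. This is a minimal sketch, assuming the documented request and response shapes (output_dimension in the JSON body, embeddings under data[].embedding) and an API key in the VOYAGE_API_KEY environment variable:

```typescript
// Sketch: call the REST endpoint directly so the documented output_dimension
// parameter can be passed, since the TypeScript client does not expose it yet.
async function embedWithDimension(texts: string[]): Promise<number[][]> {
  const response = await fetch("https://api.voyageai.com/v1/embeddings", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.VOYAGE_API_KEY}`, // assumes the key is in the environment
    },
    body: JSON.stringify({
      input: texts,
      model: "voyage-3.5",
      output_dimension: 2048, // documented request parameter
    }),
  });
  if (!response.ok) {
    throw new Error(`Embedding request failed: ${response.status}`);
  }
  const json = await response.json();
  // Documented response shape: { data: [{ embedding: number[] }, ...], model, usage }
  return json.data.map((d: { embedding: number[] }) => d.embedding);
}
```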
too many concurrent streams
Running into a "too many concurrent streams" exception -- I take it that is a bit different from requests per minute or maximum tokens being exceeded. How many concurrent streams are allowed?
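The exact limit isn't stated here, but capping in-flight requests on the client side is a simple way to stay under whatever it is. A minimal sketch of such a cap; the worker function and the limit of 4 are placeholders to tune:

```typescript
// Sketch: keep at most `maxConcurrent` requests in flight at a time.
async function mapWithConcurrency<T, R>(
  items: T[],
  worker: (item: T) => Promise<R>,
  maxConcurrent = 4, // a guess; tune to whatever limit the API enforces
): Promise<R[]> {
  const results: R[] = new Array(items.length);
  let next = 0;
  async function run(): Promise<void> {
    // Each worker pulls the next unclaimed index until the queue is drained.
    while (next < items.length) {
      const i = next++;
      results[i] = await worker(items[i]);
    }
  }
  await Promise.all(Array.from({ length: maxConcurrent }, run));
  return results;
}
```

Called as mapWithConcurrency(batches, embedBatch, 4), it keeps at most four embedding requests open at once.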
Does the Voyage multimodal embedding API support the Sinhala language?
I'm working on a large project with strict accuracy requirements, so I want to know whether the Voyage multimodal embedding API will perform well for the Sinhala language.
Is rerank-3 coming soon?
Lots of new stuff in the third generation of embeddings, like voyage-3-large, the 3.5 series, and contextual embeddings.
Multimodal Reranker
Is a multimodal reranker in the works?
I'm getting different embeddings for the SAME encoding
Here is my code
Is it normal for API responses to take 3-5 seconds?
We're currently using OpenAI and are used to response times of a couple hundred milliseconds. Benchmarking Voyage, recall is definitely improved, but it comes at the cost of 10x slower embeddings? Am I doing something wrong?
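One thing worth checking before comparing raw numbers is batching, since a single request carrying many texts amortizes the per-call latency. A rough timing sketch; embedBatch stands in for whatever wrapper you already use around the embeddings API:

```typescript
// Sketch: compare one batched request against sequential single-text requests.
async function compareLatency(
  texts: string[],
  embedBatch: (batch: string[]) => Promise<number[][]>,
): Promise<void> {
  const t0 = performance.now();
  await embedBatch(texts); // one request carrying every text
  const batchedMs = performance.now() - t0;

  const t1 = performance.now();
  for (const text of texts) {
    await embedBatch([text]); // one request per text
  }
  const sequentialMs = performance.now() - t1;

  console.log(`batched: ${batchedMs.toFixed(0)} ms, sequential: ${sequentialMs.toFixed(0)} ms`);
}
```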
Checking token count in TypeScript
I see a reference to count_tokens in Python. What about TypeScript? A plain REST API would also be fine.
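Pending an official helper in the TypeScript client, one option is to run the model's tokenizer locally with transformers.js. This is a sketch under the assumption that the tokenizer for your model is published on the Hugging Face Hub; the repo id voyageai/voyage-3.5 is an assumption, so check the Hub for the model you actually use:

```typescript
// Sketch: count tokens locally with the model's published tokenizer.
// The repo id below is an assumption -- verify it on the Hugging Face Hub.
import { AutoTokenizer } from "@xenova/transformers";

async function countTokens(texts: string[]): Promise<number> {
  const tokenizer = await AutoTokenizer.from_pretrained("voyageai/voyage-3.5");
  let total = 0;
  for (const text of texts) {
    // encode() returns the token ids for one text; sum their lengths.
    total += tokenizer.encode(text).length;
  }
  return total;
}
```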
Status code 404: bad response
Using the voyage-code-3 model against the API endpoint https://api.voyageai.com/v1/embeddings, I get a 404 error.
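A quick way to narrow down a 404 is to send a minimal request and log exactly what comes back; the response body is often more informative than the status code alone. A sketch assuming an API key in the VOYAGE_API_KEY environment variable:

```typescript
// Sketch: send a minimal embeddings request and print status plus raw body
// to see whether the problem is the path, the HTTP method, or the model name.
async function debugEmbeddings(): Promise<void> {
  const response = await fetch("https://api.voyageai.com/v1/embeddings", {
    method: "POST", // the embeddings endpoint expects POST
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.VOYAGE_API_KEY}`,
    },
    body: JSON.stringify({ input: ["hello"], model: "voyage-code-3" }),
  });
  console.log(response.status, await response.text());
}
```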
Support for Arabic Language
Hi guys,