Discussions

How to get 2048 byte embedding for voyage-3.5 model?

Hi, I'm using the TypeScript library to get voyage-3.5 embeddings. Although the documentation describes an output_dimension parameter on the request, the TypeScript library does not support it: the embed() method takes a VoyageAI.EmbedRequest object that contains only the input, model, inputType, truncation, and encodingFormat properties; there is neither output_dimension nor output_dtype. Even if I build a request of type any and include output_dimension: 2048, the embed method still returns a 1024-dimension vector. How can I request a different embedding size?
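Until the typed client exposes the parameter, one workaround is to call the REST endpoint directly, where output_dimension is accepted according to the API documentation. A minimal sketch; the endpoint URL and response shape follow the public docs, but verify them against the current reference before relying on this:

```typescript
// Sketch: request a 2048-dimension voyage-3.5 embedding via the REST API,
// bypassing the typed client. Parameter names follow the public API docs.
async function embedWithDimension(
  texts: string[],
  apiKey: string,
  outputDimension = 2048,
): Promise<number[][]> {
  const res = await fetch("https://api.voyageai.com/v1/embeddings", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify({
      model: "voyage-3.5",
      input: texts,
      output_dimension: outputDimension, // not yet in VoyageAI.EmbedRequest
    }),
  });
  if (!res.ok) throw new Error(`Voyage API error: ${res.status}`);
  const data = await res.json();
  // Documented response shape: { data: [{ embedding: number[] }, ...] }
  return data.data.map((d: { embedding: number[] }) => d.embedding);
}
```

Once the client library adds the field, the same request can move back to embed() unchanged.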

Does the Voyage multimodal embedding API support Sinhala?

I'm working on a large project with high accuracy requirements, so I want to know whether the Voyage multimodal embedding API will perform well for Sinhala.

I'm getting different embeddings for the SAME encoding

Here is my code
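When two runs of the same input produce different vectors, it helps to quantify how different they actually are rather than eyeballing the raw numbers. A small cosine-similarity helper (plain TypeScript, no Voyage-specific API assumed); identical vectors score 1, and tiny floating-point jitter shows up as a value just below 1:

```typescript
// Cosine similarity between two embedding vectors of equal length.
function cosineSimilarity(a: number[], b: number[]): number {
  if (a.length !== b.length) throw new Error("length mismatch");
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```

If two calls with the same input score, say, 0.999+, the difference is numeric noise; a noticeably lower score points at a real difference in what was sent.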

Is it normal for API responses to take 3-5 seconds?

We're currently using OpenAI and are used to response times of a couple hundred milliseconds. Benchmarking Voyage, recall is definitely improved, but does it come at the cost of 10x slower embeddings? Am I doing something wrong?
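One way to pin down where the time goes is to time individual calls and compare single-input versus batched requests, since batching amortizes per-request overhead. A minimal timing wrapper; the embed call in the usage comment is left abstract because client setup varies:

```typescript
// Time an async operation in milliseconds using the standard performance API.
async function timeMs<T>(fn: () => Promise<T>): Promise<{ result: T; ms: number }> {
  const start = performance.now();
  const result = await fn();
  return { result, ms: performance.now() - start };
}

// Usage sketch (hypothetical client call):
// const { ms } = await timeMs(() => client.embed({ input: texts, model: "voyage-3.5" }));
// console.log(`embedding took ${ms.toFixed(0)} ms for ${texts.length} inputs`);
```

Timing a batch of 100 inputs against 100 single-input calls usually makes it obvious whether latency is per-request overhead or per-token compute.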

Checking token count on typescript

I see a reference to count_tokens in the Python library. Is there an equivalent for TypeScript? A REST API would also be fine.

Support for Arabic Language

Hi guys,

Input Token

If the input exceeds a certain token length, the API response times out and the service cannot be used at all.
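Splitting long inputs into bounded chunks before calling the API is the usual workaround for timeouts on very large payloads. A character-based chunker as a sketch; character counts are only a rough proxy for tokens, and the default size below is illustrative, not a documented limit:

```typescript
// Split text into chunks of at most maxChars characters, breaking on the last
// whitespace inside the window where possible so words are not cut in half.
function chunkText(text: string, maxChars = 4000): string[] {
  const chunks: string[] = [];
  let remaining = text;
  while (remaining.length > maxChars) {
    let cut = remaining.lastIndexOf(" ", maxChars);
    if (cut <= 0) cut = maxChars; // no whitespace found: hard cut
    chunks.push(remaining.slice(0, cut));
    remaining = remaining.slice(cut).trimStart();
  }
  if (remaining.length > 0) chunks.push(remaining);
  return chunks;
}
```

Each chunk can then be embedded in its own request (or batched a few at a time), which keeps any single call well under the timeout.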

Checking Input Token Size Before API Call

I want to verify whether my input token size exceeds the model’s limit. Is there a built-in function to calculate token length without making an API call and incurring costs? If using tiktoken is the only option, which encoding should I use?
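On the tiktoken question: Voyage models use their own tokenizer rather than OpenAI's tiktoken encodings, so any tiktoken count will only approximate the real number. For a purely offline pre-flight check, a character-based estimate can be compared against the model's context limit. The ~4 characters-per-token ratio and the default limit below are illustrative assumptions, not documented values; use the model's actual tokenizer for exact counts:

```typescript
// Rough offline token estimate: ~4 characters per token for English-like text.
// This is a heuristic, not the real Voyage tokenizer.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

// Pre-flight check against an assumed context limit; verify the real limit
// for your model in the documentation.
function fitsContext(text: string, tokenLimit = 32000): boolean {
  return estimateTokens(text) <= tokenLimit;
}
```

Because the heuristic can undercount for non-English or code-heavy text, it works best as a coarse gate with a safety margin, with the exact tokenizer as the final check.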

Healthcare-specific models

You previously mentioned healthcare-specific embedding models. Any plans on making one available soon? Thank you!