Discussions


Is it normal for API responses to take 3-5 seconds?

We're currently using OpenAI and we're used to response times of a couple hundred milliseconds. Benchmarking Voyage, recall is definitely improved, but does that come at the cost of 10x slower embeddings? Am I doing something wrong?
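When per-request overhead dominates, batching many texts into a single request is a common way to amortize latency. A minimal sketch, assuming a generic `embed_batch` callback standing in for whatever client call you use; the batch size is a placeholder, not a documented provider limit:

```python
from typing import Callable, Iterable, List

def chunked(texts: List[str], batch_size: int) -> Iterable[List[str]]:
    """Yield consecutive batches of at most batch_size texts."""
    for start in range(0, len(texts), batch_size):
        yield texts[start:start + batch_size]

def embed_all(texts: List[str],
              embed_batch: Callable[[List[str]], List[list]],
              batch_size: int = 128) -> List[list]:
    """Embed texts in batches to amortize per-request latency.

    embed_batch is hypothetical: plug in one HTTP request per batch.
    batch_size=128 is a placeholder; check your provider's actual limit.
    """
    vectors: List[list] = []
    for batch in chunked(texts, batch_size):
        vectors.extend(embed_batch(batch))
    return vectors
```

With a batch of 128 documents per call, the fixed network round-trip cost is paid once per 128 embeddings instead of once each.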

Can I get a refund for my credit?

Dear Support Team,

Checking token count on typescript

I see references to count_tokens in Python. What about TypeScript? A plain REST API would also be fine.

Support for Arabic Language

Hi guys,

Input Token

If the input exceeds a certain token length, the request times out and the service cannot be used at all.

Checking Input Token Size Before API Call

I want to verify whether my input token size exceeds the model’s limit. Is there a built-in function to calculate token length without making an API call and incurring costs? If using tiktoken is the only option, which encoding should I use?
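I can't confirm which tiktoken encoding, if any, matches this model's tokenizer, so any local count is only an estimate. One hedged pre-filter is a rough character-based heuristic (about 4 characters per token is a common rule of thumb for English) with a safety margin, used before any authoritative count; both function names and the margin are illustrative, not part of any official API:

```python
def estimate_tokens(text: str, chars_per_token: float = 4.0) -> int:
    """Rough token estimate using a ~4 chars/token English heuristic.

    This is an approximation only: the model's real tokenizer is
    authoritative, so treat this purely as a cheap pre-filter.
    """
    return max(1, round(len(text) / chars_per_token))

def fits_context(text: str, limit: int, margin: float = 0.9) -> bool:
    """Cheap pre-check: accept only if the estimate stays under
    margin * limit, leaving headroom for estimation error."""
    return estimate_tokens(text) <= int(limit * margin)
```

Inputs that fail this check can be truncated or split locally before ever reaching the API, so nothing is billed for oversize requests.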

Healthcare-specific models

You previously mentioned healthcare-specific embedding models. Any plans on making one available soon? Thank you!

Responses become slow under frequent requests

In my test, when I send 60-70 requests within 5 seconds, the response time gradually climbs from 1.5 s to more than 20 s. I would like to ask whether the processing rate is deliberately degraded (throttled) under load.
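Latency that climbs as request rate rises is a typical symptom of server-side rate limiting or queueing; throttling on the client side keeps you under the limit instead of queueing behind it. A minimal sketch, assuming a token-bucket policy (the rate and capacity numbers are placeholders, not documented limits; the clock is injectable so the logic is testable):

```python
import time

class TokenBucket:
    """Client-side rate limiter: at most `rate` requests per second,
    with bursts up to `capacity` allowed."""

    def __init__(self, rate: float, capacity: float, now=time.monotonic):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.now = now
        self.last = self.now()

    def try_acquire(self) -> bool:
        """Spend one token if available, refilling by elapsed time.

        Returns False when the caller should back off and retry later.
        """
        t = self.now()
        self.tokens = min(self.capacity,
                          self.tokens + (t - self.last) * self.rate)
        self.last = t
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

Calling `try_acquire()` before each request, and sleeping briefly when it returns False, smooths bursts of 60-70 requests into a steady rate the server will accept without queueing.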

Nordic languages support

Hello, I would like to know whether any of the Voyage embedding models support Swedish and Finnish. If there are any comparisons with Cohere's multilingual models, it would be really interesting to see the results!