Discussions
Language Support in Embedding model
Hi there, can you please list the languages that Voyage AI's embedding model natively supports?
How to get 2048 byte embedding for voyage-3.5 model?
Hi, I'm using the TypeScript library to get voyage-3.5 embeddings. Although the documentation describes an output_dimension parameter on the request, the TypeScript library does not support it: the embed() method takes a VoyageAI.EmbedRequest object that contains only the input, model, inputType, truncation, and encodingFormat properties. There is neither output_dimension nor output_dtype. Even if I cast the request to any and include output_dimension: 2048 in it, the embed method still returns a 1024-dimension embedding vector. How can I request a different embedding size?
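One workaround while the TypeScript client's types lag the REST API is to call the embeddings endpoint directly and pass output_dimension in the JSON body yourself. This is a minimal sketch, assuming the standard `https://api.voyageai.com/v1/embeddings` endpoint and the output_dimension parameter described in the docs; whether 2048 is an accepted value for voyage-3.5 should be confirmed against the current documentation.

```typescript
// Request body for Voyage's REST embeddings endpoint, including the
// output_dimension field that the TypeScript client's EmbedRequest type omits.
interface EmbedBody {
  input: string[];
  model: string;
  output_dimension?: number;
}

function buildEmbedBody(texts: string[], model: string, dim?: number): EmbedBody {
  const body: EmbedBody = { input: texts, model };
  if (dim !== undefined) body.output_dimension = dim;
  return body;
}

async function embedRaw(texts: string[], apiKey: string): Promise<number[][]> {
  const res = await fetch("https://api.voyageai.com/v1/embeddings", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify(buildEmbedBody(texts, "voyage-3.5", 2048)),
  });
  if (!res.ok) throw new Error(`Voyage API error: ${res.status}`);
  const json = await res.json();
  // Response objects are returned in the same order as the inputs.
  return json.data.map((d: { embedding: number[] }) => d.embedding);
}
```

You can then check `result[0].length` to verify the dimension actually returned.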
too many concurrent streams
Running into a "too many concurrent streams" exception. I take it that is different from exceeding requests per minute or maximum tokens. How many concurrent streams are allowed?
Does Voyage's multimodal embedding API support the Sinhala language?
I'm working on a large project with high accuracy requirements, so I want to know whether the Voyage multimodal embedding API will perform well on Sinhala.
Is rerank-3 coming soon?
There's lots of new stuff in the third generation of embeddings, like voyage-3-large, the 3.5 series, and contextual embeddings. Is a rerank-3 on the way too?
Multimodal Reranker
Is a multimodal reranker in the works?
I'm getting different embeddings for the SAME encoding
Here is my code
Is it normal for API responses to take 3-5 seconds?
We're currently using OpenAI and are used to response times of a couple hundred milliseconds. Benchmarking Voyage, recall is definitely improved, but does it have to come at the cost of 10x slower embeddings? Am I doing something wrong?
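One common way to reduce the effective per-text cost of a slower endpoint is to batch: the embeddings endpoint accepts a list of inputs, so per-request latency is amortized across many texts, and independent batches can be dispatched in parallel. A sketch, assuming a batch size of 128 (the API's real per-request limits on item count and total tokens should be checked in the docs):

```typescript
// Amortize per-request latency by embedding many texts per call.
// BATCH_SIZE is an assumed value; tune it to the API's documented limits.
const BATCH_SIZE = 128;

function chunk<T>(items: T[], size: number): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size));
  }
  return out;
}

async function embedAll(
  texts: string[],
  embedBatch: (batch: string[]) => Promise<number[][]>
): Promise<number[][]> {
  // Dispatch all batches in parallel; each resolves to embeddings in input order.
  const results = await Promise.all(chunk(texts, BATCH_SIZE).map(embedBatch));
  return results.flat();
}
```

For bulk indexing this usually matters more than single-request latency; for latency-sensitive single queries it won't help, and that case is worth raising with support.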
Checking token count on typescript
I see a reference to count_tokens in the Python library. What about TypeScript? Or just a REST API would be fine.
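One option that needs no extra tooling: the embeddings response itself reports how many tokens the API counted, in `usage.total_tokens`. This sketch assumes that response shape and the standard `https://api.voyageai.com/v1/embeddings` endpoint; note it spends a real embedding call just to count, so it's best suited to spot-checking rather than pre-flight validation of every document.

```typescript
// Shape of the relevant parts of an embeddings response
// (assumed: { data: [...], usage: { total_tokens } }).
interface EmbedResponse {
  data: { embedding: number[] }[];
  usage: { total_tokens: number };
}

function totalTokens(res: EmbedResponse): number {
  return res.usage.total_tokens;
}

async function countTokensViaApi(texts: string[], apiKey: string): Promise<number> {
  const res = await fetch("https://api.voyageai.com/v1/embeddings", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify({ input: texts, model: "voyage-3.5" }),
  });
  if (!res.ok) throw new Error(`Voyage API error: ${res.status}`);
  return totalTokens((await res.json()) as EmbedResponse);
}
```

For offline counting, the Python client's count_tokens works locally with a published tokenizer; whether an equivalent tokenizer is loadable from JavaScript is worth confirming in Voyage's docs before relying on it.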