Why do Voyage embeddings have superior quality?

Embedding models, much like generative models, rely on powerful neural network architectures (often transformer-based) to capture and compress semantic context. And, much like generative models, they're incredibly hard to train. We are a team of leading AI researchers with 5+ years of experience training embedding models, and we get every component right, from model architecture and data collection to the choice of loss functions and optimizers. Please see our blog post for more details.


What embedding models are available, and which one should I use?

Currently, we offer voyage-large-2 and voyage-2, our powerful generalist embedding models; voyage-code-2, our most advanced embedding model optimized for code retrieval; and voyage-law-2, our embedding model customized for the legal domain.

  • voyage-large-2 is recommended for generalist tasks that involve complicated documents or high quality requirements.
  • voyage-2 is recommended for other generalist tasks, especially those with higher throughput demands.
  • voyage-code-2 is recommended for code-related tasks.
  • voyage-law-2 is recommended for retrieval tasks in the legal domain.

Other embedding models, such as voyage-finance-2 and voyage-multilingual-2, are coming in Q2 2024.

Which similarity function should I use?

You can use Voyage embeddings with dot-product similarity, cosine similarity, or Euclidean distance. An explanation of embedding similarity can be found here.

Voyage AI embeddings are normalized to length 1, which means that:

  • Cosine similarity is equivalent to dot-product similarity, and the latter can be computed more quickly.
  • Cosine similarity and Euclidean distance produce identical rankings.
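The two points above can be checked numerically. A minimal sketch with NumPy, using two made-up unit-length vectors standing in for real embeddings: on unit vectors, the dot product equals cosine similarity, and squared Euclidean distance is a monotone function of it (d² = 2 − 2·cos), so all three orderings agree.

```python
import numpy as np

# Two hypothetical unit-length vectors standing in for Voyage embeddings
# (which are normalized to length 1).
a = np.array([0.6, 0.8])
b = np.array([0.8, 0.6])

dot = float(a @ b)                                               # dot-product similarity
cosine = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))  # cosine similarity
euclidean = float(np.linalg.norm(a - b))                         # Euclidean distance

# On unit vectors: cosine == dot, and d^2 == 2 - 2*cos.
print(dot, cosine, euclidean)
```

In practice this means you can rank results by a plain dot product and skip the normalization step entirely.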

What is the relationship between characters, words, and tokens?

Please see this page.


How do I get the Voyage API key?

Upon creating an account, we instantly generate an API key for you. Once signed in, you can access your API key by clicking the "Create new API key" button in the dashboard.
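A common pattern is to keep the key out of source code and load it from an environment variable at runtime. The sketch below only assembles the request headers and payload; the `VOYAGE_API_KEY` variable name, the bearer-token header scheme, and the payload fields are assumptions modeled on typical embeddings APIs, not confirmed documentation.

```python
import os
import json

# Hypothetical sketch: read the API key from an environment variable
# rather than hard-coding it (the variable name is an assumption).
API_KEY = os.environ.get("VOYAGE_API_KEY", "your-api-key")

headers = {
    "Authorization": f"Bearer {API_KEY}",  # assumed bearer-token scheme
    "Content-Type": "application/json",
}
payload = {
    "input": ["Sample text to embed"],  # assumed field names
    "model": "voyage-2",
}

# A real call would POST this JSON body to the Voyage embeddings endpoint.
print(json.dumps(payload))
```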

What are the rate limits for the Voyage API?

Please see the rate limit guide.

How can I retrieve nearest text quickly if I have a large corpus?

To efficiently retrieve the nearest texts from a sizable corpus, you can use a vector database. Here are some common choices:

  • Pinecone, a fully managed vector database
  • Zilliz, a vector database for enterprise
  • Chroma, an open-source embeddings store
  • Elasticsearch, a popular search/analytics engine and vector database
  • Milvus, a vector database built for scalable similarity search
  • Qdrant, a vector search engine
  • Weaviate, an open source, AI-native vector database
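Before reaching for a vector database, it can help to see what they optimize: brute-force search scores the query against every document, which is exact but scales linearly with corpus size. A minimal baseline sketch with NumPy, using random unit vectors as stand-ins for real embeddings (the `top_k` helper is hypothetical, not part of any Voyage library):

```python
import numpy as np

def top_k(query: np.ndarray, corpus: np.ndarray, k: int = 3) -> np.ndarray:
    """Return indices of the k most similar corpus rows by dot product.

    Assumes embeddings are unit-normalized (as Voyage embeddings are),
    so the dot product equals cosine similarity.
    """
    scores = corpus @ query          # one similarity score per document
    return np.argsort(-scores)[:k]   # indices of the k highest scores

# Toy corpus of 1000 random unit vectors standing in for real embeddings.
rng = np.random.default_rng(0)
corpus = rng.normal(size=(1000, 64))
corpus /= np.linalg.norm(corpus, axis=1, keepdims=True)

query = corpus[42]  # query identical to document 42, so it must rank first
print(top_k(query, corpus, k=3))
```

Vector databases like those listed above replace this exact linear scan with approximate nearest-neighbor indexes, trading a little recall for sublinear query time on large corpora.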


When will I receive the bill?

The first 50 million tokens are free for every account, and subsequent usage is priced on a per-token basis.

You can add payment methods to your account in the dashboard. We bill monthly; you can expect a credit card charge around the 2nd of each month for the previous month's usage.


Is fine-tuning available?

Currently, we offer fine-tuned embeddings through subscription. Please email Tengyu Ma (CEO) at [email protected] if you are interested.

How to contact us?

Please email us at [email protected] for inquiries and customer support.

How to get updates from Voyage?

Follow us on Twitter and/or LinkedIn for more updates!

To subscribe to our newsletter, feel free to send us an email at [email protected].