Cohere Embed Multilingual V3
Review performance benchmarks for the cohere.embed-multilingual-v3.0
(Cohere Embed Multilingual V3) model hosted on one Embed Cohere unit of a dedicated AI cluster in OCI
Generative AI.
- See the available regions for this model.
- Learn about matching base models to their dedicated AI clusters.
- Review the metrics.
Embeddings
This scenario applies only to the embedding models. This scenario mimics embedding generation as part of the data ingestion pipeline of a vector database. In this scenario, all requests are the same size: 96 documents, each with 512 tokens. An example would be a collection of large PDF files, each file with 30,000+ words, that a user wants to ingest into a vector database.
Concurrency | Request-level Latency (seconds) | Request-level Throughput (requests per minute, RPM) |
---|---|---|
1 | 2.25 | 24 |
8 | 4.33 | 120 |
32 | 14.94 | 144 |
128 | 49.21 | 198 |
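The request shape used in this scenario can be sketched as a simple batching step in an ingestion pipeline. This is an illustrative sketch, not OCI SDK code: `batch_requests` and the chunk list are hypothetical names, and the word-to-token ratio is an assumption.

```python
# Illustrative batching sketch matching the benchmark's request shape:
# up to 96 document chunks of 512 tokens per embedding request.
# These names are hypothetical, not part of any OCI SDK.

CHUNKS_PER_REQUEST = 96  # documents per request in the benchmark
TOKENS_PER_CHUNK = 512   # tokens per document

def batch_requests(chunks, batch_size=CHUNKS_PER_REQUEST):
    """Group pre-chunked text into embedding requests of at most batch_size chunks."""
    return [chunks[i:i + batch_size] for i in range(0, len(chunks), batch_size)]

# A 30,000-word PDF at roughly 0.75 words per token (an assumption)
# is about 40,000 tokens, i.e. around 79 chunks of 512 tokens.
chunks = [f"chunk-{i}" for i in range(200)]
requests = batch_requests(chunks)
print(len(requests), [len(r) for r in requests])
# 200 chunks -> 3 requests of sizes 96, 96, and 8
```

Each full request in this scenario therefore carries 96 × 512 = 49,152 tokens of input.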
Lighter Embeddings
This scenario applies only to the embedding models. This lighter embeddings scenario is similar to the embeddings scenario, except that the size of each request is reduced to 16 documents, each with 512 tokens. This scenario suits smaller files with fewer words.
Concurrency | Request-level Latency (seconds) | Request-level Throughput (requests per minute, RPM) |
---|---|---|
1 | 1.28 | 42 |
8 | 1.38 | 288 |
32 | 3.44 | 497 |
128 | 11.94 | 702 |
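Comparing the two scenarios, the per-request token load differs by a fixed factor, which is consistent with the lighter scenario's lower latency and higher throughput at every concurrency level. The arithmetic below only restates the request sizes from the two scenarios above:

```python
# Per-request input size in each scenario (documents x tokens per document).
heavy_tokens = 96 * 512  # embeddings scenario: 49,152 tokens per request
light_tokens = 16 * 512  # lighter embeddings scenario: 8,192 tokens per request

print(heavy_tokens, light_tokens, heavy_tokens // light_tokens)
# 49152 8192 6 -- each lighter request carries one sixth of the token load
```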