Cohere Command R+ (Deprecated)
The Command R+ model is optimized for complex tasks, offering advanced language understanding, higher capacity, and more nuanced responses than cohere.command-r-16k. It's also ideal for question-answering, sentiment analysis, and information retrieval.
Available in These Regions
- Brazil East (Sao Paulo)
- Germany Central (Frankfurt)
- UK South (London)
- US Midwest (Chicago)
Key Features
- For dedicated inferencing, create a dedicated AI cluster and endpoint, and host the model on the cluster, as sketched after this list.
- Maximum prompt + response length: 16,000 tokens for each run.
- For on-demand inferencing, the response length is capped at 4,000 tokens for each run.
- Optimized for complex tasks, offering advanced language understanding, higher capacity, and more nuanced responses than cohere.command-r-16k. Also ideal for question-answering, sentiment analysis, and information retrieval.
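For the dedicated option in the first item, the following is a minimal sketch using the OCI Python SDK (the oci package). The OCIDs are placeholders, and the LARGE_COHERE unit shape and the unit count of 2 are assumptions inferred from the cluster limit note later on this page, so confirm them against the cluster table before running.

```python
import oci

# Load credentials from the default ~/.oci/config profile.
config = oci.config.from_file()
client = oci.generative_ai.GenerativeAiClient(config)

# Step 1: create a hosting cluster for the model.
cluster = client.create_dedicated_ai_cluster(
    oci.generative_ai.models.CreateDedicatedAiClusterDetails(
        compartment_id="ocid1.compartment.oc1..<placeholder>",
        type="HOSTING",
        unit_shape="LARGE_COHERE",  # assumption inferred from the limit name
        unit_count=2,               # assumption; matches the limit increase of 2
    )
).data

# Step 2: after the cluster reaches the ACTIVE state, create an endpoint
# that hosts the base model on the cluster.
endpoint = client.create_endpoint(
    oci.generative_ai.models.CreateEndpointDetails(
        compartment_id="ocid1.compartment.oc1..<placeholder>",
        model_id="ocid1.generativeaimodel.oc1..<placeholder>",  # base model OCID
        dedicated_ai_cluster_id=cluster.id,
    )
).data
print(endpoint.id)  # reference this endpoint OCID for dedicated inferencing
```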
Command R Compared to R+
- Model Size and Performance: Command R is a smaller-scale language model than Command R+. While Command R offers high-quality responses, the responses might not have the same level of sophistication and depth as the Command R+ responses. Command R+ is a larger model, resulting in enhanced performance and a more sophisticated understanding.
- Use Cases: Command R is suited for various applications, including text generation, summarization, translation, and text-based classification. It's an ideal choice for building conversational AI agents and chat-based applications. Command R+, on the other hand, is designed for more complex language tasks that require deeper understanding and nuance, such as text generation, question-answering, sentiment analysis, and information retrieval.
- Capacity and Scalability: Command R can handle a moderate number of concurrent users compared to Command R+. Command R+, however, is designed to handle a higher volume of requests and support more complex use cases, which might result in higher prices because of its increased capacity and performance.
In summary, Command R is an excellent choice for those looking for a more affordable and flexible option for general language tasks. On the other hand, Command R+ is designed for power users who require advanced language understanding, higher capacity, and more nuanced responses. The choice between the two would depend on the specific requirements and budget of your application.
Dedicated AI Cluster for the Model
In the preceding region list, models in regions that aren't marked with (dedicated AI cluster only) have both on-demand and dedicated AI cluster options. For the on-demand option, you don't need clusters and you can reach the model in the Console playground or through the API.
To reach a model through a dedicated AI cluster in any listed region, you must create an endpoint for that model on a dedicated AI cluster. For the cluster unit size that matches this model, see the following table.
Base Model | Fine-Tuning Cluster | Hosting Cluster | Pricing Page Information | Request Cluster Limit Increase |
---|---|---|---|---|
cohere.command-r-plus | Not available for fine-tuning | Unit size: Large Cohere | | dedicated-unit-large-cohere-count: 2 |

- If you don't have enough cluster limits in your tenancy for hosting the Cohere Command R+ model (deprecated) on a dedicated AI cluster, request the limit dedicated-unit-large-cohere-count to increase by 2.
- Review the Cohere Command R+ cluster performance benchmarks for different use cases.
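As a sketch of how the two options differ in an API call, the following assumes the OCI Python SDK; the on-demand mode references the model by name, while the dedicated mode references the endpoint you created on your cluster. The OCIDs are placeholders.

```python
import oci

# Load credentials from the default ~/.oci/config profile.
config = oci.config.from_file()
client = oci.generative_ai_inference.GenerativeAiInferenceClient(config)

# On-demand option: reference the base model directly, no cluster needed.
on_demand = oci.generative_ai_inference.models.OnDemandServingMode(
    model_id="cohere.command-r-plus"
)

# Dedicated option: reference the endpoint created on your dedicated AI cluster.
dedicated = oci.generative_ai_inference.models.DedicatedServingMode(
    endpoint_id="ocid1.generativeaiendpoint.oc1..<placeholder>"
)

details = oci.generative_ai_inference.models.ChatDetails(
    compartment_id="ocid1.compartment.oc1..<placeholder>",
    serving_mode=dedicated,  # or on_demand
    chat_request=oci.generative_ai_inference.models.CohereChatRequest(
        message="List three use cases for information retrieval."
    ),
)
response = client.chat(details)
print(response.data.chat_response.text)
```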
Release and Retirement Dates
Model | Release Date | On-Demand Retirement Date | Dedicated Mode Retirement Date |
---|---|---|---|
cohere.command-r-plus | 2024-06-18 | 2025-01-16 | 2025-08-07 |
Model Parameters
To change the model responses, you can change the values of the following parameters in the playground or the API. A minimal request that sets all of these parameters follows the list.
- Maximum output tokens: The maximum number of tokens that you want the model to generate for each response. Estimate four characters per token. Because you're prompting a chat model, the response depends on the prompt, and each response doesn't necessarily use up the maximum allocated tokens.
- Preamble override: An initial context or guiding message for a chat model. When you don't give a preamble to a chat model, the default preamble for that model is used. You can assign a preamble in the Preamble override parameter for these models. The default preamble for the Cohere family is:

  You are Command. You are an extremely capable large language model built by Cohere. You are given instructions programmatically via an API that you follow to the best of your ability.

  Overriding the default preamble is optional. When specified, the preamble override replaces the default Cohere preamble. When adding a preamble, for best results, give the model context, instructions, and a conversation style.

  Tip: For chat models without the preamble override parameter, you can include a preamble in the chat conversation and directly ask the model to answer in a certain way.
- Safety mode: Adds a safety instruction for the model to use when generating responses. Options are:
  - Contextual: (Default) Puts fewer constraints on the output. It maintains core protections by aiming to reject harmful or illegal suggestions, but it allows profanity and some toxic content, sexually explicit and violent content, and content that contains medical, financial, or legal information. Contextual mode is suited for entertainment, creative, or academic use.
  - Strict: Aims to avoid sensitive topics, such as violent or sexual acts and profanity. This mode aims to provide a safer experience by prohibiting responses or recommendations that it finds inappropriate. Strict mode is suited for corporate use, such as for corporate communications and customer service.
  - Off: No safety mode is applied.
- Temperature: The level of randomness used to generate the output text.

  Tip: Start with the temperature set to 0 or less than one, and increase the temperature as you regenerate the prompts for a more creative output. High temperatures can introduce hallucinations and factually incorrect information.
- Top p: A sampling method that controls the cumulative probability of the top tokens to consider for the next token. Assign p a decimal number between 0 and 1 for the probability. For example, enter 0.75 for the top 75 percent to be considered. Set p to 1 to consider all tokens.
- Top k: A sampling method in which the model chooses the next token randomly from the top k most likely tokens. A high value for k generates more random output, which makes the output text sound more natural. The default value for k is 0 for Cohere Command models and -1 for Meta Llama models, which means that the model considers all tokens and doesn't use this method.
- Frequency penalty: A penalty that's assigned to a token when that token appears frequently. High penalties encourage fewer repeated tokens and produce a more random output. For the Meta Llama family models, this penalty can be positive or negative. Positive numbers encourage the model to use new tokens and negative numbers encourage it to repeat tokens. Set to 0 to disable.
- Presence penalty: A penalty that's assigned to each token when it appears in the output, to encourage generating outputs with tokens that haven't been used.
- Seed: A parameter that makes a best effort to sample tokens deterministically. When this parameter is assigned a value, the large language model aims to return the same result for repeated requests when you assign the same seed and parameters for the requests. Allowed values are integers, and assigning a large or a small seed value doesn't affect the result. Assigning a number for the seed parameter is similar to tagging the request with a number: the large language model aims to generate the same set of tokens for the same integer in consecutive requests. This feature is especially useful for debugging and testing. The seed parameter has no maximum value for the API, and in the Console, its maximum value is 9999. Leaving the seed value blank in the Console, or null in the API, disables this feature.

  Warning: The seed parameter might not produce the same result in the long run, because model updates in the OCI Generative AI service might invalidate the seed.
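As referenced before the list, here's a minimal sketch that sets each of these parameters in one request, assuming the OCI Python SDK's CohereChatRequest. The field names follow the SDK's snake_case convention, the safety_mode field requires a recent SDK version, and the OCIDs and parameter values are placeholders to adjust for your use case.

```python
import oci

# Load credentials from the default ~/.oci/config profile.
config = oci.config.from_file()
client = oci.generative_ai_inference.GenerativeAiInferenceClient(config)

chat_request = oci.generative_ai_inference.models.CohereChatRequest(
    message="Draft a two-sentence status update for the launch.",
    max_tokens=600,        # cap on generated tokens, roughly 4 characters each
    preamble_override=(    # replaces the default Cohere preamble
        "You are a concise assistant for corporate communications."
    ),
    safety_mode="STRICT",  # CONTEXTUAL (default), STRICT, or OFF
    temperature=0.3,       # low randomness; raise for more creative output
    top_p=0.75,            # sample from the top 75% cumulative probability
    top_k=0,               # 0 disables top-k sampling for Cohere models
    frequency_penalty=0.2, # discourage frequently repeated tokens
    presence_penalty=0.0,  # no extra penalty for tokens already present
    seed=42,               # best-effort reproducibility across requests
)

details = oci.generative_ai_inference.models.ChatDetails(
    compartment_id="ocid1.compartment.oc1..<placeholder>",
    serving_mode=oci.generative_ai_inference.models.OnDemandServingMode(
        model_id="cohere.command-r-plus"
    ),
    chat_request=chat_request,
)
print(client.chat(details).data.chat_response.text)
```

Repeating this request with the same seed and parameters aims to return the same output, subject to the warning above about model updates invalidating the seed.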