Cohere Command (52B)
The cohere.command model is deprecated.
The cohere.command model is now retired for the on-demand serving mode and deprecated for the dedicated serving mode. If you're hosting cohere.command on a dedicated AI cluster (the dedicated serving mode), you can continue to use the hosted model replica with the summarization and generation APIs and in the playground until the cohere.command model retires for the dedicated serving mode. When hosted on a dedicated AI cluster, this model is available only in US Midwest (Chicago). See Retiring the Models for retirement dates and definitions. We recommend that you use the chat models instead, which offer the same summarization and text generation capabilities, including control over summary length and style.
Available in These Regions
- US Midwest (Chicago)
Key Features
- Model has 52 billion parameters.
- User prompt and response can be up to 4,096 tokens for each run.
- You can fine-tune this model with your dataset.
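Because the combined prompt and response are capped at 4,096 tokens per run, it can help to roughly budget token counts before sending a request. The following minimal Python sketch applies the rule of thumb used in the parameter descriptions below (about four characters per token); the function name and the budgeting logic are illustrative, not part of any SDK.

```python
# Rough token estimate using the "about four characters per token" rule of
# thumb from this page. Illustrative only; actual tokenization differs.
MAX_TOKENS_PER_RUN = 4096  # combined prompt + response limit for cohere.command

def estimate_tokens(text: str) -> int:
    """Approximate the token count of a text (about 4 characters per token)."""
    return max(1, len(text) // 4)

prompt = "Summarize the quarterly sales report in three bullet points."
prompt_tokens = estimate_tokens(prompt)

# Leave room for the response: prompt tokens plus maximum output tokens
# must stay within the 4,096-token limit for a single run.
output_budget = MAX_TOKENS_PER_RUN - prompt_tokens
print(f"~{prompt_tokens} prompt tokens, ~{output_budget} tokens left for output")
```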
Dedicated AI Cluster for the Model
In the preceding region list, models in regions that aren't marked with (dedicated AI cluster only) have both on-demand and dedicated AI cluster options. For the on-demand option, you don't need clusters and you can reach the model in the Console playground or through the API.
To reach a model through a dedicated AI cluster in any listed region, you must create an endpoint for that model on a dedicated AI cluster. For the cluster unit size that matches this model, see the following table.
Base Model | Fine-Tuning Cluster | Hosting Cluster | Pricing Page Information | Request Cluster Limit Increase
---|---|---|---|---
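To serve this model in the dedicated mode, you create an endpoint on a dedicated AI cluster and then target that endpoint in your requests. The following minimal Python sketch assumes the OCI Python SDK's generative_ai control-plane client; all OCIDs and the display name are placeholders.

```python
# Minimal sketch: creating an endpoint for cohere.command on an existing
# dedicated AI cluster with the OCI Python SDK (assumes the oci package and
# a configured ~/.oci/config profile; all OCIDs below are placeholders).
import oci

config = oci.config.from_file()
client = oci.generative_ai.GenerativeAiClient(config)

details = oci.generative_ai.models.CreateEndpointDetails(
    compartment_id="ocid1.compartment.oc1..example",
    model_id="ocid1.generativeaimodel.oc1..example",  # cohere.command base model OCID
    dedicated_ai_cluster_id="ocid1.generativeaidedicatedaicluster.oc1..example",
    display_name="cohere-command-endpoint",
)

response = client.create_endpoint(details)
# Endpoint creation is asynchronous; track the returned work request before
# sending traffic to the new endpoint.
print(response.data)
```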
Release and Retirement Dates
Model | Release Date | On-Demand Retirement Date | Dedicated Mode Retirement Date
---|---|---|---
cohere.command | 2024-02-07 | 2024-10-02 | 2025-08-07
Generation Model Parameters
When using the generation models, you can vary the output by changing the following parameters. The sketch after this list shows how these parameters map to a generation request.
- Maximum output tokens: The maximum number of tokens that you want the model to generate for each response. Estimate four characters per token.
- Temperature: The level of randomness used to generate the output text.
  Tip: Start with the temperature set to 0 or a value less than 1, and increase the temperature as you regenerate the prompts to get more creative output. High temperatures can introduce hallucinations and factually incorrect information.
- Top k: A sampling method in which the model chooses the next token randomly from the top k most likely tokens. A higher value for k generates more random output, which makes the output text sound more natural. The default value for k is 0 for command models and -1 for Llama models, which means that the models consider all tokens and don't use this method.
- Top p: A sampling method that controls the cumulative probability of the top tokens to consider for the next token. Assign p a decimal number between 0 and 1 for the probability. For example, enter 0.75 for the top 75 percent to be considered. Set p to 1 to consider all tokens.
- Stop sequences: A sequence of characters, such as a word, a phrase, a newline (\n), or a period, that tells the model when to stop the generated output. If you have more than one stop sequence, the model stops when it reaches any of those sequences.
- Frequency penalty: A penalty that's assigned to a token when that token appears frequently. High penalties encourage fewer repeated tokens and produce a more random output.
- Presence penalty: A penalty that's assigned to each token when it appears in the output, to encourage generating outputs with tokens that haven't been used.
- Show likelihoods: Every time a new token is to be generated, a number between -15 and 0 is assigned to all tokens, where tokens with higher numbers are more likely to follow the current token. For example, it's more likely that the word favorite is followed by the word food or book rather than the word zebra. This parameter is available only for the cohere models.
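As a concrete illustration, the following minimal Python sketch sets these parameters on a generation request through the OCI Python SDK. It's a sketch under assumptions: the service endpoint URL, OCIDs, and prompt are placeholders, and it assumes an SDK version that still includes the generation API for this deprecated model.

```python
# Minimal sketch: a generation request that sets the parameters above, using
# the OCI Python SDK (assumes a configured ~/.oci/config profile; OCIDs and
# the endpoint URL are placeholders).
import oci

config = oci.config.from_file()
client = oci.generative_ai_inference.GenerativeAiInferenceClient(
    config,
    service_endpoint="https://inference.generativeai.us-chicago-1.oci.oraclecloud.com",
)

request = oci.generative_ai_inference.models.CohereLlmInferenceRequest(
    prompt="Write a two-sentence product description for a solar lantern.",
    max_tokens=200,            # maximum output tokens
    temperature=0.2,           # low randomness for more factual output
    top_k=0,                   # 0 disables top-k sampling for command models
    top_p=0.75,                # consider the top 75 percent probability mass
    frequency_penalty=0.0,
    presence_penalty=0.0,
    stop_sequences=["END"],
    return_likelihoods="GENERATION",  # surfaces the "show likelihoods" values
)

details = oci.generative_ai_inference.models.GenerateTextDetails(
    compartment_id="ocid1.compartment.oc1..example",
    serving_mode=oci.generative_ai_inference.models.DedicatedServingMode(
        endpoint_id="ocid1.generativeaiendpoint.oc1..example",
    ),
    inference_request=request,
)

print(client.generate_text(details).data)
```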
Summarization Model Parameters
When using a hosted summarization model in the playground, you can vary the output by changing the following parameters. The sketch after this list shows how these parameters map to a summarization API request.
- Length: The approximate length of the summary. You can select short, medium, or long. Short summaries are roughly up to two sentences long, medium summaries are between three and five sentences, and long summaries might have six or more sentences. For the Auto value, the model chooses a length based on the input size.
- Format: Whether to display the summary in a free-form paragraph or in bullet points. For the Auto value, the model chooses the best format based on the input text.
- Extractiveness: How much to reuse the input in the summary. Summaries with high extractiveness tend to use sentences verbatim, and summaries with low extractiveness tend to paraphrase.
- Temperature: The level of randomness used to generate the output text.
  Tip: To summarize a text, start with the temperature set to 0. If you don't require random results, we recommend a temperature value of 0.2. Use a higher value if, for example, you plan to choose from several generated summaries afterward. However, don't use a high temperature for summarization because a high temperature encourages the model to produce creative text, which might also include hallucinations and factually incorrect information.
- Additional command: Other summarizing options such as style or focus. Write one or more additional commands in natural language as instructions to the model, for example, "focus on dates", "write in a conversational style", or "end the summary with END SUMMARY".
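The following minimal Python sketch sets these summarization parameters through the OCI Python SDK's summarize_text API. It's a sketch under assumptions: the service endpoint URL, OCIDs, and input text are placeholders, and it assumes an SDK version that still includes this API for the deprecated model.

```python
# Minimal sketch: a summarization request that sets the parameters above,
# using the OCI Python SDK (assumes a configured ~/.oci/config profile;
# OCIDs and the endpoint URL are placeholders).
import oci

config = oci.config.from_file()
client = oci.generative_ai_inference.GenerativeAiInferenceClient(
    config,
    service_endpoint="https://inference.generativeai.us-chicago-1.oci.oraclecloud.com",
)

details = oci.generative_ai_inference.models.SummarizeTextDetails(
    input="<long text to summarize>",
    compartment_id="ocid1.compartment.oc1..example",
    serving_mode=oci.generative_ai_inference.models.DedicatedServingMode(
        endpoint_id="ocid1.generativeaiendpoint.oc1..example",
    ),
    length="MEDIUM",                    # short, medium, long, or auto
    format="BULLETS",                   # paragraph, bullets, or auto
    extractiveness="LOW",               # paraphrase rather than quote verbatim
    temperature=0.2,                    # recommended starting value for summaries
    additional_command="focus on dates",
)

print(client.summarize_text(details).data.summary)
```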