xAI Grok 4 (New)

The xai.grok-4 model outperforms its predecessor, Grok 3, and excels at enterprise use cases such as data extraction, coding, and summarizing text. The model has deep domain knowledge in finance, healthcare, law, and science.

Available in This Region

  • US Midwest (Chicago) (on-demand only)
Important

Cross-Region Calls

When a user enters an inference request to this model in a listed region such as Chicago, the Generative AI service in Chicago sends the request to this model hosted in Salt Lake City and returns the model's response to Chicago, where the request originated. See Pretrained Models with Cross-Region Calls.

Key Features

  • Model name in OCI Generative AI: xai.grok-4
  • Available On-Demand: Access this model on-demand, through the Console playground or the API. A sample API call follows this list.
  • Multimodal support: Input text and images and get a text output.
  • Knowledge: Has deep domain knowledge in finance, healthcare, law, and science.
  • Context Length: 128,000 tokens (maximum prompt + response length is 128,000 tokens for each run). In the playground, the response length is capped at 16,000 tokens for each run.
  • Excels at These Use Cases: Data extraction, coding, and summarizing text
  • Function Calling: Yes, through the API.
  • Structured Outputs: Yes.
  • Has Reasoning: Yes. For reasoning problems, increase the maximum output tokens. See Model Parameters.
  • Knowledge Cutoff: November 2024
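The following is a minimal sketch of an on-demand API call to this model, assuming the OCI Python SDK's generative_ai_inference client and its generic chat format; the service endpoint, compartment OCID, and prompt are placeholders.

```python
# Minimal sketch: an on-demand chat call to xai.grok-4 with the OCI Python SDK.
# The endpoint, compartment OCID, and prompt below are placeholders.
import oci

config = oci.config.from_file()  # reads ~/.oci/config

client = oci.generative_ai_inference.GenerativeAiInferenceClient(
    config,
    service_endpoint="https://inference.generativeai.us-chicago-1.oci.oraclecloud.com",
)

chat_details = oci.generative_ai_inference.models.ChatDetails(
    compartment_id="ocid1.compartment.oc1..example",  # placeholder OCID
    serving_mode=oci.generative_ai_inference.models.OnDemandServingMode(
        model_id="xai.grok-4"
    ),
    chat_request=oci.generative_ai_inference.models.GenericChatRequest(
        messages=[
            oci.generative_ai_inference.models.UserMessage(
                content=[
                    oci.generative_ai_inference.models.TextContent(
                        text="Summarize the key risks in this contract clause: ..."
                    )
                ]
            )
        ],
        max_tokens=600,
    ),
)

response = client.chat(chat_details)
print(response.data.chat_response.choices[0].message.content[0].text)
```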

Limits

Image Inputs
  • In the Console, input a .png or .jpg image of 5 MB or less.
  • Each image must convert to at least 512 and at most 1,792 tokens. For the API, input a base64-encoded image in each run that results in fewer than 1,792 tokens. For example, a 512 x 512 image converts to about 1,610 tokens. A sketch of the base64 flow follows this list.
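Here's a minimal sketch of that base64 flow, assuming the OCI Python SDK's ImageContent and ImageUrl models accept a base64 data URL; the file path and prompt are placeholders.

```python
# Minimal sketch: attaching a .png image to a chat message as a base64 data URL.
# Assumes the OCI Python SDK's ImageContent/ImageUrl models; the path is a placeholder.
import base64

import oci

with open("chart.png", "rb") as f:  # .png or .jpg, 5 MB or less
    encoded = base64.b64encode(f.read()).decode("utf-8")

message = oci.generative_ai_inference.models.UserMessage(
    content=[
        oci.generative_ai_inference.models.TextContent(
            text="Describe the trend shown in this chart."
        ),
        oci.generative_ai_inference.models.ImageContent(
            image_url=oci.generative_ai_inference.models.ImageUrl(
                url=f"data:image/png;base64,{encoded}"
            )
        ),
    ]
)
```

The message can then be passed in the messages list of a GenericChatRequest, as in the earlier chat sketch.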

On-Demand Mode

You can reach the pretrained foundational models in Generative AI through two modes: on-demand and dedicated. Here are the key features of the on-demand mode:
  • You pay as you go for each inference call when you use the models in the playground or when you call the models through the API.
  • Low barrier to start using Generative AI.
  • Great for experimenting, proofs of concept, and evaluating the models.
  • Available for the pretrained models in regions not listed as (dedicated AI cluster only).
Tip

To ensure reliable access to Generative AI models in the on-demand mode, implement a back-off strategy, which delays requests after a rejection. Without one, repeated rapid requests can lead to further rejections, increased latency, and potential temporary blocking of the client by the Generative AI service. A back-off strategy, such as exponential back-off, distributes requests more evenly, reduces load, and improves retry success, following industry best practices and enhancing the overall stability and performance of your integration with the service.
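For example, here's a minimal sketch of exponential back-off with jitter around the chat call, assuming the OCI Python SDK; the retry count and base delay are arbitrary choices.

```python
# Minimal sketch: exponential back-off with jitter. Retries only on HTTP 429
# (throttling); other errors are raised immediately.
import random
import time

import oci


def chat_with_backoff(client, chat_details, max_retries=5):
    for attempt in range(max_retries + 1):
        try:
            return client.chat(chat_details)
        except oci.exceptions.ServiceError as e:
            if e.status != 429 or attempt == max_retries:
                raise
            # Exponential delay (1 s, 2 s, 4 s, ...) plus random jitter so that
            # concurrent clients don't retry in lockstep.
            time.sleep(2 ** attempt + random.uniform(0, 1))
```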

Note

The Grok models are available only in the on-demand mode.

See the following table for this model's product name on the pricing page.

Model Name | OCI Model Name | Pricing Page Product Name
xAI Grok 4 | xai.grok-4 | xAI – Grok 4

Release Date

Model | General Availability Release Date | On-Demand Retirement Date | Dedicated Mode Retirement Date
xai.grok-4 | 2025-07-23 | Tentative | This model isn't available for the dedicated mode.
Important

For a list of all model timelines and retirement details, see Retiring the Models.

Model Parameters

To change the model's responses, you can change the values of the following parameters in the playground or the API.

Maximum output tokens

The maximum number of tokens that you want the model to generate for each response. Estimate four characters per token. Because you're prompting a chat model, the response depends on the prompt and each response doesn't necessarily use up the maximum allocated tokens. The maximum prompt + output length is 128,000 tokens for each run. In the playground, the maximum output tokens is capped at 16,000 tokens for each run.

Tip

For large inputs with difficult problems, set a high value for the maximum output tokens parameter. See Troubleshooting.
Temperature

The level of randomness used to generate the output text. Min: 0, Max: 2

Tip

Start with the temperature set to 0 or a value less than 1, and increase the temperature as you regenerate the prompts for more creative output. High temperatures can introduce hallucinations and factually incorrect information.
Top p

A sampling method that controls the cumulative probability of the top tokens to consider for the next token. Assign p a decimal number between 0 and 1 for the probability. For example, enter 0.75 for the top 75 percent to be considered. Set p to 1 to consider all tokens.
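Here's a minimal sketch of how these parameters map to a request, assuming the snake_case field names of the OCI Python SDK's GenericChatRequest; the values are examples, not recommendations.

```python
# Minimal sketch: setting the parameters described above on a generic chat request.
import oci

chat_request = oci.generative_ai_inference.models.GenericChatRequest(
    messages=[
        oci.generative_ai_inference.models.UserMessage(
            content=[
                oci.generative_ai_inference.models.TextContent(
                    text="Draft a two-sentence product summary."
                )
            ]
        )
    ],
    max_tokens=16000,  # playground cap; prompt + response must fit in 128,000 tokens
    temperature=0.2,   # 0 to 2; lower is more deterministic
    top_p=0.75,        # sample from the top 75 percent of cumulative probability
)
```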

Note

The xai.grok-4 model has reasoning, but doesn't support the reasoning_effort parameter used in the Grok 3 mini and Grok 3 mini fast models. If you specify the reasoning_effort parameter in the API for the xai.grok-4 model, you get an error response.

Troubleshooting

Issue: The Grok 4 model doesn't respond.

Cause: The Maximum output tokens parameter in the playground or the max_tokens parameter in the SDK is likely too low.

Action: Increase the maximum output tokens parameter.

Reason: For difficult problems that require reasoning and problem-solving, and for large, sophisticated inputs, the xai.grok-4 model tends to think at length and consumes many tokens. If the max_tokens parameter is too low, the model uses up the allocated tokens on reasoning and doesn't return a final response.