Meta Llama 3.3 (70B)
The meta.llama-3.3-70b-instruct model is available for on-demand inferencing, dedicated hosting, and fine-tuning, and delivers better performance than Llama 3.1 70B and Llama 3.2 90B for text tasks.
Regions for this Model
For supported regions, endpoint types (on-demand or dedicated AI clusters), and hosting (OCI Generative AI or external calls) for this model, see the Models by Region page. For details about the regions, see the Generative AI Regions page.
Access this Model
The API endpoints for all supported commercial, sovereign, and government regions are listed in the Management API and Inference API links. You can access each model only through its supported regions.
Key Features
- Model has 70 billion parameters.
- Accepts text-only inputs and produces text-only outputs.
- Uses the same prompt format as Llama 3.1 70B.
- Supports the same code interpreter as Llama 3.1 70B and retains the 128,000 token context length. (Maximum prompt + response length: 128,000 tokens for each run.)
- Compared to its Llama 3.1 70B predecessor, responds with improved reasoning, coding, math, and instruction-following. See the Llama 3.3 model card.
- Available for on-demand inferencing, dedicated hosting, and fine-tuning.
- For on-demand inferencing, the response length is capped at 4,000 tokens for each run.
- For the dedicated mode, the response length isn't capped and the context length is 128,000 tokens.
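The prompt and response share the 128,000-token context window, and on-demand mode additionally caps each response at 4,000 tokens. A minimal sketch of budgeting a request under these limits, using the rough four-characters-per-token estimate described later on this page (the helper names are illustrative, not part of any SDK):

```python
# Sketch: budgeting prompt + response tokens for Llama 3.3 (70B) on OCI.
# The constants come from the documented limits on this page.
CONTEXT_LENGTH = 128_000          # shared prompt + response budget per run
ON_DEMAND_MAX_RESPONSE = 4_000    # on-demand mode caps each response

def estimate_tokens(text: str) -> int:
    """Rough token estimate: about four characters per token."""
    return max(1, len(text) // 4)

def max_response_tokens(prompt: str, dedicated: bool = False) -> int:
    """Largest maximum-output-tokens value that still fits the context window."""
    remaining = CONTEXT_LENGTH - estimate_tokens(prompt)
    if dedicated:
        return max(0, remaining)  # dedicated mode: no separate response cap
    return max(0, min(remaining, ON_DEMAND_MAX_RESPONSE))

print(max_response_tokens("Summarize this paragraph."))  # → 4000
```

Note that the character-based estimate is a heuristic; the service counts actual tokens, so leave headroom for long prompts.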
Meta Llama 3.3 Variants
The Meta Llama 3.3 (70B) model is offered in two variants: the standard meta.llama-3.3-70b-instruct and the optimized meta.llama-3.3-70b-instruct-fp8-dynamic (a dynamic FP8 version). The two variants are available in mostly the same regions, with a few exceptions, and availability varies by region and mode (on-demand or dedicated AI clusters). See Models by Region for the complete list and full details.
- Standard Variant: meta.llama-3.3-70b-instruct
- Performance: Provides full-precision performance.
- Fine-tuning: You can fine-tune this model with your dataset in commercial (OC1) regions. Fine-tuning isn't supported for the models in OC4 and OC19 regions.
- When to use: Best for general-purpose tasks that require high accuracy, such as complex reasoning, content generation, and any use case where fine-tuning is needed.
- Dynamic FP8 Variant: meta.llama-3.3-70b-instruct-fp8-dynamic
- Performance: Uses FP8 (8-bit floating point), a reduced-precision numerical format that represents floating-point numbers using 8 bits to speed up inference. Compared to 16-bit formats such as FP16, FP8 halves memory bandwidth requirements, which can increase computational throughput and reduce GPU power consumption.
- Efficiency: Optimized for efficiency, this variant offers faster inference with minimal accuracy loss for many tasks.
- Fine-tuning: Not available.
- When to use: Select this variant for high-volume, latency-sensitive scenarios such as real-time applications, large-scale serving, or cost-optimized inference where speed and efficiency matter more than fine-tuning or maximum precision. This variant is best for production environments focused on throughput rather than customization.
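The bandwidth claim behind the FP8 variant is easy to sanity-check with weight-memory arithmetic. A back-of-the-envelope sketch (weights only; it ignores KV cache, activations, and runtime overhead):

```python
# Back-of-the-envelope weight memory for a 70-billion-parameter model.
# Ignores KV cache, activations, and runtime overhead.
PARAMS = 70_000_000_000

def weight_gib(bytes_per_param: float) -> float:
    """Total weight memory in GiB for a given parameter width."""
    return PARAMS * bytes_per_param / 2**30

fp16 = weight_gib(2.0)  # 16-bit formats such as FP16: 2 bytes per parameter
fp8 = weight_gib(1.0)   # FP8: 1 byte per parameter
print(f"FP16 weights: {fp16:.0f} GiB, FP8 weights: {fp8:.0f} GiB")
# Halving the bytes per parameter halves the bytes moved per forward pass,
# which is where the memory-bandwidth and throughput gains come from.
```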
For API requests, always specify the exact model ID.
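In a chat request, the variant is selected by the model ID in the serving-mode section of the request body. A hedged sketch of such a payload (the field names follow the Generative AI chat API's on-demand serving-mode structure; the compartment OCID is a placeholder and the parameter values are examples, not recommendations):

```python
import json

# Sketch of an on-demand chat request body for the Generative AI
# inference API. The compartment OCID below is a placeholder.
def build_chat_body(model_id: str, prompt: str) -> dict:
    return {
        "compartmentId": "ocid1.compartment.oc1..example",
        "servingMode": {
            "servingType": "ON_DEMAND",
            "modelId": model_id,  # the exact model ID selects the variant
        },
        "chatRequest": {
            "apiFormat": "GENERIC",  # Llama models use the generic chat format
            "messages": [
                {"role": "USER", "content": [{"type": "TEXT", "text": prompt}]}
            ],
            "maxTokens": 600,
            "temperature": 0,
        },
    }

# Switching variants is just a model-ID change:
standard = build_chat_body("meta.llama-3.3-70b-instruct", "Hello")
fp8 = build_chat_body("meta.llama-3.3-70b-instruct-fp8-dynamic", "Hello")
print(json.dumps(standard["servingMode"], indent=2))
```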
On-Demand Mode
See the following table for this model's on-demand product name on the pricing page.
| Model Name | OCI Model Name | Pricing Page Product Name |
|---|---|---|
| Meta Llama 3.3 (70B) (Standard) | meta.llama-3.3-70b-instruct | Large Meta |
| Meta Llama 3.3 (70B) (Dynamic FP8) | meta.llama-3.3-70b-instruct-fp8-dynamic | Large Meta |
Learn about On-Demand Mode.
Dedicated AI Cluster for the Model
For models in on-demand mode, no clusters are required. Access them through the Console playground and API. For models available in the dedicated mode, use endpoints created on dedicated AI clusters. Learn about the Dedicated Mode.
The following table lists hardware unit sizes and service limits for dedicated AI clusters.
| Base Model | Fine-Tuning Cluster | Hosting Cluster | Pricing Page Information | Request Cluster Limit Increase |
|---|---|---|---|---|
| Meta Llama 3.3 (70B) (Standard): meta.llama-3.3-70b-instruct | Large Generic | Large Generic | See the pricing page. | dedicated-unit-llama2-70-count (increase by 2 for hosting; by 4 for fine-tuning) |
| Meta Llama 3.3 (70B) (Dynamic FP8): meta.llama-3.3-70b-instruct-fp8-dynamic | Not available for fine-tuning | For UAE East (Dubai): LARGE_GENERIC_V1. For other available regions: Large Generic | See the pricing page. | dedicated-unit-llama2-70-count (increase by 2 for hosting) |
- If you don't have enough cluster limits in the tenancy for hosting the Meta Llama 3.3 (70B) (standard or dynamic FP8) model on a dedicated AI cluster, request the dedicated-unit-llama2-70-count limit to increase by 2.
- For fine-tuning, request the dedicated-unit-llama2-70-count limit to increase by 4.
Endpoint Rules for Clusters
- A dedicated AI cluster can hold up to 50 endpoints.
- The endpoints on a cluster act as aliases that must all point either to the same base model or to the same version of a custom model; you can't mix both types on one cluster.
- Several endpoints for the same model make it easy to assign them to different users or purposes.
| Hosting Cluster Unit Size | Endpoint Rules |
|---|---|
| Large Generic for meta.llama-3.3-70b-instruct | A dedicated AI cluster can hold up to 50 endpoints. |
| Large Generic for meta.llama-3.3-70b-instruct-fp8-dynamic | A dedicated AI cluster can hold up to 50 endpoints. |
| LARGE_GENERIC_V1 for meta.llama-3.3-70b-instruct-fp8-dynamic (UAE East (Dubai) only) | A dedicated AI cluster can hold up to 50 endpoints. |
- To increase the call volume supported by a hosting cluster, increase its instance count by editing the dedicated AI cluster. See Updating a Dedicated AI Cluster.
- For more than 50 endpoints per cluster, request an increase for the endpoint-per-dedicated-unit-count limit. See Requesting a Service Limit Increase and Service Limits for Generative AI.
Cluster Performance Benchmarks
Review the Meta Llama 3.3 (70B) cluster performance benchmarks for different use cases.
OCI Release and Retirement Dates
For release and retirement dates and replacement model options, see the following pages based on the mode (on-demand or dedicated):
Model Parameters
To change the model responses, you can change the values of the following parameters in the playground or the API.
- Maximum output tokens: The maximum number of tokens that you want the model to generate for each response. Estimate four characters per token. Because you're prompting a chat model, the response depends on the prompt, and each response doesn't necessarily use up the maximum allocated tokens.
- Temperature: The level of randomness used to generate the output text.
  Tip: Start with the temperature set to 0, and increase it as you regenerate the prompts for more creative output. High temperatures can introduce hallucinations and factually incorrect information.
- Top p: A sampling method that controls the cumulative probability of the top tokens to consider for the next token. Assign p a decimal number between 0 and 1 for the probability. For example, enter 0.75 for the top 75 percent to be considered. Set p to 1 to consider all tokens.
- Top k: A sampling method in which the model chooses the next token randomly from the top k most likely tokens. A high value for k generates more random output, which makes the output text sound more natural. The default value for k is 0 for Cohere Command models and -1 for Meta Llama models, which means that the model considers all tokens and doesn't use this method.
- Frequency penalty: A penalty that's assigned to a token when that token appears frequently. High penalties encourage fewer repeated tokens and produce a more random output. For the Meta Llama family models, this penalty can be positive or negative. Positive numbers encourage the model to use new tokens and negative numbers encourage it to repeat tokens. Set to 0 to disable.
- Presence penalty: A penalty that's assigned to each token when it appears in the output, to encourage generating outputs with tokens that haven't been used.
- Seed: A parameter that makes a best effort to sample tokens deterministically. When this parameter is assigned a value, the large language model aims to return the same result for repeated requests when you assign the same seed and parameters for the requests. Allowed values are integers, and assigning a large or a small seed value doesn't affect the result. Assigning a number to the seed parameter is similar to tagging the request with a number: the large language model aims to generate the same set of tokens for the same integer in consecutive requests. This feature is especially useful for debugging and testing. The seed parameter has no maximum value for the API, and in the Console, its maximum value is 9999. Leaving the seed value blank in the Console, or null in the API, disables this feature.
Warning
The seed parameter might not produce the same result in the long run, because model updates in the OCI Generative AI service might invalidate the seed.
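The temperature, top p, and top k parameters above compose into a single next-token sampling step. The following self-contained sketch shows how they interact on a toy distribution; it illustrates the standard technique, not the service's actual implementation:

```python
import math
import random

def sample_next_token(logits, temperature=1.0, top_k=-1, top_p=1.0, rng=None):
    """Toy next-token sampler: temperature scaling, then top-k, then top-p."""
    rng = rng or random.Random()
    if temperature == 0:  # greedy decoding: always pick the most likely token
        return max(logits, key=logits.get)
    # Temperature scaling: T > 1 flattens the distribution, T < 1 sharpens it.
    scaled = {t: l / temperature for t, l in logits.items()}
    # Softmax (shifted by the max logit for numerical stability).
    z = max(scaled.values())
    probs = {t: math.exp(l - z) for t, l in scaled.items()}
    total = sum(probs.values())
    ranked = sorted(((t, p / total) for t, p in probs.items()),
                    key=lambda kv: -kv[1])
    if top_k > 0:            # top-k: keep only the k most likely tokens
        ranked = ranked[:top_k]
    kept, cum = [], 0.0
    for t, p in ranked:      # top-p: keep tokens until cumulative prob >= p
        kept.append((t, p))
        cum += p
        if cum >= top_p:
            break
    tokens, weights = zip(*kept)
    return rng.choices(tokens, weights=weights)[0]

logits = {"the": 3.0, "a": 2.0, "cat": 1.0, "zebra": -2.0}
print(sample_next_token(logits, temperature=0))  # → the
```

Setting top_k to 1 or temperature to 0 makes the choice deterministic; raising the temperature or widening top_p lets lower-probability tokens like "zebra" through.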