LLM cost estimation
Learn how Openlayer estimates the costs associated with your LLM calls
Openlayer automatically estimates the costs associated with your LLM calls when you use one of the streamlined approaches to trace your AI system.
This page provides information on how that estimation works for each LLM provider.
Providers
OpenAI
Openlayer maintains a version of the OpenAI pricing page to estimate the cost from the prompt and completion tokens of your LLM calls.
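The estimation itself is straightforward: multiply the prompt and completion token counts by the model's per-token input and output prices. A minimal sketch, assuming a hypothetical price table (the values below are illustrative placeholders, not Openlayer's actual pricing data):

```python
# Illustrative per-million-token prices in USD; not Openlayer's real table.
PRICE_PER_1M_TOKENS = {
    "gpt-4o": {"input": 2.50, "output": 10.00},  # example values only
}

def estimate_cost(model: str, prompt_tokens: int, completion_tokens: int) -> float:
    """Estimate the cost of one LLM call from its token counts."""
    prices = PRICE_PER_1M_TOKENS[model]
    return (
        prompt_tokens * prices["input"]
        + completion_tokens * prices["output"]
    ) / 1_000_000
```

For example, a call with 1,000 prompt tokens and 500 completion tokens at these example prices would be estimated at $0.0075.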
Azure OpenAI
The cost of Azure OpenAI calls depends on the underlying OpenAI model behind your model deployment.
If you follow the convention of using OpenAI model names as your model deployment names, with all periods removed, Openlayer will estimate the cost of your calls using the OpenAI pricing page. This means that if your model deployment name is gpt-35-turbo, Openlayer will use the cost for gpt-3.5-turbo from OpenAI.
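The mapping from a deployment name back to an OpenAI model name can be sketched as a lookup over known dot-less aliases. This is a simplified illustration; the alias table here is an assumption and Openlayer's actual matching logic may differ:

```python
# Known dot-less deployment-name aliases; illustrative, not exhaustive.
DEPLOYMENT_ALIASES = {
    "gpt-35-turbo": "gpt-3.5-turbo",
}

def deployment_to_openai_model(deployment_name: str) -> str:
    """Map an Azure deployment name to the OpenAI model name used for
    pricing, falling back to the deployment name itself if no alias is
    known (sketch; actual resolution may differ)."""
    return DEPLOYMENT_ALIASES.get(deployment_name, deployment_name)
```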
LangChain
Openlayer uses the pricing information from OpenAI, Anthropic, Vertex AI, and Mistral AI to estimate the cost from the prompt and completion tokens of your LLM calls.
Reach out if you use a different provider via LangChain, and we will include its price estimates as well.
Anthropic
Openlayer maintains a version of the Anthropic API pricing page to estimate the cost from the prompt and completion tokens of your LLM calls.
Mistral AI
Openlayer maintains a version of the Mistral AI pricing page to estimate the cost from the prompt and completion tokens of your LLM calls.
OpenAI Assistants API
Openlayer maintains a version of the OpenAI pricing page to estimate the cost from the prompt and completion tokens of your LLM calls.
For the Assistants API, this means that Openlayer estimates the generation cost but not the tool use cost.
Openlayer strives to maintain an up-to-date list of model prices. Reach out if you notice a cost estimate of $0 or an inaccurate estimate. Make sure to mention the LLM provider and model name used.