Input cost
Input cost estimates what you pay for the prompt text and context sent to the model.
Cost calculator
Estimate how much an AI prompt could cost based on prompt size, expected answer length, pricing profile, and usage volume.
Output cost estimates what you pay for the model’s generated answer, which may be priced separately.
PromptMeter combines estimated input and output tokens with your selected example pricing profile.
Usage volume fields help translate one prompt run into daily and monthly cost estimates.
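The combination described above can be sketched as a small function. The function name, parameter names, and the 30-day month are illustrative assumptions, not PromptMeter's actual code:

```python
def estimate_cost(input_tokens, output_tokens,
                  input_price_per_m, output_price_per_m,
                  runs_per_day=1, days_per_month=30):
    """Estimate prompt cost in dollars per run, per day, and per month.

    Prices are given per 1M tokens, matching how API providers
    typically quote them.
    """
    per_run = (input_tokens * input_price_per_m
               + output_tokens * output_price_per_m) / 1_000_000
    per_day = per_run * runs_per_day
    return {"run": per_run, "day": per_day, "month": per_day * days_per_month}
```

For instance, a 2,000-token prompt with a 500-token answer at $2/$10 per 1M tokens, run 100 times a day, comes to about $0.009 per run, $0.90 per day, and $27 per month.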
Pricing profiles are examples only. They are not official provider prices, and you should verify real prices before relying on estimates.
Calculator
Paste a prompt, choose an example pricing profile, and estimate cost per prompt run, per day, and per month.
Input tokens are what you send to the AI model. Output tokens are what the model returns. API providers often price them separately.
Prices are entered manually for now. For example, if your provider charges $2 per 1M input tokens and $10 per 1M output tokens, enter 2 and 10.
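As a worked instance of that example, with token counts assumed for illustration:

```python
input_price, output_price = 2.0, 10.0      # dollars per 1M tokens
input_tokens, output_tokens = 2_000, 500   # assumed counts for one run

# Cost per run = weighted token total divided by 1M
cost_per_run = (input_tokens * input_price
                + output_tokens * output_price) / 1_000_000
print(f"${cost_per_run:.4f}")  # prints $0.0090
```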
Energy usage is a rough estimate. Actual energy depends on model, hardware, provider, datacenter efficiency, workload, and region.
FAQ
How does PromptMeter calculate the estimate?
PromptMeter estimates input tokens from your prompt, adds expected output tokens, applies your price per 1M tokens, and scales by usage volume.
Are the pricing profiles real provider prices?
No. The profiles are examples to make the calculator useful quickly. Enter your provider's real prices in advanced settings.
How can I reduce estimated cost?
Shorter prompts, fewer repeated calls, lower output length, and cleaner workflows can reduce token usage and estimated cost.