Spendwall

AI cost glossary

Answer-first definitions that make AI spend vocabulary concrete enough for budget decisions.

What is Token burn rate?

Token burn rate is the speed at which a product, team, or agent workflow consumes tokens over time, usually expressed in tokens per hour or per day so spend trends surface before the invoice does.
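A minimal sketch of the metric: sum token usage events inside a trailing window and normalize to tokens per hour. The `UsageEvent` shape and the hourly unit are assumptions for illustration, not a Spendwall API.

```python
from dataclasses import dataclass

@dataclass
class UsageEvent:
    timestamp: float  # seconds since epoch
    tokens: int       # tokens consumed by this request

def burn_rate(events: list[UsageEvent], window_seconds: float) -> float:
    """Tokens consumed per hour over the trailing window."""
    if not events:
        return 0.0
    cutoff = max(e.timestamp for e in events) - window_seconds
    total = sum(e.tokens for e in events if e.timestamp >= cutoff)
    return total / (window_seconds / 3600)

# Three requests over one hour totaling 3,000 tokens -> 3,000 tokens/hour.
events = [UsageEvent(0, 1200), UsageEvent(1800, 800), UsageEvent(3600, 1000)]
print(burn_rate(events, 3600))  # → 3000.0
```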

What is Cost per accepted run?

Cost per accepted run is total spend divided by the number of outputs a human or downstream system actually accepts, so rejected and retried runs still count toward the cost of each good result.
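The arithmetic behind the metric, sketched below with hypothetical numbers: naive cost per run looks cheap, but dividing by accepted outputs shows the real unit economics.

```python
def cost_per_run(total_spend: float, runs: int) -> float:
    """Naive unit cost: ignores whether outputs were usable."""
    return total_spend / runs

def cost_per_accepted_run(total_spend: float, accepted_runs: int) -> float:
    """Spend per output actually accepted; rejected runs still cost money."""
    if accepted_runs == 0:
        return float("inf")  # all spend, nothing usable
    return total_spend / accepted_runs

# Example: 100 runs cost $5.00 total, but only 40 outputs were accepted.
print(cost_per_run(5.00, 100))           # → 0.05
print(cost_per_accepted_run(5.00, 40))   # → 0.125
```

The gap between the two numbers (2.5x here) is the budget impact of low acceptance rates.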

What is Provider-aware monitoring?

Provider-aware monitoring is cost monitoring that works from the usage and pricing data each provider actually exposes, normalizing per-provider token fields and rates instead of assuming every API reports spend the same way.
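One way to sketch this: normalize each provider's usage payload into a common shape before metering. The field names below reflect commonly seen response formats but should be treated as assumptions to verify against each provider's current API.

```python
def normalize_usage(provider: str, payload: dict) -> dict:
    """Map a provider-specific usage payload to a common schema.

    Field names are assumptions based on typical API responses;
    verify against each provider's documentation before relying on them.
    """
    if provider == "openai":
        return {"input": payload["prompt_tokens"],
                "output": payload["completion_tokens"]}
    if provider == "anthropic":
        return {"input": payload["input_tokens"],
                "output": payload["output_tokens"]}
    # Fail loudly rather than silently under-counting an unknown provider.
    raise ValueError(f"no usage mapping for provider: {provider}")

print(normalize_usage("openai", {"prompt_tokens": 120, "completion_tokens": 45}))
# → {'input': 120, 'output': 45}
```

Failing on unknown providers is the point: silently skipping an unmapped API is how spend goes unmonitored.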

What is Prompt caching?

Prompt caching is the reuse of repeated prompt context, such as a shared system prompt or document prefix, so eligible workloads pay full price for those tokens once and a discounted rate on subsequent cache hits.
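The savings are straightforward arithmetic, sketched below. The 90% cache-hit discount is an assumption for illustration; actual discount rates, minimum cacheable sizes, and eligibility rules vary by provider.

```python
def cached_prompt_cost(prompt_tokens: int, requests: int,
                       price_per_mtok: float,
                       cache_discount: float = 0.9) -> tuple[float, float]:
    """Cost of a shared prompt prefix without and with caching.

    cache_discount is the fraction knocked off cache-hit tokens
    (assumed 90% here; real rates are provider-specific).
    """
    uncached = prompt_tokens * requests * price_per_mtok / 1e6
    cached = (prompt_tokens * price_per_mtok / 1e6                # first request at full price
              + prompt_tokens * (requests - 1)
                * price_per_mtok * (1 - cache_discount) / 1e6)    # later requests hit the cache
    return uncached, cached

# A 10k-token shared prefix reused across 100 requests at $3.00 per million tokens.
uncached, cached = cached_prompt_cost(10_000, 100, 3.00)
print(f"${uncached:.3f} uncached vs ${cached:.3f} cached")
```

Only the repeated prefix is discounted; unique per-request tokens still bill at full price, which is why only workloads with large shared context benefit.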