See what you are paying for.
Every LLM call Vectrant makes — chat, classifiers, overnight reviews, intelligence briefings, marketing playbooks — is logged with token counts, model used, and computed cost. The orchestrator-billing dashboard shows total cost, per-source breakdown, and a daily-rate projection. No hidden fees, no surprise overages, no proprietary metric.

Every token, accounted for.
A unified usage logger writes one row per LLM call to a dedicated table. The dashboard reads from that table and renders cost in real time.
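A minimal sketch of what one-row-per-call logging can look like. The table name (`llm_usage`), column names, and `log_llm_call` helper are illustrative assumptions, not Vectrant's actual schema:

```python
import sqlite3
import time

def log_llm_call(conn, source, model, input_tokens, output_tokens, cost_usd):
    """Write one row per LLM call to a dedicated usage table.

    Table and column names here are hypothetical, for illustration only.
    """
    conn.execute(
        """CREATE TABLE IF NOT EXISTS llm_usage (
               ts REAL, source TEXT, model TEXT,
               input_tokens INTEGER, output_tokens INTEGER, cost_usd REAL)"""
    )
    conn.execute(
        "INSERT INTO llm_usage VALUES (?, ?, ?, ?, ?, ?)",
        (time.time(), source, model, input_tokens, output_tokens, cost_usd),
    )
    conn.commit()

conn = sqlite3.connect(":memory:")
log_llm_call(conn, "chat", "model-a", 1200, 350, 0.0041)
```

Because every call lands in one table, the dashboard is just a query over it, not a reconciliation across subsystems.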
Per-source attribution
Every call is tagged with its source: chat, classifier, overnight_review, briefing, marketing_playbook, tagging, demographic_inference, summarizer, advisor, and so on. The dashboard shows what each subsystem is actually costing — so you know whether the spend matches the value.
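The per-source breakdown amounts to a group-by over the tagged rows. A sketch, with made-up costs and a hypothetical `per_source_totals` helper:

```python
from collections import defaultdict

# Hypothetical (source, cost_usd) rows; values are illustrative only.
rows = [
    ("chat", 0.0041),
    ("classifier", 0.0007),
    ("chat", 0.0032),
    ("overnight_review", 0.0150),
]

def per_source_totals(rows):
    """Sum cost per source tag, i.e. one line per subsystem on the dashboard."""
    totals = defaultdict(float)
    for source, cost in rows:
        totals[source] += cost
    return dict(totals)

totals = per_source_totals(rows)
```

Each subsystem gets its own line item, so a spend spike is traceable to the source that caused it.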
Real cost, not estimates
Token counts come from the model API response, not from your input. Cost is computed at the model's actual published rate, with cache hits counted separately. Background work and chat work are summed; the bill is the bill.
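One way the "actual published rate, cache hits separate" arithmetic can work, sketched with assumed per-million-token prices (the model name, rates, and field names are illustrative, not real pricing):

```python
# Assumed USD-per-1M-token rates for a hypothetical model; cached input
# tokens are billed at a lower rate than uncached input tokens.
RATES = {
    "model-a": {"input": 3.00, "cached_input": 0.30, "output": 15.00},
}

def compute_cost(model, usage):
    """Cost from the API's own token counts, cache hits counted separately."""
    r = RATES[model]
    cached = usage.get("cached_tokens", 0)
    uncached = usage["input_tokens"] - cached
    return (
        uncached * r["input"]
        + cached * r["cached_input"]
        + usage["output_tokens"] * r["output"]
    ) / 1_000_000

cost = compute_cost(
    "model-a",
    {"input_tokens": 10_000, "cached_tokens": 4_000, "output_tokens": 1_000},
)
```

The token counts feeding this come back on the API response itself, so the computed figure matches what the provider bills.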
No tenant-side overrides
Pricing is platform-controlled. The customer admin cannot zero out the bill or change the value-signal weights. The integrity of the ROI dashboard and the invoice is guaranteed in code, not in policy.
Usage Metering, running on your data.
30-minute walkthrough on your real catalog. No slideware, no demo data.