Vectrant exists because we spent years inside retail operations. We staffed the support desks. We answered the same questions about store hours, return policies, and delivery timelines hundreds of times a week. We watched talented salespeople spend half their day on calls that could have been handled by a system that actually understood the product catalog.

We looked at the market and saw two options: generic chatbots that know nothing about your products, or enterprise platforms that charge a dollar per conversation and lock your data in their cloud. Neither was acceptable.

So we built Vectrant. A platform that understands retail at every level — from product specifications and inventory to financing terms and warranty policies — and runs entirely on your infrastructure.

The problems we set out to solve

Repetitive questions consume your team

Store hours. Return policies. "Do you carry this in stock?" Your support staff answers the same questions hundreds of times a month. Every one of those interactions costs time, wages, and attention that could go toward complex customer needs.

Generic chatbots don't understand retail

Off-the-shelf bots can't search your product catalog by price, compare specifications, or tell a customer when an out-of-stock item will arrive. They deflect instead of resolve. Customers leave frustrated and call your store anyway.

Per-resolution pricing punishes growth

Intercom charges $0.99 per resolution. Zendesk charges $1.00. At 10,000 resolutions a month, that is roughly $120,000 a year — and the cost only goes up as you grow. The incentive structure is backwards.

Your data leaves your control

SaaS chatbot vendors store your customer conversations, product data, and internal documents on their servers. You are one breach or policy change away from losing control of your most sensitive business information.

Our approach

Vectrant is not a chatbot. It is a customer and business intelligence platform that happens to include a chat interface. The difference matters.

We built it to understand retail the way a senior employee does — knowing the difference between "out of stock" and "arriving Tuesday," understanding that a customer asking about "something to keep food cold" wants a refrigerator, and recognizing that an employee looking up warranty terms needs a different answer than a customer asking about returns.

We made it self-hosted because we believe your customer data belongs on your infrastructure. Not ours. Not anyone's cloud. Every conversation, every document, every query stays within your network perimeter. There is no telemetry, no data sharing, no vendor lock-in.

And we priced it so that growth is rewarded, not penalized. Your LLM costs are approximately $0.02 per conversation — paid directly to your provider of choice. There are no per-resolution fees. At 100,000 annual conversations, you save $97,000 compared to per-resolution competitors.

Platform capabilities

Product Catalog AI

Semantic search across your live product feed with automatic import, description enrichment, and price-aware filtering. Customers ask natural questions like "front-load washers under $800" and receive real product matches with specifications, availability, and comparisons.

Guided Shopping

Admin-configurable decision trees that replicate what a skilled salesperson does on the floor. Walk customers through qualifying questions to narrow a catalog of thousands to the two or three products that match their needs. No code required to build or modify flows.

Knowledge Base

Retrieval-augmented generation across your internal documents. Supports PDF, DOCX, XLSX, PPTX, CSV, Markdown, and plain text. Hybrid semantic and keyword search with automatic chunking, embedding, and incremental reindexing as documents change.

ERP & Inventory Sync

Connects to any ERP system that exports purchase order data. When a product is out of stock, the platform automatically surfaces expected arrival dates and incoming quantities. Customers get specific answers instead of dead ends.

Custom Pipelines

A no-code pipeline builder for creating topic-specific knowledge channels. Financing terms, warranty policies, delivery logistics, internal procedures. Serves both customer-facing and internal audiences through the same infrastructure.

Agent Command Center

Full operational dashboard for support teams. Live takeover of any AI conversation, AI-suggested responses, auto-summaries, canned response templates, internal notes, performance metrics, and workload tracking across the entire team.

Room Visualization

Customers upload a photo of their room and see how a product looks in their space. AI-powered image generation matches perspective, lighting, and scale — and replaces existing items of the same type. Available on desktop and mobile.

Embeddable Widget

One line of JavaScript deploys a chat widget on any website. White-label design with automatic brand color synchronization and mobile responsiveness. Product cards, room visualization, and full AI capabilities built in.

Enterprise Security

JWT authentication with four-tier RBAC. TOTP two-factor authentication for all staff accounts. PII detection and redaction, input sanitization, tiered rate limiting, TLS encryption, and comprehensive audit logging.
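The Embeddable Widget's one-line install typically looks like the snippet below. This is an illustrative sketch only: the script URL and the data attribute name are assumptions, not Vectrant's documented embed API, and the file would be served from your own self-hosted instance.

```html
<!-- Illustrative only: widget.js is served from your self-hosted Vectrant
     instance; the data-widget-key attribute is a hypothetical example. -->
<script src="https://vectrant.your-store.example/widget.js"
        data-widget-key="YOUR_WIDGET_KEY" async></script>
```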

Why self-hosted?

Your data stays on your servers

Customer conversations, product catalogs, internal documents, employee interactions. Everything stays within your network perimeter. There is no data replication to external servers, no shared infrastructure, no multi-tenant risk.

No vendor lock-in

Switch LLM providers with a configuration change. Export your data at any time. The platform runs on standard Docker infrastructure that you control. If you ever stop using Vectrant, your data is already yours.

Compliance on your terms

Healthcare, finance, government — if your industry has data residency requirements, self-hosted deployment satisfies them by default. Your compliance team audits your infrastructure, not a third party's.

Predictable costs

No surprise invoices based on conversation volume. You pay your LLM provider directly for token usage and control your own compute costs. Growth improves your unit economics instead of increasing your vendor bill.

Technical infrastructure

Vectrant is a complete system, not a wrapper around an API. Every component is containerized, health-checked, and designed for production reliability. The platform comprises 9 Docker services, 228 API endpoints across 23 API modules, and 8 automated tasks.

Supported LLM providers

Anthropic Claude, OpenAI GPT, Google Gemini, Mistral, Llama (local), Ollama (local), and any OpenAI-compatible API.

Deployment

Vectrant deploys as a self-contained Docker stack on infrastructure you own. Cloud instances, on-premise servers, or hybrid environments. The entire platform runs within your network perimeter.

There are no per-conversation fees. You pay your LLM provider directly for token usage and control your own compute costs. At approximately $0.02 per conversation, the same 100,000-conversation annual volume that costs $99,000 with per-resolution chatbot vendors costs roughly $2,000 with Vectrant.
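The arithmetic behind that comparison is straightforward. A quick sketch using the figures quoted on this page (the $0.99 per-resolution rate and the approximate $0.02-per-conversation LLM cost):

```javascript
// Annual cost comparison at 100,000 conversations per year,
// using the rates quoted above.
const conversationsPerYear = 100_000;
const perResolutionRate = 0.99;       // per-resolution SaaS pricing ($0.99 each)
const llmCostPerConversation = 0.02;  // approximate token cost, paid to your provider

const saasCost = conversationsPerYear * perResolutionRate;            // $99,000
const selfHostedCost = conversationsPerYear * llmCostPerConversation; // $2,000

console.log(`Annual savings: $${saasCost - selfHostedCost}`); // Annual savings: $97000
```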