Element 119
AI Infrastructure · Demo + On-Prem Path

A working AI ecosystem built for Element 119

Two demonstration systems showing a practical migration path for Element 119: today's live demo uses OpenRouter with meta-llama/llama-3.1-8b-instruct for inference, while a self-hosted Appwrite instance handles functions, persistence, and analytics so provider API keys stay server-side. With the right GPU hardware, the same agent layer can move to self-hosted Ollama on-prem.

OpenRouter · llama-3.1-8b-instruct · Self-Hosted Appwrite · Server-Side Functions · RAG Pipeline · Multi-Agent Routing · On-Prem Ollama Path
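A minimal sketch of the server-side inference call described above, assuming a standard OpenRouter chat-completions request; `buildInferenceRequest` is an illustrative helper, not the demo's actual code. The point is that the provider key is read inside the Appwrite function and never reaches the browser.

```typescript
// Sketch: assemble the OpenRouter request server-side. The API key comes
// from the function's environment, so it stays off the client entirely.
// Endpoint and payload shape follow OpenRouter's chat-completions API.

interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

export function buildInferenceRequest(messages: ChatMessage[], apiKey: string) {
  return {
    url: "https://openrouter.ai/api/v1/chat/completions",
    init: {
      method: "POST",
      headers: {
        Authorization: `Bearer ${apiKey}`, // server-side secret only
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        model: "meta-llama/llama-3.1-8b-instruct",
        messages,
        stream: true, // token streaming for the live demo
      }),
    },
  };
}
```

The browser only ever talks to the Appwrite function, which forwards the streamed tokens back to the client.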

Coating Advisor

Demo 01

A RAG-backed AI agent grounded in Element 119's product catalog. Describe your substrate, environment, and requirements — the agent retrieves relevant products and responds with specs and recommendations.

Keyword-scored RAG retrieval — only relevant products injected into context
Real-time token streaming via OpenRouter for the live demo
Per-response token count, compression ratio, and latency
Sessions and analytics handled through self-hosted Appwrite
Open Advisor
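The keyword-scored retrieval step can be sketched as follows. This is an assumed scoring scheme (count of query keywords appearing in each product's text), not the advisor's actual ranking code; the key property from the list above is that only products with a nonzero score ever reach the model's context.

```typescript
// Sketch of keyword-scored RAG retrieval: score each catalog entry by how
// many query keywords appear in its text, keep only scoring products, and
// inject the top K into the prompt context.

interface Product {
  name: string;
  text: string; // description + specs, lowercased at index time
}

export function retrieve(query: string, catalog: Product[], topK = 3): Product[] {
  const keywords = query
    .toLowerCase()
    .split(/\W+/)
    .filter((w) => w.length > 2); // drop stop-word-length tokens

  return catalog
    .map((p) => ({
      product: p,
      score: keywords.filter((k) => p.text.includes(k)).length,
    }))
    .filter((s) => s.score > 0) // only relevant products enter the context
    .sort((a, b) => b.score - a.score)
    .slice(0, topK)
    .map((s) => s.product);
}
```

Keeping irrelevant products out of the prompt is what drives the compression ratio reported per response.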

Agent Router

Demo 02

The same foundation extended with a two-stage classifier that routes every query to a specialized department agent — Sales, Military/DoD, Installer Support, or Custom Formulations.

Stage 1: keyword classifier fires instantly, with no model call or network latency
Stage 2: LLM classifier uses OpenRouter through self-hosted Appwrite
Each department has a distinct agent persona and knowledge domain
Routing decision visible in UI — department, confidence, reasoning, latency
Open Router
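The two-stage routing above can be sketched like this. The rule table and department keywords are illustrative assumptions; the demo's real classifier will differ. What the sketch shows is the escalation shape: stage 1 resolves locally when a rule matches, and only unmatched queries pay for the stage-2 LLM call.

```typescript
// Sketch of two-stage query routing: cheap keyword rules first, then an
// LLM classifier (supplied by the caller, e.g. OpenRouter via an Appwrite
// function) only when no rule fires.

type Department = "sales" | "military" | "installer" | "custom";

// Illustrative rule table, not the demo's actual keyword set.
const RULES: Array<[RegExp, Department]> = [
  [/\b(price|quote|order|bulk)\b/i, "sales"],
  [/\b(dod|military|mil-spec|defense)\b/i, "military"],
  [/\b(apply|cure|install|prep)\b/i, "installer"],
  [/\b(custom|formulation|tint|blend)\b/i, "custom"],
];

export async function route(
  query: string,
  llmClassify: (q: string) => Promise<Department>,
): Promise<{ department: Department; stage: 1 | 2 }> {
  for (const [pattern, department] of RULES) {
    if (pattern.test(query)) return { department, stage: 1 }; // no model call
  }
  return { department: await llmClassify(query), stage: 2 }; // LLM fallback
}
```

Returning the stage alongside the department is what lets the UI surface the routing decision and its latency.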

Infrastructure Architecture — The live demo runs with OpenRouter for inference and self-hosted Appwrite for server-side functions, analytics, and secret handling. The same routing layer is designed to move to Ollama on dedicated on-prem GPU hardware when local inference is justified.
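One way to sketch that migration path: Ollama exposes an OpenAI-compatible `/v1` endpoint, so moving the agent layer on-prem can reduce to a configuration switch. The environment variable names and helper below are assumptions for illustration, not the demo's configuration.

```typescript
// Sketch: select the inference provider from configuration. Ollama's
// OpenAI-compatible API means the agent layer keeps the same request
// shape and only the base URL, model name, and key handling change.

export function inferenceEndpoint(env: Record<string, string | undefined>) {
  const onPrem = env.INFERENCE_PROVIDER === "ollama"; // assumed env name
  return {
    baseUrl: onPrem
      ? "http://localhost:11434/v1" // on-prem Ollama
      : "https://openrouter.ai/api/v1",
    model: onPrem ? "llama3.1:8b" : "meta-llama/llama-3.1-8b-instruct",
    // OpenRouter needs a key; a local Ollama instance does not.
    apiKey: onPrem ? undefined : env.OPENROUTER_API_KEY,
  };
}
```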

View Architecture →