Two demonstration systems showing a practical migration path for Element 119. Today's live demo uses OpenRouter with meta-llama/llama-3.1-8b-instruct for inference, while a self-hosted Appwrite instance handles functions, persistence, and analytics so provider API keys stay server-side. With the right GPU hardware, the same agent layer can move to self-hosted Ollama on-prem.
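To illustrate the "keys stay server-side" point, here is a minimal sketch of how a server-side function might assemble an OpenRouter chat-completions request. The endpoint and model slug come from the text above; the environment-variable name is an assumption, and the actual Appwrite function wiring is omitted.

```python
import os

# OpenRouter exposes an OpenAI-compatible chat-completions endpoint
OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"
MODEL = "meta-llama/llama-3.1-8b-instruct"

def build_chat_request(user_message: str) -> tuple[dict, dict]:
    """Build headers and JSON body for a server-side OpenRouter call.

    The API key is read from the server environment (e.g. an Appwrite
    function's env vars — variable name here is an assumption), so it
    never reaches the browser.
    """
    headers = {
        "Authorization": f"Bearer {os.environ.get('OPENROUTER_API_KEY', '')}",
        "Content-Type": "application/json",
    }
    body = {
        "model": MODEL,
        "messages": [{"role": "user", "content": user_message}],
    }
    return headers, body
```

The browser only ever talks to the Appwrite function; the function holds the credential and forwards the request.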
A RAG-backed AI agent trained on Element 119's product catalog. Describe your substrate, environment, and requirements — the agent retrieves relevant products and responds with specs and recommendations.
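The retrieve-then-respond loop can be sketched as follows. The catalog entries are hypothetical placeholders (not real Element 119 products), and token-overlap scoring stands in for whatever embedding similarity the real index uses.

```python
def score(query: str, doc: str) -> float:
    """Token-overlap (Jaccard) similarity; a stand-in for embedding cosine similarity."""
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / (len(q | d) or 1)

# Hypothetical catalog entries for illustration only
CATALOG = [
    "high-temp ceramic coating for exhaust substrates",
    "marine-grade clear coat for aluminum hulls",
]

def retrieve(query: str, k: int = 1) -> list[str]:
    """Rank catalog entries against the query and keep the top k."""
    ranked = sorted(CATALOG, key=lambda doc: score(query, doc), reverse=True)
    return ranked[:k]

def build_prompt(query: str) -> str:
    """Splice the retrieved entries into the prompt the model answers from."""
    context = "\n".join(retrieve(query))
    return f"Catalog context:\n{context}\n\nCustomer question: {query}"
```

The retrieved context grounds the model's specs and recommendations in the catalog rather than in its pretraining.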
The same foundation extended with a two-stage classifier that routes every query to a specialized department agent — Sales, Military/DoD, Installer Support, or Custom Formulations.
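One plausible shape for that two-stage routing, sketched with keyword rules standing in for the real classifier model: stage one decides whether the query is in scope at all, stage two picks the department, with Sales as the default route. The keyword sets and the fallback choice are assumptions; only the four department names come from the text.

```python
def stage_one(query: str) -> bool:
    """Stage 1: is this query in scope for a department agent? (assumed keyword gate)"""
    topical = {"coating", "spec", "install", "formulation", "military", "quote", "price"}
    return bool(topical & set(query.lower().split()))

# Stage-2 keyword rules, a placeholder for the trained classifier
ROUTES = {
    "Military/DoD": {"military", "dod", "milspec"},
    "Installer Support": {"install", "installer", "application"},
    "Custom Formulations": {"custom", "formulation", "formulations"},
}

def route(query: str) -> str:
    """Stage 2: pick a department; Sales is the assumed default route."""
    if not stage_one(query):
        return "general"  # handled outside the department agents
    words = set(query.lower().split())
    for dept, keys in ROUTES.items():
        if words & keys:
            return dept
    return "Sales"
```

Splitting scope detection from department selection keeps each stage simple and lets either be retrained independently.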
Infrastructure Architecture — The live demo runs with OpenRouter for inference and self-hosted Appwrite for server-side functions, analytics, and secret handling. The same routing layer is designed to move to Ollama on dedicated on-prem GPU hardware when local inference is justified.
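Because Ollama also serves an OpenAI-compatible endpoint, the migration can reduce to a configuration switch: same request shape, different base URL, model tag, and auth. A minimal sketch, assuming a local Ollama on its default port; the local model tag shown is an example, not a verified deployment detail.

```python
def provider_config(provider: str) -> dict:
    """Resolve the inference endpoint per provider.

    Both providers speak the OpenAI-compatible chat-completions shape,
    so only the base URL, model name, and auth requirement differ.
    """
    if provider == "ollama":
        return {
            "base_url": "http://localhost:11434/v1",  # Ollama's OpenAI-compatible endpoint
            "model": "llama3.1:8b",                   # example local tag, an assumption
            "api_key_env": None,                      # no provider key needed on-prem
        }
    return {
        "base_url": "https://openrouter.ai/api/v1",
        "model": "meta-llama/llama-3.1-8b-instruct",
        "api_key_env": "OPENROUTER_API_KEY",          # assumed env var name
    }
```

The routing layer stays untouched; only this lookup changes when local inference is justified.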
View Architecture →