System Overview

The Mergent-AI System is designed as a modular, lightweight AI pipeline with full offline operability. It combines adaptive inference routing, modular intelligence, memory feedback systems, and multi-phase symbolic processing layers. All modules, including the LLM, metrics engine, and memory logic, run locally on standard CPU+RAM devices with high efficiency.

System Modules & Layers

Synexis Architecture

A recursive symbolic processor for inference restructuring, realm detection (logical, emotional, creative), and rule-based routing.
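
Synexis internals are not published in this overview, but the realm-detection and routing idea can be sketched as a minimal rule-based classifier. All names below (Realm, detect_realm, route) and the keyword cues are illustrative assumptions, not the actual Synexis interface:

```python
from enum import Enum

class Realm(Enum):
    LOGICAL = "logical"
    EMOTIONAL = "emotional"
    CREATIVE = "creative"

# Keyword cues per realm; a real processor would use richer symbolic rules.
REALM_CUES = {
    Realm.LOGICAL: {"prove", "calculate", "compare", "why"},
    Realm.EMOTIONAL: {"feel", "worried", "happy", "sad"},
    Realm.CREATIVE: {"imagine", "story", "design", "invent"},
}

def detect_realm(query: str) -> Realm:
    """Score each realm by keyword hits; ties fall back to LOGICAL."""
    words = set(query.lower().split())
    scores = {realm: len(words & cues) for realm, cues in REALM_CUES.items()}
    return max(scores, key=scores.get)

def route(query: str) -> str:
    """Rule-based routing: map the detected realm to a processing pipeline."""
    pipelines = {
        Realm.LOGICAL: "structured-reasoning",
        Realm.EMOTIONAL: "empathetic-response",
        Realm.CREATIVE: "divergent-generation",
    }
    return pipelines[detect_realm(query)]

print(route("Imagine a short story about a lighthouse"))  # divergent-generation
```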

Formula5

A field calculator that generates DAAISF dx/dy values, complexity scores, and anomaly signals, which modulate the LLM's response parameters on each query.
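
The DAAISF formulas themselves are not given in this overview, so the following is only a toy sketch of the shape of the idea: derive numeric field values (dx/dy, complexity, anomaly) from a query and map them onto generation parameters. Every formula, threshold, and parameter name here is an assumption:

```python
def field_values(query: str) -> dict:
    """Toy field measurements over the raw query (all formulas assumed)."""
    signal = [ord(c) % 32 for c in query]                 # per-character signal
    dx = sum(abs(b - a) for a, b in zip(signal, signal[1:])) / max(len(signal) - 1, 1)
    dy = (max(signal) - min(signal)) if signal else 0     # signal range
    words = query.split()
    complexity = len(set(words)) / max(len(words), 1)     # lexical diversity
    anomaly = dx > 12                                     # arbitrary threshold
    return {"dx": dx, "dy": dy, "complexity": complexity, "anomaly": anomaly}

def modulate_params(values: dict) -> dict:
    """Map field values onto hypothetical LLM sampling parameters."""
    return {
        "temperature": min(0.3 + 0.7 * values["complexity"], 1.0),
        "top_p": 0.8 if values["anomaly"] else 0.95,
        "max_new_tokens": 128 + 8 * int(values["dy"]),
    }

print(modulate_params(field_values("Explain recursion with a short example")))
```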

MergentCore

Main orchestration engine responsible for routing input/output, managing fallback states, and applying dynamic parameters to LLMs.
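
As a rough illustration of this routing-with-fallback pattern, here is a minimal sketch; the function names, fallback policy, and parameters are hypothetical, not MergentCore's real API:

```python
def orchestrate(query: str, model, fallback_model, params: dict) -> str:
    """Route a query through the primary model; enter a fallback state on failure."""
    try:
        return model(query, **params)
    except Exception:
        safe_params = {**params, "temperature": 0.2}  # conservative retry settings
        return fallback_model(query, **safe_params)

def primary(query: str, **params) -> str:
    raise RuntimeError("model busy")                  # simulate a primary failure

def backup(query: str, **params) -> str:
    return f"[backup] answered: {query}"

print(orchestrate("What is 2 + 2?", primary, backup, {"temperature": 0.7}))
```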

Memory System

Real-time memory injection from both transient and recursive states using multi-level context buffers (including What-When-Where extraction).
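
A minimal sketch of such a layered buffer might look like the following; the two-level split, the field names, and the naive What-When-Where heuristic are assumptions for illustration only:

```python
from collections import deque
from datetime import datetime

class MemorySystem:
    def __init__(self, transient_size: int = 8):
        self.transient = deque(maxlen=transient_size)  # short-lived recent turns
        self.recursive = []                            # long-lived retained entries

    def extract_www(self, text: str) -> dict:
        """Naive What-When-Where tags; real extraction would use NLP."""
        return {
            "what": text[:80],
            "when": datetime.now().isoformat(timespec="seconds"),
            "where": "local-session",
        }

    def remember(self, text: str) -> None:
        entry = self.extract_www(text)
        self.transient.append(entry)
        if len(text) > 40:            # toy rule: keep longer turns recursively
            self.recursive.append(entry)

    def inject_context(self) -> str:
        """Merge both buffers into a context string for the next LLM call."""
        entries = list(self.recursive) + list(self.transient)
        return "\n".join(e["what"] for e in entries)

mem = MemorySystem()
mem.remember("User asked about offline inference on low-memory devices.")
print(mem.inject_context())
```

The bounded deque keeps transient memory small while the recursive list retains distilled entries, which is one way to reconcile real-time injection with a low memory footprint.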

Offline Integration

Offline Operation

Fully functional without an internet connection, using locally stored LLM models (such as TinyLlama-1.1B-Chat). Average memory usage stays under 500 MB.
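
As one way to exercise this kind of setup, the sketch below runs a quantized TinyLlama GGUF file fully offline with llama-cpp-python on CPU; the file path, quantization level, and thread count are assumptions, and the project may use a different runtime:

```python
from llama_cpp import Llama

llm = Llama(
    model_path="./models/tinyllama-1.1b-chat-q4_k_m.gguf",  # assumed local file
    n_ctx=2048,      # context window
    n_threads=4,     # CPU-only inference, no GPU needed
)

result = llm("Q: Why run an LLM offline?\nA:", max_tokens=64)
print(result["choices"][0]["text"])
```

A 4-bit quantized 1.1B model is consistent with the low-memory, CPU-only profile described above.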

Platform Compatibility

Runs on Linux, Android (via Pydroid), Windows, and Replit using only CPU and RAM. No GPU is required, and the system scales to edge deployments.
