Architecture

aiconn is four products that work together. You can use each independently, but they’re designed to compose.

Products

| Product     | Role                                  | Where it runs       |
| ----------- | ------------------------------------- | ------------------- |
| Meridian    | Edge agent runtime, user-facing       | Cloudflare Workers  |
| Cortex      | Compute-intensive agent framework     | Your servers / VMs  |
| Engram      | Persistent memory with hybrid search  | Your infrastructure |
| AgentShield | MCP schema security scanner           | CI / dev tooling    |

The Brain + Muscle Pattern

Meridian handles the conversation — it’s the interface between the user and your AI logic. Cortex handles compute-heavy work that can’t or shouldn’t run on edge workers.

User
  │
  ▼
Meridian (Cloudflare Workers)
  │  stateful conversation, tool dispatch, egress policy
  │
  ├──▶ Engram (memory lookup / store)
  │
  └──▶ Cortex via MCP (heavy computation, specialized tools)
         │
         └──▶ Your databases, APIs, ML models

Meridian is the brain: it decides what to do. Cortex is the muscle: it does the heavy lifting.
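One way to picture the split is as a routing decision inside Meridian. The sketch below is illustrative only: the tool registry, the `heavy` flag, and the handler signatures are assumptions, not aiconn's actual API.

```typescript
// Hypothetical sketch of brain-vs-muscle routing inside Meridian.
type ToolHandler = (args: unknown) => Promise<unknown>;

interface ToolEntry {
  heavy: boolean;      // true → dispatch to Cortex over MCP
  local?: ToolHandler; // edge-safe implementation, if one exists
}

// Illustrative registry; tool names mirror the examples in this doc.
const registry: Record<string, ToolEntry> = {
  memory_search: { heavy: false, local: async (_q) => ({ hits: [] }) },
  summarize_documents: { heavy: true }, // too much CPU for a Worker
};

async function route(
  tool: string,
  args: unknown,
  cortex: ToolHandler, // transport to Cortex, injected
): Promise<unknown> {
  const entry = registry[tool];
  if (!entry) throw new Error(`unknown tool: ${tool}`);
  // Brain decides where the work runs; muscle (Cortex) does heavy lifting.
  return entry.heavy ? cortex(args) : entry.local!(args);
}
```

The key design point is that the decision lives in one place: Meridian stays responsive on the edge while anything marked heavy is shipped out.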

Why split them?

Cloudflare Workers have a 128 MB memory limit and a 30 second CPU limit. Most conversational AI fits comfortably within these bounds. When a task doesn’t, you dispatch it to Cortex over MCP and wait for the result.
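A dispatch over MCP boils down to a JSON-RPC `tools/call` request. The method name comes from the MCP specification, but everything else below is a hedged sketch: the endpoint URL is a placeholder, and a real Cortex server may use stdio or streamable-HTTP transport rather than a bare POST.

```typescript
// Sketch of dispatching a heavy tool call to Cortex via MCP's
// JSON-RPC `tools/call` method. URL and transport are assumptions.
interface ToolCallRequest {
  jsonrpc: "2.0";
  id: number;
  method: "tools/call";
  params: { name: string; arguments: Record<string, unknown> };
}

function buildToolCall(
  id: number,
  name: string,
  args: Record<string, unknown>,
): ToolCallRequest {
  return { jsonrpc: "2.0", id, method: "tools/call", params: { name, arguments: args } };
}

async function callCortex(name: string, args: Record<string, unknown>): Promise<unknown> {
  const res = await fetch("https://cortex.example.internal/mcp", { // placeholder URL
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify(buildToolCall(Date.now(), name, args)),
  });
  const rpc = (await res.json()) as { result?: unknown; error?: { message: string } };
  if (rpc.error) throw new Error(`Cortex tool failed: ${rpc.error.message}`);
  return rpc.result;
}
```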

Data flow example

A user asks: “Summarize everything I said about project X last month.”

  1. Meridian receives the message
  2. Meridian calls the memory_search tool → queries Engram
  3. Engram returns relevant past messages
  4. Meridian dispatches to Cortex via MCP: summarize_documents(docs)
  5. Cortex processes the documents and returns a summary
  6. Meridian synthesizes the final response and stores it in Engram
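The six steps above can be sketched as a single handler. Note that `searchEngram`, `dispatchToCortex`, and `storeInEngram` are illustrative stubs, not aiconn's actual API; real implementations would call Engram and Cortex over the wire.

```typescript
// Hypothetical Meridian handler for the data flow example above.
async function searchEngram(q: { query: string }): Promise<string[]> {
  return [`past message mentioning ${q.query}`]; // stub: Engram hybrid search
}

async function dispatchToCortex(tool: string, args: { docs: string[] }): Promise<string> {
  return `summary of ${args.docs.length} document(s)`; // stub: MCP tools/call
}

const memory: { role: string; content: string }[] = [];
async function storeInEngram(entry: { role: string; content: string }): Promise<void> {
  memory.push(entry); // stub: persist the synthesized reply
}

async function handleMessage(): Promise<string> {
  // Steps 1-3: Meridian's memory_search tool queries Engram for history
  const docs = await searchEngram({ query: "project X" });
  // Steps 4-5: heavy summarization is dispatched to Cortex and awaited
  const summary = await dispatchToCortex("summarize_documents", { docs });
  // Step 6: synthesize the final response and store it back in Engram
  const reply = `Last month on project X: ${summary}`;
  await storeInEngram({ role: "assistant", content: reply });
  return reply;
}
```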

AgentShield

Before you deploy, AgentShield scans your MCP tool schemas for security issues: overly broad permissions, missing input validation, prompt injection vectors. Run it in CI.

bunx @aiconnai/agentshield scan ./mcp-tools/

See Also