
OpenClaw as a Multi-User Telegram Bot Engine

Deep dive: architecture, session management, security, scaling, and cost model for 10–500+ users

Key figures:
- 500+ users (target)
- $30/mo in LLM costs for 500 users
- 4 isolation levels
- 11 analysis sections

1. OpenClaw Architecture for Multi-User

How It Works

OpenClaw is an AI Gateway that receives messages from channels (Telegram, WhatsApp, Discord, etc.) and routes them to agents. Key abstractions:

🤖 Agent: an isolated entity with its own workspace, agentDir, sessions, model, and toolset
💬 Session: a conversation context tied to a specific user/group/channel
🔀 Bindings: routing rules that map incoming messages to a specific agent
🔒 dmScope: the DM session isolation mode

Session Model

OpenClaw supports several session.dmScope modes:

Mode | Session Key Format | Use Case
main (default) | agent:<agentId>:main | All DMs in one session (personal assistant)
per-peer | agent:<agentId>:dm:<peerId> | Isolation by sender
per-channel-peer | agent:<agentId>:<channel>:dm:<peerId> | Isolation by channel + sender
per-account-channel-peer | agent:<agentId>:<channel>:<accountId>:dm:<peerId> | Full isolation (multi-account)
💡 Recommendation
For ProTechBot, use per-channel-peer mode — each Telegram user gets an isolated session with their own history.
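The key formats in the table above can be made concrete with a small helper. This is an illustration of the scheme, not OpenClaw's actual internals; the function name and arguments are assumptions:

```python
def session_key(agent_id: str, dm_scope: str, channel: str = "",
                account_id: str = "", peer_id: str = "") -> str:
    """Build a session key following the dmScope formats above (illustrative)."""
    if dm_scope == "main":
        return f"agent:{agent_id}:main"
    if dm_scope == "per-peer":
        return f"agent:{agent_id}:dm:{peer_id}"
    if dm_scope == "per-channel-peer":
        return f"agent:{agent_id}:{channel}:dm:{peer_id}"
    if dm_scope == "per-account-channel-peer":
        return f"agent:{agent_id}:{channel}:{account_id}:dm:{peer_id}"
    raise ValueError(f"unknown dmScope: {dm_scope}")

# Under per-channel-peer, every Telegram user gets a distinct key:
print(session_key("protech", "per-channel-peer",
                  channel="telegram", peer_id="123456789"))
# → agent:protech:telegram:dm:123456789
```

Two different peer IDs can never collide under per-channel-peer, which is exactly the isolation property ProTechBot needs.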

Multi-Agent Routing

You can run multiple agents in a single Gateway:

{
  agents: {
    list: [
      { id: "protech", workspace: "~/protech-workspace", default: true },
      { id: "admin", workspace: "~/.openclaw/workspace" },
    ],
  },
  bindings: [
    // Fedor → admin agent (full access)
    { agentId: "admin", match: { channel: "telegram",
      peer: { kind: "direct", id: "224153441" } } },
    // All others → protech agent (restricted)
    { agentId: "protech", match: { channel: "telegram" } },
  ],
}

This means: Fedor talks to the full-featured Jarvis, while all other users interact with a restricted ProTechBot agent.

Lane-Aware Queue

Messages are processed through a FIFO queue with a concurrency cap:

⚠️ Critical Issue (Issue #16055, 02/14/2026)

A bottleneck was discovered with multiple bots/agents: all messages pass through a shared queue serially, causing 1-2 minute delays when different agents receive simultaneous requests. This is a recent bug and may have been fixed upstream by the time you read this.
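The lane mechanics can be sketched with an asyncio semaphore. This illustrates a FIFO lane with a concurrency cap, not OpenClaw's actual queue code; with the cap at 1 it degenerates into the serial behavior described in the issue:

```python
import asyncio

async def run_lane(tasks, max_concurrent: int = 4):
    """Process queued work with a concurrency cap, mimicking a lane."""
    sem = asyncio.Semaphore(max_concurrent)
    results = []

    async def worker(i, coro):
        async with sem:                 # at most max_concurrent run at once
            results.append((i, await coro))

    await asyncio.gather(*(worker(i, t) for i, t in enumerate(tasks)))
    return [r for _, r in sorted(results)]  # restore submission order

async def fake_llm_call(msg):
    await asyncio.sleep(0.01)  # stand-in for a 3-30 s LLM round-trip
    return f"reply:{msg}"

replies = asyncio.run(run_lane([fake_llm_call(m) for m in ["a", "b", "c"]]))
print(replies)  # → ['reply:a', 'reply:b', 'reply:c']
```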

2. Hardware Requirements

Mac Mini M4 Pro (24GB RAM, 512GB SSD)

OpenClaw Gateway is a Node.js process. Main resource consumption:

Component | RAM | CPU | Notes
Gateway process | ~200-400 MB | Minimal | Event loop, routing
Session (in-memory context) | ~5-20 MB | | Depends on context length
Session store (disk) | ~50-500 KB/session | | JSONL files
Docker sandbox (if enabled) | ~100-300 MB/container | Variable | Exec isolation
LLM API calls | 0 MB (remote) | 0 CPU | All compute runs on the provider's side

Scenario Calculations

Important: OpenClaw does NOT run LLMs locally — it sends requests to API providers. The main load is therefore network + session state.

Scenario | Concurrent Sessions | RAM | Can Handle?
10 users | 2-3 concurrent | ~1 GB | ✅ Easily
50 users | 5-10 concurrent | ~2 GB | ✅ Yes
100 users | 10-20 concurrent | ~3-4 GB | ✅ Yes
500 users | 30-50 concurrent | ~5-8 GB | ⚠️ With caveats
💡 Caveats for 500 users
The bottleneck is NOT RAM but Gateway concurrency: the default main lane allows 4 concurrent requests. With 50 simultaneous requests the queue builds up, so maxConcurrent needs tuning to 10-20. Each LLM request takes 3-30 seconds (depending on model/provider).

Verdict: a Mac Mini M4 Pro with 24 GB can handle 500 users resource-wise, but the bottleneck will be processing concurrency and LLM API latency.
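A back-of-envelope helper makes the caveat concrete; the latency and burst figures are the assumptions stated above, not measurements:

```python
import math

def queue_drain_seconds(pending: int, max_concurrent: int,
                        avg_latency_s: float) -> float:
    """Rough time to drain a burst of pending LLM requests:
    batches of max_concurrent, each taking avg_latency_s."""
    return math.ceil(pending / max_concurrent) * avg_latency_s

# 50 simultaneous requests, ~10 s per LLM call:
print(queue_drain_seconds(50, 4, 10.0))   # → 130.0 (default lane: minutes of lag)
print(queue_drain_seconds(50, 16, 10.0))  # → 40.0  (after raising maxConcurrent)
```

The model ignores overlap within batches, so it slightly overestimates, but the conclusion holds: at the default concurrency of 4, a burst drains in minutes regardless of available RAM.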

3. Session Management

Context Storage

Sessions are stored on disk:

~/.openclaw/agents/<agentId>/sessions/sessions.json  — index
~/.openclaw/agents/<agentId>/sessions/*.jsonl         — full history

Format is JSONL (JSON Lines): each turn (user/assistant/tool) is a separate line. Files are never modified, only appended to (append-only).
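A minimal sketch of such an append-only JSONL store (illustrative, not OpenClaw's actual reader/writer):

```python
import json
import os
import tempfile

def append_turn(path: str, role: str, content: str) -> None:
    """Append one turn as a JSON line; the file is only ever appended to."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps({"role": role, "content": content}) + "\n")

def load_history(path: str) -> list[dict]:
    """Rebuild a session by replaying every line of its JSONL file."""
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]

path = os.path.join(tempfile.mkdtemp(), "session.jsonl")
append_turn(path, "user", "hello")
append_turn(path, "assistant", "hi!")
print([t["role"] for t in load_history(path)])  # → ['user', 'assistant']
```

Append-only writes are crash-safe (a torn write corrupts at most the last line) and make the on-disk history a faithful audit log.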

Memory per User

OpenClaw has two memory levels:

💭 Session memory (in-context): the current session history sent to the LLM. Limited by contextTokens (default 200K), with automatic pruning of old tool results.
📁 Workspace memory (file-based): the agent can read/write files in the workspace. This is the primary long-term memory mechanism (MEMORY.md, daily notes, etc.)
⚠️ Problem for Multi-User

In the current OpenClaw architecture the workspace is shared by all sessions of one agent. That means: all 500 ProTechBot users share a single workspace, there is no built-in per-user folder mechanism, and the agent can accidentally read or write another user's data.

Solutions for Personal Memory

Option A: No Personal Workspace
In-session context only. Session reset once a day. Personal data lost on reset. ❌ Not suitable for "remembering each user".
Option B: File Structure (hack) ✅
A users/{peerId}/ folder inside the shared workspace. The agent receives system-prompt instructions to read files only from there. ⚠️ No isolation guarantee.
Option C: Agent per User (heavy)
Dynamic agent creation for each user. ❌ Doesn't scale: 500 agents = 500 config entries. OpenClaw doesn't support dynamic creation.
Option D: Memory Tool (experimental)
SQLite-backed index with FTS5 search. Per-entity pages, temporal queries. ⚠️ Status: research/experimental, not production-ready.
💚 Recommendation

Option B + strict system prompt. This is sufficient for an MVP.

~/protech-workspace/
  users/
    tg_123456789/
      profile.md
      history.md
    tg_987654321/
      profile.md
      history.md
  shared/
    knowledge-base/
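If Option B is adopted, any tooling built around this layout should validate peer IDs rather than trust them. A sketch of a path helper with the hard check the prompt-only hack lacks (hypothetical, not part of OpenClaw):

```python
from pathlib import Path

def user_dir(workspace: str, peer_id: str) -> Path:
    """Resolve a per-user folder, rejecting path traversal in peer_id.
    Telegram peer IDs are numeric, so anything non-alphanumeric
    (e.g. '../') is refused outright."""
    if not peer_id or not peer_id.isalnum():
        raise ValueError(f"suspicious peer id: {peer_id!r}")
    return Path(workspace) / "users" / f"tg_{peer_id}"

print(user_dir("~/protech-workspace", "123456789"))
```

user_dir("/ws", "../other") raises ValueError instead of escaping the users/ tree; the system prompt alone cannot give that guarantee.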

4. Shared Knowledge Base + Personal Memory

RAG Architecture on OpenClaw

OpenClaw doesn't have built-in RAG. But there are several approaches:

📂 Approach 1: File-Based RAG via Workspace. The agent reads files via the read tool; the system prompt says "For video production questions, read files from knowledge/". Simple, works, easy to update. Downside: doesn't scale to large knowledge bases (100+ documents).
🔧 Approach 2: Skill with External RAG. Write an OpenClaw skill (bash/Python wrapper) that calls an external RAG engine (ChromaDB, Qdrant, etc.). Scalable, real vector search. Downside: needs development and maintenance.
🧪 Approach 3: Memory Tool (when ready). SQLite + FTS5 + optional embeddings; a unified system for shared knowledge + personal memory, with temporal queries and entity summaries. ⚠️ Still experimental.
💡 Recommendation for getting started
Approach 1 (file-based), with gradual migration to Approach 2.
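The spirit of Approach 1 fits in a few lines of keyword scoring. In a real deployment the agent's read tool does this under system-prompt guidance, and Approach 2 replaces the scoring with vector search; this sketch just shows why the file-based version is easy and where it stops scaling:

```python
import tempfile
from pathlib import Path

def find_relevant(knowledge_dir: str, query: str, top_k: int = 3) -> list[Path]:
    """Naive keyword scoring over knowledge files: count query terms
    in each document and return the best matches (illustrative)."""
    terms = {t.lower() for t in query.split()}
    scored = []
    for p in sorted(Path(knowledge_dir).rglob("*.md")):
        text = p.read_text(encoding="utf-8").lower()
        score = sum(text.count(t) for t in terms)
        if score:
            scored.append((score, p))
    scored.sort(key=lambda s: s[0], reverse=True)
    return [p for _, p in scored[:top_k]]

base = tempfile.mkdtemp()
Path(base, "lighting.md").write_text("three-point lighting setup for video")
Path(base, "audio.md").write_text("lavalier microphone placement")
print([p.name for p in find_relevant(base, "lighting setup")])  # → ['lighting.md']
```

Every query re-reads every file, which is fine for dozens of documents and hopeless for thousands; that is the 100+ document ceiling mentioned above.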

5. Security and Isolation

Prompt Injection Protection

OpenClaw has multi-layered protection:

🛡️ 1. Tool Policy: restrict available tools, e.g. allow: ["read", "web_search", "session_status"], deny: ["write", "edit", "exec", "process", "browser", "nodes"]
📦 2. Sandbox: Docker exec isolation. sandbox.mode: "all", scope: "session" gives a separate container per session; workspaceAccess: "ro"
🚪 3. DM Policy: access control. allowFrom: a whitelist of user IDs; dmPolicy: "open" lets anyone message (for a public bot)
🔐 4. Session Isolation: dmScope: "per-channel-peer" isolates sessions

User-to-User Isolation

Aspect | Isolation Level | Details
Session history | ✅ Full | dmScope per-channel-peer
Workspace files | ⚠️ Partial | Shared workspace, isolation via prompt only
Tool execution | ✅ Full (sandbox) | Docker per session
LLM context | ✅ Full | Each session = separate API call
Logs on disk | ⚠️ Admin-accessible | JSONL per session, but same OS user

Risks

🚨 Prompt injection → file access: a user could ask "read file users/tg_other_user/profile.md". Protection: deny the read tool, or sandbox with workspaceAccess: "none"
🚨 Prompt injection → tool abuse: without a sandbox, a user could coax the agent into executing a shell command. Protection: deny exec + sandbox mode "all"
🚨 Context leakage: with dmScope: main, all users share one context. Protection: per-channel-peer
💚 Recommendation for a public bot

1. dmScope: "per-channel-peer" (mandatory)
2. sandbox.mode: "all" (mandatory)
3. tools.deny: ["exec", "process", "write", "edit", "browser", "nodes"] (mandatory)
4. Keep only: read (for the knowledge base), web_search, session_status

6. Admin Access

How the Admin Manages the Bot

Option 1: Two Agents via Bindings (recommended). The admin is bound to the admin agent with full access; everyone else routes to protech. The admin agent can read all session logs, send messages to users, spawn sub-agents for analytics, and manage configuration.
💻 Option 2: CLI Access. openclaw sessions (list sessions), openclaw sessions --active 60 (sessions active in the last hour), openclaw agent --session-id <id> --message "..."
🌐 Option 3: Control UI. Web interface at http://127.0.0.1:18789/ for viewing sessions, logs, and configuration. Requires a gateway token.

Monitoring

# Session logs
~/.openclaw/agents/<agentId>/sessions/*.jsonl

# Gateway logs
openclaw logs --tail

# Security audit
openclaw security audit --deep

# Health check
openclaw health

7. Scaling

Current Limitations

Limitation | Value | Configurable?
Main lane concurrency | 4 (default) | maxConcurrent
Subagent concurrency | 8 (default) |
Queue cap | 20 messages | messages.queue.cap
Context tokens | 200K (default) | contextTokens
Gateway instances | 1 per host | ❌ (single process)

Scaling Paths

Stage 1: One Mac Mini (10-100 users)
Increase maxConcurrent to 10-15. Use fast models (Gemini Flash, DeepSeek V3.2). Optimize prompts (shorter = faster). Session reset on idle (30-60 min).
Stage 2: Docker on a VPS (100-500 users)
Dedicated VPS: 8 vCPU, 32GB RAM, ~€40/mo. Docker Compose with OpenClaw Gateway. More stable uptime than Mac Mini.
Stage 3: Multiple Gateways (500+ users)
Each Gateway serves a subset of users. However: one Telegram bot token = one Gateway webhook. Need a load balancer (tricky with Telegram).
💡 Alternative Approach for 500+
Stop using OpenClaw as the sole engine. OpenClaw = admin/orchestrator, aiogram = user frontend. Communication between them via API.

8. Cost Model

LLM API Costs per User

Average ProTechBot user: ~5-20 messages/day, ~500-2000 tokens per turn.

Model | Input $/1M tok | Output $/1M tok | Cost/user/day | 100 users/mo | 500 users/mo
DeepSeek V3.2 | $0.028 | $0.28 | ~$0.005 | ~$15 | ~$75
Gemini 3 Flash | $0.10 | $0.40 | ~$0.002 | ~$6 | ~$30
Gemini Flash (free tier) | $0 | $0 | $0 | $0 | ⚠️ Rate limits
GPT-5.3 mini | $0.15 | $0.60 | ~$0.003 | ~$9 | ~$45
Claude Haiku 4 | $0.80 | $4.00 | ~$0.015 | ~$45 | ~$225
Claude Sonnet 4.5 | $3.00 | $15.00 | ~$0.06 | ~$180 | ~$900
Qwen 3.5 72B (free/OR) | $0 | $0 | $0 | $0 | ⚠️ Rate limits
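The per-user figures above can be sanity-checked with simple arithmetic; the per-turn token split below is an assumption within the ranges stated earlier (~10 messages/day, ~1000 tokens per turn), not a measurement:

```python
def monthly_llm_cost(users: int, msgs_per_day: float, in_tok: int, out_tok: int,
                     in_price: float, out_price: float) -> float:
    """Monthly LLM spend in USD; prices are per 1M tokens."""
    per_msg = (in_tok * in_price + out_tok * out_price) / 1e6
    return users * msgs_per_day * 30 * per_msg

# Gemini 3 Flash row: 500 users, ~10 msgs/day, ~750 input / 250 output tokens per turn
print(round(monthly_llm_cost(500, 10, 750, 250, 0.10, 0.40), 2))  # → 26.25
```

That lands close to the ~$30/mo in the table; actual spend scales roughly linearly with message volume and context length.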
💚 Recommendation

MVP/testing: Gemini 3 Flash or DeepSeek V3.2 (~$30-75/mo for 500 users)
Production: DeepSeek V3.2 as primary + Gemini Flash as fallback
Premium tier: Claude Haiku for paid users

Additional Costs

Item | Cost/month
VPS (Hetzner CAX31) | ~€15-40
Mac Mini electricity | ~$5-10
Domain/SSL | ~$1-2
Monitoring | $0 (self-hosted)
Total (excl. LLM) | ~$20-50

9. Comparison with Alternatives

Criterion | OpenClaw | aiogram + Redis | Botpress | Rasa | Custom Node.js
Setup complexity | Medium | Low | Low (cloud) | High | High
Multi-user sessions | ✅ Built-in | ✅ Redis sessions | ✅ Built-in | ✅ Built-in | 🔧 Manual
Per-user memory | ⚠️ Via prompt | ✅ Redis/DB | ✅ Built-in | ⚠️ Limited | 🔧 Any DB
RAG | ⚠️ Via skill | 🔧 Manual | ✅ Built-in | ⚠️ Limited | 🔧 Any
Prompt injection | ✅ Tool policy + sandbox | ❌ Manual | ✅ Built-in | ✅ Built-in | ❌ Manual
Admin panel | ✅ Control UI + CLI | ❌ Needs building | ✅ Web UI | ✅ Web UI | ❌ Needs building
Scaling >500 | ⚠️ Difficult | ✅ Horizontally | ✅ Cloud-native | ✅ Kubernetes | ✅ Horizontally
Cost (infra) | $0 (self-hosted) | $5-20/mo | $500+/mo | $0 (self-hosted) | $5-20/mo
LLM flexibility | ✅ Any provider | ✅ Any API | ⚠️ Limited | ⚠️ Own models | ✅ Any API
Tool execution | ✅ Shell, browser | ❌ Code only | ⚠️ Actions | ⚠️ Custom actions | 🔧 Any
Telegram integration | ✅ Native channel | ✅ aiogram (best-in-class) | ✅ Connector | ⚠️ REST webhook | 🔧 Bot API
Sub-agents | ✅ Built-in | ❌ No | ⚠️ Workflows | ❌ No | 🔧 Manual
Maturity | ⚠️ Rapidly evolving | ✅ Stable | ✅ Production | ✅ Production |

Detailed Breakdown

🐍 aiogram + Redis + LangChain (current stack): ✅ Full control and the best Telegram library. ✅ Scales horizontally. ❌ No sandbox or tool policy; everything must be written by hand. ❌ No admin UI, sub-agents, or memory management.
🤖 Botpress: ✅ Visual flow builder, built-in RAG, knowledge base. ❌ Cloud-only for production ($500+/mo). ❌ Limited LLM selection. ❌ Vendor lock-in.
🏗️ Rasa: ✅ Open-source, NLU pipeline. ❌ Complex setup and an aging approach (intent-based vs LLM-based). ❌ Not designed for LLM-first bots.
⚙️ Custom Node.js: ✅ Maximum flexibility. ❌ Everything from scratch: sessions, memory, admin, security. ❌ Months of development.

10. Real-World Community Experience

GitHub Issues (OpenClaw)

🐛 Issue #16055 (02/14/2026): "Message Processing Bottleneck with Multiple Telegram Bots". 5 independent Telegram bots on one Gateway; all messages processed serially through a shared queue; 1-2 minute delays with simultaneous requests. Status: Open, under discussion.
💡 Issue #1159: "Feature Request: Parallel Session Processing". Request for parallel session processing; Telegram topics/channels block each other.
📝 Gist (digitalknk): "Running OpenClaw Without Burning Money". Advice: "Treat OpenClaw like infrastructure, not a chatbot." A cheap model as coordinator, expensive models for the real work.

Key Community Takeaways

1. OpenClaw works for multi-user, but it's not its primary use case
2. OpenClaw's primary use case is a personal AI assistant (1 user, many channels)
3. Multi-user requires careful security + session isolation configuration
4. Concurrency is the main bottleneck when scaling
5. Nobody in the community has run OpenClaw with 500+ users (yet)

11. Verdict and Recommendations

Can OpenClaw Be Used for ProTechBot?

Short answer: YES, but with caveats.

Pros of OpenClaw for ProTechBot

Session isolation out of the box — dmScope per-channel-peer
Security — tool policy, sandbox, DM policy
Admin access — CLI, Control UI, sessions API
Sub-agents — can be spawned for complex tasks
Multi-model — easy to switch/fallback models
Already running — Jarvis is already on OpenClaw, we know the system

Cons of OpenClaw for ProTechBot

Concurrency bottleneck — single-process, limited lane concurrency
No built-in per-user workspace — only a file-based hack
No built-in RAG — needs a skill or external service
Scaling beyond 200 users is terra incognita; nobody has tried it
No dynamic agent creation — config = static
No analytics/dashboard for users — needs to be built

Recommended Architecture: Hybrid Approach

┌─────────────┐     ┌──────────────────┐     ┌─────────────┐
│  Telegram   │────▶│  aiogram Router  │────▶│  OpenClaw   │
│   Users     │◀────│     (Python)     │◀────│   Gateway   │
│   (500+)    │     │                  │     │ (AI Engine) │
└─────────────┘     └──────────────────┘     └─────────────┘
                              │                      │
                        ┌─────▼─────┐         ┌──────▼──────┐
                        │   Redis   │         │  Workspace  │
                        │ Sessions  │         │ + Knowledge │
                        │  + Queue  │         │    Base     │
                        └───────────┘         └─────────────┘
🐍 aiogram Handles
Telegram webhook (fast, scalable). User sessions (Redis). Rate limiting, anti-spam. Simple commands (/start, /help, menu). Message queue to OpenClaw.
🧠 OpenClaw Handles
AI reasoning (via the LLM API). Knowledge base queries. Complex multi-step tasks. Admin functions. Sub-agents for analytics.

Configuration for Pure OpenClaw (MVP up to 50 users)

{
  session: {
    dmScope: "per-channel-peer",
    reset: { mode: "idle", idleMinutes: 120 },
  },
  agents: {
    defaults: {
      model: { primary: "deepseek/deepseek-chat" },
      maxConcurrent: 10,
      contextTokens: 100000,
    },
    list: [
      {
        id: "protech",
        default: true,
        workspace: "~/protech-bot/workspace",
        sandbox: { mode: "all", scope: "session", workspaceAccess: "ro" },
        tools: {
          allow: ["read", "web_search", "session_status"],
          deny: ["write", "edit", "exec", "process", "browser",
                 "nodes", "canvas"],
        },
      },
      {
        id: "admin",
        workspace: "~/.openclaw/workspace",
      },
    ],
  },
  bindings: [
    { agentId: "admin", match: { channel: "telegram",
      peer: { kind: "direct", id: "224153441" } } },
  ],
  channels: {
    telegram: {
      dmPolicy: "open",
    },
  },
}

Roadmap

Stage | Users | Stack | Timeline
MVP | 10-50 | Pure OpenClaw | 1-2 weeks
v1.0 | 50-200 | OpenClaw + optimization | 1-2 months
v2.0 | 200-500 | Hybrid aiogram + OpenClaw | 2-3 months
v3.0 | 500+ | aiogram frontend + OpenClaw AI engine | 3-6 months
💚 Final Recommendation

Start with pure OpenClaw (MVP, 10-50 users). It is the fastest path to a working bot (config + prompt), produces real data on load and usage patterns, reuses the existing infrastructure (Mac Mini, Jarvis), and shows whether a hybrid is needed or OpenClaw can cope on its own.

If we hit its limits (concurrency, personalization, scaling), migrate to the hybrid architecture. The aiogram frontend is already written (the current ProTechBot); only OpenClaw needs to be added as the AI backend.

Appendices

A. Useful Configuration Commands

# Set dmScope
openclaw config set session.dmScope "per-channel-peer"

# Open DMs to everyone
openclaw config set channels.telegram.dmPolicy "open"

# Increase concurrency
openclaw config set agents.defaults.maxConcurrent 10

# Security audit
openclaw security audit --deep

# List active sessions
openclaw sessions --active 60

# Logs
openclaw logs --tail --level info
