1. OpenClaw Architecture for Multi-User Operation
How It Works
OpenClaw is an AI Gateway that receives messages from channels (Telegram, WhatsApp, Discord, etc.) and routes them to agents. Key abstractions:
Session Model
OpenClaw supports several `session.dmScope` modes:
| Mode | Session Key Format | Use Case |
|---|---|---|
| main (default) | agent:&lt;agentId&gt;:main | All DMs in one session (personal assistant) |
| per-peer | agent:&lt;agentId&gt;:dm:&lt;peerId&gt; | Isolation by sender |
| per-channel-peer | agent:&lt;agentId&gt;:&lt;channel&gt;:dm:&lt;peerId&gt; | Isolation by channel + sender |
| per-account-channel-peer | agent:&lt;agentId&gt;:&lt;channel&gt;:&lt;accountId&gt;:dm:&lt;peerId&gt; | Full isolation (multi-account) |
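The key formats above can be sketched as a small helper. This is illustrative only — the format strings come from the table, but the function itself is not part of the OpenClaw API:

```python
def session_key(agent_id: str, mode: str, channel: str = "",
                account_id: str = "", peer_id: str = "") -> str:
    """Build a session key for a given dmScope mode (formats from the table above)."""
    if mode == "main":
        return f"agent:{agent_id}:main"
    if mode == "per-peer":
        return f"agent:{agent_id}:dm:{peer_id}"
    if mode == "per-channel-peer":
        return f"agent:{agent_id}:{channel}:dm:{peer_id}"
    if mode == "per-account-channel-peer":
        return f"agent:{agent_id}:{channel}:{account_id}:dm:{peer_id}"
    raise ValueError(f"unknown dmScope: {mode}")

# Two Telegram users under per-channel-peer land in distinct sessions:
print(session_key("protech", "per-channel-peer", channel="telegram", peer_id="123"))
# → agent:protech:telegram:dm:123
```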
In per-channel-peer mode, each Telegram user gets an isolated session with their own history.
Multi-Agent Routing
You can run multiple agents in a single Gateway:
{
agents: {
list: [
{ id: "protech", workspace: "~/protech-workspace", default: true },
{ id: "admin", workspace: "~/.openclaw/workspace" },
],
},
bindings: [
// Fedor → admin agent (full access)
{ agentId: "admin", match: { channel: "telegram",
peer: { kind: "direct", id: "224153441" } } },
// All others → protech agent (restricted)
{ agentId: "protech", match: { channel: "telegram" } },
],
}
This means: Fedor talks to the full-featured Jarvis, while all other users interact with a restricted ProTechBot agent.
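The binding logic can be sketched as follows. This assumes first-matching-binding-wins semantics with a fallback to the agent marked `default: true`; it is an illustrative model, not OpenClaw's actual resolver:

```python
def route(bindings: list[dict], agents: list[dict],
          channel: str, peer_id: str) -> str:
    """Return the agentId for an incoming message.

    Assumption: bindings are checked in order, first match wins;
    unmatched messages fall back to the default agent.
    """
    for b in bindings:
        m = b["match"]
        if m.get("channel") != channel:
            continue
        peer = m.get("peer")
        if peer is None or peer.get("id") == peer_id:
            return b["agentId"]
    return next(a["id"] for a in agents if a.get("default"))

bindings = [
    {"agentId": "admin", "match": {"channel": "telegram",
                                   "peer": {"kind": "direct", "id": "224153441"}}},
    {"agentId": "protech", "match": {"channel": "telegram"}},
]
agents = [{"id": "protech", "default": True}, {"id": "admin"}]

print(route(bindings, agents, "telegram", "224153441"))  # admin
print(route(bindings, agents, "telegram", "555"))        # protech
```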
Lane-Aware Queue
Messages are processed through a FIFO queue with a concurrency cap:
- Main lane: default concurrency = 4
- Subagent lane: default concurrency = 8
- Configurable via agents.defaults.maxConcurrent
A bottleneck was discovered when running multiple bots/agents: all messages pass through a shared queue serially, producing delays of 1-2 minutes under simultaneous requests to different agents. This is a recently reported bug and may be fixed in newer releases.
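The lane behaviour can be approximated with one semaphore per lane — a minimal sketch of the concurrency-cap idea, not OpenClaw's actual queue implementation:

```python
import asyncio

async def run_lane(name: str, jobs, concurrency: int):
    """Process a lane's jobs with a concurrency cap (mirrors main=4, subagent=8)."""
    sem = asyncio.Semaphore(concurrency)
    results = []

    async def worker(job_id):
        async with sem:                 # at most `concurrency` jobs in flight
            await asyncio.sleep(0.01)   # stand-in for an LLM call
            results.append((name, job_id))

    await asyncio.gather(*(worker(j) for j in jobs))
    return results

async def main():
    # Both lanes run independently; within a lane the cap throttles throughput.
    return await asyncio.gather(
        run_lane("main", range(8), concurrency=4),
        run_lane("subagent", range(8), concurrency=8),
    )

main_lane, sub_lane = asyncio.run(main())
print(len(main_lane), len(sub_lane))  # 8 8
```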
2. Hardware Requirements
Mac Mini M4 Pro (24GB RAM, 512GB SSD)
OpenClaw Gateway is a Node.js process. Main resource consumption:
| Component | RAM | CPU | Notes |
|---|---|---|---|
| Gateway process | ~200-400 MB | Minimal | Event loop, routing |
| Session (in-memory context) | ~5-20 MB | — | Depends on context length |
| Session store (disk) | ~50-500 KB/session | — | JSONL files |
| Docker sandbox (if enabled) | ~100-300 MB/container | Variable | Exec isolation |
| LLM API calls | 0 MB (remote) | 0 CPU | All load is on the provider |
Scenario Calculations
Important: OpenClaw does NOT run LLMs locally — it sends requests to API providers. The main load is therefore network + session state.
| Scenario | Concurrent Sessions | RAM | Can Handle? |
|---|---|---|---|
| 10 users | 2-3 concurrent | ~1 GB | ✅ Easily |
| 50 users | 5-10 concurrent | ~2 GB | ✅ Yes |
| 100 users | 10-20 concurrent | ~3-4 GB | ✅ Yes |
| 500 users | 30-50 concurrent | ~5-8 GB | ⚠️ With caveats |
For 500 users, raise maxConcurrent to 10-20. Each LLM request takes 3-30 seconds (depending on model/provider). Verdict: a Mac Mini M4 Pro with 24 GB RAM can handle 500 users resource-wise, but the bottleneck will be processing concurrency and LLM API speed.
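A back-of-envelope RAM estimate using the per-component figures from the table above. This is a worst-case sketch that assumes every concurrent session gets its own Docker sandbox container; lighter configurations will sit well below it:

```python
def estimate_ram_gb(concurrent_sessions: int,
                    gateway_mb: int = 400,   # Gateway process (upper bound)
                    session_mb: int = 20,    # in-memory context per session
                    sandbox_mb: int = 300) -> float:  # Docker container per session
    """Worst-case RAM estimate in GB for N concurrent sessions."""
    total_mb = gateway_mb + concurrent_sessions * (session_mb + sandbox_mb)
    return round(total_mb / 1024, 1)

for sessions in (3, 10, 20):
    print(sessions, estimate_ram_gb(sessions), "GB")
```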
3. Session Management
Context Storage
Sessions are stored on disk:
~/.openclaw/agents/&lt;agentId&gt;/sessions/sessions.json — index
~/.openclaw/agents/&lt;agentId&gt;/sessions/*.jsonl — full history
Format is JSONL (JSON Lines): each turn (user/assistant/tool) is a separate line. Files are never modified, only appended to (append-only).
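A minimal sketch of the append-only JSONL pattern. Only the "one JSON object per line, append-only" property comes from the description above — the field names (`role`, `content`) are illustrative assumptions, not OpenClaw's actual schema:

```python
import json, os, tempfile

# Hypothetical session turns.
turns = [
    {"role": "user", "content": "hi"},
    {"role": "assistant", "content": "hello"},
]

path = os.path.join(tempfile.mkdtemp(), "session.jsonl")
with open(path, "a", encoding="utf-8") as f:   # append-only: never rewritten
    for turn in turns:
        f.write(json.dumps(turn) + "\n")

# Reading back: one JSON object per line.
with open(path, encoding="utf-8") as f:
    history = [json.loads(line) for line in f]
print(len(history))  # 2
```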
Memory per User
OpenClaw has two memory levels:
1. Session context: up to contextTokens (default 200K), with automatic pruning of old tool results.
2. The agent workspace on disk.
In the current OpenClaw architecture, the workspace is shared by all sessions of a single agent. This means: all 500 ProTechBot users share one workspace, there is no built-in "folder per user" mechanism, and the agent can accidentally read or write another user's data.
Solutions for Personal Memory
Create users/{peerId}/ subfolders in the shared workspace; the agent receives system-prompt instructions to read files from there. ⚠️ No isolation guarantee. Recommendation: Option B + a strict system prompt — sufficient for an MVP. Example layout:
~/protech-workspace/
users/
tg_123456789/
profile.md
history.md
tg_987654321/
profile.md
history.md
shared/
knowledge-base/
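Mapping a peer id onto its memory folder can be done defensively, so that a hostile id cannot escape `users/`. A minimal sketch under the layout above; the sanitization rule is my assumption, not an OpenClaw feature:

```python
import re
from pathlib import PurePosixPath

def user_dir(workspace: str, peer_id: str) -> PurePosixPath:
    """Map a peer id to its per-user memory folder, neutralizing path traversal."""
    # Replace anything outside [A-Za-z0-9_-] so "../" tricks cannot escape users/.
    safe = re.sub(r"[^A-Za-z0-9_-]", "_", peer_id)
    return PurePosixPath(workspace) / "users" / safe

print(user_dir("~/protech-workspace", "tg_123456789"))
print(user_dir("~/protech-workspace", "../../etc"))  # traversal neutralized
```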
4. Shared Knowledge Base + Personal Memory
RAG Architecture on OpenClaw
OpenClaw doesn't have built-in RAG. But there are several approaches:
The simplest approach: file reading via the read tool, with a system prompt such as "For video production questions — read files from knowledge/". Simple, it works, and it is easy to update. Downside: it doesn't scale to large knowledge bases (100+ documents).
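For larger bases, a pre-retrieval step outside the agent can narrow down which files to feed to the read tool. A naive keyword-overlap sketch (since OpenClaw has no built-in RAG, this would run as an external script; the scoring and `.md` layout are illustrative assumptions):

```python
from pathlib import Path

def top_k_files(knowledge_dir: str, query: str, k: int = 3):
    """Rank knowledge files by naive keyword overlap with the query."""
    q = set(query.lower().split())
    scored = []
    for p in Path(knowledge_dir).glob("**/*.md"):
        words = set(p.read_text(encoding="utf-8").lower().split())
        scored.append((len(q & words), p))
    scored.sort(key=lambda t: t[0], reverse=True)
    # Keep only files that matched at least one query word.
    return [p for score, p in scored[:k] if score > 0]
```

In production this would be replaced by embeddings-based retrieval, but keyword overlap is enough to demonstrate the shape of the pipeline.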
5. Security and Isolation
Prompt Injection Protection
OpenClaw has multi-layered protection:
- Tool policy: allow: ["read", "web_search", "session_status"], deny: ["write", "edit", "exec", "process", "browser", "nodes"]
- Sandboxing: sandbox.mode: "all", scope: "session" — a separate container per session, with workspaceAccess: "ro"
- Access control: allowFrom: a whitelist of user IDs; dmPolicy: "open" — anyone can write (for a public bot)
- Session isolation: dmScope: "per-channel-peer" isolates sessions
User-to-User Isolation
| Aspect | Isolation Level | Details |
|---|---|---|
| Session history | ✅ Full | dmScope per-channel-peer |
| Workspace files | ⚠️ Partial | Shared workspace, isolation via prompt |
| Tools execution | ✅ Full (sandbox) | Docker per-session |
| LLM context | ✅ Full | Each session = separate API call |
| Logs on disk | ⚠️ Admin accessible | JSONL per-session, but same OS user |
Risks
- Workspace data exposure — restrict the read tool, or run the sandbox with workspaceAccess: "none"
- Command execution — allow exec only together with sandbox mode "all"
- Context leakage — with dmScope: main, all users share one context. Protection: per-channel-peer
Mandatory settings for a public bot:
1. dmScope: "per-channel-peer" — mandatory
2. sandbox.mode: "all" — mandatory
3. tools.deny: ["exec", "process", "write", "edit", "browser", "nodes"] — mandatory
4. Keep only: read (for the knowledge base), web_search, session_status
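An allow/deny policy like the one above can be modeled in a few lines. This assumes deny takes precedence over allow (a common convention; OpenClaw's actual precedence rules may differ):

```python
def tool_allowed(tool: str, allow: list[str], deny: list[str]) -> bool:
    """Evaluate an allow/deny tool policy; deny wins (assumed precedence)."""
    if tool in deny:
        return False
    # An empty allow list means "everything not denied is permitted".
    return not allow or tool in allow

allow = ["read", "web_search", "session_status"]
deny = ["write", "edit", "exec", "process", "browser", "nodes"]
print(tool_allowed("read", allow, deny))   # True
print(tool_allowed("exec", allow, deny))   # False
```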
6. Admin Access
How the Admin Manages the Bot
A dedicated admin agent with full access; everyone else goes to protech. The admin agent can: read the logs of all sessions, send messages to users, spawn sub-agents for analytics, and manage configuration.
CLI:
- openclaw sessions — list sessions
- openclaw sessions --active 60 — sessions active in the last hour
- openclaw agent --session-id &lt;id&gt; --message "..."
Control UI at http://127.0.0.1:18789/ — view sessions, logs, and configuration. Requires the gateway token.
Monitoring
# Session logs
~/.openclaw/agents/&lt;agentId&gt;/sessions/*.jsonl
# Gateway logs
openclaw logs --tail
# Security audit
openclaw security audit --deep
# Health check
openclaw health
7. Scaling
Current Limitations
| Limitation | Value | Configurable? |
|---|---|---|
| Main lane concurrency | 4 (default) | ✅ maxConcurrent |
| Subagent concurrency | 8 (default) | ✅ |
| Queue cap | 20 messages | ✅ messages.queue.cap |
| Context tokens | 200K (default) | ✅ contextTokens |
| Gateway instances | 1 per host | ❌ (single process) |
Scaling Paths
Raise maxConcurrent to 10-15. Use fast models (Gemini Flash, DeepSeek V3.2). Optimize prompts (shorter = faster). Reset sessions on idle (30-60 min).
8. Cost Model
LLM API Costs per User
Average ProTechBot user: ~5-20 messages/day, ~500-2000 tokens per turn.
| Model | Input $/1M tok | Output $/1M tok | Cost/user/day | 100 users/mo | 500 users/mo |
|---|---|---|---|---|---|
| DeepSeek V3.2 | $0.028 | $0.28 | ~$0.005 | ~$15 | ~$75 |
| Gemini 3 Flash | $0.10 | $0.40 | ~$0.002 | ~$6 | ~$30 |
| Gemini Flash (free tier) | $0 | $0 | $0 | $0 | ⚠️ Rate limits |
| GPT-5.3 mini | $0.15 | $0.60 | ~$0.003 | ~$9 | ~$45 |
| Claude Haiku 4 | $0.80 | $4.00 | ~$0.015 | ~$45 | ~$225 |
| Claude Sonnet 4.5 | $3.00 | $15.00 | ~$0.06 | ~$180 | ~$900 |
| Qwen 3.5 72B (free/OR) | $0 | $0 | $0 | $0 | ⚠️ Rate limits |
MVP/testing: Gemini 3 Flash or DeepSeek V3.2 (~$30-75/mo for 500 users)
Production: DeepSeek V3.2 as primary + Gemini Flash as fallback
Premium tier: Claude Haiku for paid users
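The per-user arithmetic behind the table can be reproduced with a small calculator. The usage assumptions (15 messages/day, 2000 input tokens per turn including re-sent context, 1000 output tokens) are one plausible choice within the ranges above, picked so the DeepSeek row comes out near the table's figures:

```python
def monthly_cost(users: int, price_in: float, price_out: float,
                 msgs_per_day: int = 15, tok_in: int = 2000, tok_out: int = 1000,
                 days: int = 30) -> float:
    """Rough monthly LLM bill in USD; prices are $ per 1M tokens.

    tok_in includes re-sent conversation context, not just the new message.
    """
    per_day = msgs_per_day * (tok_in * price_in + tok_out * price_out) / 1e6
    return round(users * per_day * days, 2)

# DeepSeek V3.2 at $0.028 in / $0.28 out, 500 users:
print(monthly_cost(500, 0.028, 0.28))  # ≈ 75.6
```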
Additional Costs
| Item | Cost/month |
|---|---|
| VPS (Hetzner CAX31) | ~€15-40 |
| Mac Mini electricity | ~$5-10 |
| Domain/SSL | ~$1-2 |
| Monitoring | $0 (self-hosted) |
| Total (excl. LLM) | ~$20-50 |
9. Comparison with Alternatives
| Criterion | OpenClaw | aiogram + Redis | Botpress | Rasa | Custom Node.js |
|---|---|---|---|---|---|
| Setup complexity | Medium | Low | Low (cloud) | High | High |
| Multi-user sessions | ✅ Built-in | ✅ Redis sessions | ✅ Built-in | ✅ Built-in | 🔧 Manual |
| Per-user memory | ⚠️ Via prompt | ✅ Redis/DB | ✅ Built-in | ⚠️ Limited | 🔧 Any DB |
| RAG | ⚠️ Via skill | 🔧 Manual | ✅ Built-in | ⚠️ Limited | 🔧 Any |
| Prompt injection | ✅ Tool policy + sandbox | ❌ Manual | ✅ Built-in | ✅ Built-in | ❌ Manual |
| Admin panel | ✅ Control UI + CLI | ❌ Needs building | ✅ Web UI | ✅ Web UI | ❌ Needs building |
| Scaling >500 | ⚠️ Difficult | ✅ Horizontally | ✅ Cloud-native | ✅ Kubernetes | ✅ Horizontally |
| Cost (infra) | $0 (self-hosted) | $5-20/mo | $500+/mo | $0 (self-hosted) | $5-20/mo |
| LLM flexibility | ✅ Any provider | ✅ Any API | ⚠️ Limited | ⚠️ Own models | ✅ Any API |
| Tool execution | ✅ Shell, browser | ❌ Code only | ⚠️ Actions | ⚠️ Custom actions | 🔧 Any |
| Telegram integration | ✅ Native channel | ✅ aiogram (best-in-class) | ✅ Connector | ⚠️ REST webhook | 🔧 Bot API |
| Sub-agents | ✅ Built-in | ❌ No | ⚠️ Workflows | ❌ No | 🔧 Manual |
| Maturity | ⚠️ Rapidly evolving | ✅ Stable | ✅ Production | ✅ Production | — |
Detailed Breakdown
10. Real-World Community Experience
GitHub Issues (OpenClaw)
Key Community Takeaways
11. Verdict and Recommendations
Can OpenClaw Be Used for ProTechBot?
Short answer: YES, but with caveats.
Pros of OpenClaw for ProTechBot
Cons of OpenClaw for ProTechBot
Recommended Architecture: Hybrid Approach
Configuration for Pure OpenClaw (MVP up to 50 Users)
{
session: {
dmScope: "per-channel-peer",
reset: { mode: "idle", idleMinutes: 120 },
},
agents: {
defaults: {
model: { primary: "deepseek/deepseek-chat" },
maxConcurrent: 10,
contextTokens: 100000,
},
list: [
{
id: "protech",
default: true,
workspace: "~/protech-bot/workspace",
sandbox: { mode: "all", scope: "session", workspaceAccess: "ro" },
tools: {
allow: ["read", "web_search", "session_status"],
deny: ["write", "edit", "exec", "process", "browser",
"nodes", "canvas"],
},
},
{
id: "admin",
workspace: "~/.openclaw/workspace",
},
],
},
bindings: [
{ agentId: "admin", match: { channel: "telegram",
peer: { kind: "direct", id: "224153441" } } },
],
channels: {
telegram: {
dmPolicy: "open",
},
},
}
Roadmap
| Stage | Users | Stack | Timeline |
|---|---|---|---|
| MVP | 10-50 | Pure OpenClaw | 1-2 weeks |
| v1.0 | 50-200 | OpenClaw + optimization | 1-2 months |
| v2.0 | 200-500 | Hybrid aiogram + OpenClaw | 2-3 months |
| v3.0 | 500+ | aiogram frontend + OpenClaw AI engine | 3-6 months |
Start with pure OpenClaw (MVP, 10-50 users). It is the fastest path to a working product (a config plus a prompt), produces real data on load and usage patterns, reuses the existing infrastructure (Mac Mini, Jarvis), and shows whether a hybrid is needed or OpenClaw can cope on its own.
If we hit its limits (concurrency, personalization, scaling) — migrate to the hybrid architecture. The aiogram frontend is already written (the current ProTechBot); we only need to add OpenClaw as the AI backend.
Appendices
A. Useful Configuration Commands
# Set dmScope
openclaw config set session.dmScope "per-channel-peer"
# Open DMs to everyone
openclaw config set channels.telegram.dmPolicy "open"
# Increase concurrency
openclaw config set agents.defaults.maxConcurrent 10
# Security audit
openclaw security audit --deep
# View active sessions
openclaw sessions --active 60
# Logs
openclaw logs --tail --level info