1. "The Wall of Text" — A Chatbot Epidemic
Open any AI bot on Telegram. Ask a simple question — "How do I cancel my subscription?" — and you'll get a 300-word paragraph covering the company's history, three cancellation methods, links to legal documents, and an uplifting farewell message. You wanted one button. You got a dissertation.
This isn't a bug in one particular bot. It's a systemic problem across the entire chatbot industry — from enterprise assistants to GPT wrappers in messaging apps. Developers bring a web page mindset to chat: more information equals better service. But a chat isn't a page. A chat is a conversation. And in a conversation, a wall of text isn't caring — it's disrespecting the user's time.
Anatomy of the Problem
A wall of text in a chatbot is a message that:
- runs longer than 3-4 sentences, forcing the user to scroll on a phone;
- mixes several topics (the answer, caveats, links, upsells) in a single block;
- offers no interactive elements: no buttons, no question, no obvious next step.
Why Does This Happen?
Walls of text have several root causes, and none of them are technical:
1. Fear of under-answering. The developer worries the user won't get the information they need, so they cram everything in "just in case." This is a classic cognitive bias — the curse of knowledge. You know about 15 nuances and assume they're all important. The user needs just one.
2. Copy-paste from the knowledge base. Many bots literally regurgitate documentation. An 800-word FAQ article gets dumped into chat wholesale. That's not adaptation — that's laziness. A chatbot should be a filter, not a relay.
3. LLMs are verbose by default. GPT-4, Claude, Gemini — they're all trained to generate thorough responses. Without explicit prompt engineering, an LLM will answer like an A+ student — exhaustive, structured, academic. For chat, that's a disaster.
4. No UX thinking. Bots are built by developers, not conversation designers. Developers think in features: "the bot should be able to explain pricing plans." Designers think in situations: "someone messages at 11 PM — they need an answer in 2 taps, not a lecture."
According to Intercom (2023), 73% of users abandon the chat if the bot's first response is longer than 3 sentences. The average bounce rate for bots with "walls of text" is 68%, compared to 24% for bots with short replies and buttons. That's a 3x difference in retention.
2. What the Research Says
The wall-of-text problem isn't a subjective feeling. Over the past five years, a body of research has emerged that points in one direction: the longer the message, the lower the engagement.
Nielsen Norman Group: "How People Read Online" (2020, updated 2023)
NNG's classic study on web reading patterns fully applies to chatbots — and the effect is amplified. In a messenger, users are even less patient than on a website:
| Metric | Web Page | Chatbot |
|---|---|---|
| Average time before scrolling | 8-12 seconds | 2-3 seconds |
| % of text actually read | 20-28% | 12-18% |
| Optimal block length | 40-60 words | 15-30 words |
| Scanning pattern | F-pattern (prominent) | L-pattern (beginning only) |
Key NNG insight: in chat interfaces, an L-pattern reading behavior prevails — users read the first 1-2 lines, then their eye slides down the left edge, and if they don't see a visual "anchor" (button, emoji, bold text) — they stop reading.
Intercom: "State of Messaging" (2022-2024)
Intercom analyzed 500 million conversations across their customer base and found a clear correlation between bot message length and user drop-off.
Drift: "Conversational Marketing Benchmark" (2023)
Drift studied 100,000+ B2B chatbot conversations and found that the number of messages (turns) matters more than their length. Bots that broke their response into 3-4 short messages instead of one long one showed measurably better engagement across the board.
Google: "Design Guidelines for Conversational AI" (2024)
An internal Google study for Dialogflow and Bard identified the "three-line rule": on a mobile device, the optimal bot message takes up no more than 3 screen lines (roughly 60-90 characters). Anything longer requires scrolling — and scrolling in chat is perceived as "the bot talks too much."
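The three-line rule is easy to enforce programmatically. Below is a minimal sketch that wraps a message at an assumed mobile line width; the 30-character width and the helper name are our illustrative assumptions, not part of any SDK:

```python
import textwrap

MOBILE_LINE_WIDTH = 30  # assumption: ~30 chars per line on a narrow phone screen
MAX_LINES = 3           # the "three-line rule"

def fits_three_lines(text: str, width: int = MOBILE_LINE_WIDTH) -> bool:
    """Return True if the message renders in at most MAX_LINES at the given width."""
    lines: list[str] = []
    for paragraph in text.splitlines() or [""]:
        lines.extend(textwrap.wrap(paragraph, width=width) or [""])
    return len(lines) <= MAX_LINES

print(fits_three_lines("The Pro plan is $9.99/mo. Want to try it?"))  # True
```

A check like this makes a useful pre-send guard: if it fails, the message is a candidate for chunking or a "Learn more" button.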
Microsoft Research: "Human Parity in Conversational AI" (2023)
Microsoft's research revealed a surprising finding: users judge bot quality NOT by answer completeness, but by the feeling of dialogue. A bot that asks clarifying questions and responds briefly (even if the answer is incomplete) receives higher CSAT scores than a bot that delivers an exhaustive answer in a single message.
3. Principles of Conversational UX
Now that we've seen the scale of the problem and the data, it's time to explore the principles. Conversational UX (CUX) isn't just "write shorter." It's a fundamental rethinking of how information is delivered in a dialogue format.
Principle 1: Chunking
Chunking means breaking one large response into several smaller ones. Not "shorten the text," but "split it into steps." Each chunk is a self-contained unit that makes sense on its own.
Done this way, the information doesn't disappear: less-urgent details stay accessible via a button. The bot doesn't become less useful; it becomes more respectful of the user's attention.
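The mechanics of chunking can be sketched in a few lines. This is framework-agnostic pseudocode made runnable: the `sent` list and `send` stub stand in for your bot framework's send call, and the message texts are illustrative:

```python
import asyncio

sent: list[str] = []  # stand-in transport; a real bot would call its send-message API

async def send(chat_id: int, text: str) -> None:
    sent.append(text)

async def send_chunked(chat_id: int, chunks: list[str], pause: float = 1.0) -> None:
    """Deliver one thought per message, with a short pause between them,
    so replies arrive at a conversational rhythm rather than as one dump."""
    for i, chunk in enumerate(chunks):
        if i:
            await asyncio.sleep(pause)  # matches the 0.5-1.5s typing-delay guideline
        await send(chat_id, chunk)

asyncio.run(send_chunked(42, [
    "You can cancel in Settings → Subscription.",
    "Access stays active until the end of the paid period.",
    "Want me to open Settings for you?",
], pause=0.0))
```

Each chunk is a self-contained unit, and the last one hands the turn back to the user with a question.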
Principle 2: Progressive Disclosure
Progressive disclosure is one of the fundamental principles of UX design, brilliantly described in John Maeda's work ("Laws of Simplicity"). The core idea: show only what's needed right now. Everything else — on demand.
In a chatbot, this means answering the immediate question first and putting everything else behind buttons, follow-up questions, or "Learn more" links.
An analogy: a good doctor first says "You have gastritis, take omeprazole." Only if you ask do they explain the pathogenesis, alternative diagnoses, and the drug's history. A bad doctor starts with the pathogenesis.
Principle 3: Turn-Taking
Turn-taking mimics the natural rhythm of conversation. In real dialogue, people alternate turns: speak — listen — respond. A bot that talks for 5 minutes straight isn't a conversationalist — it's a lecturer.
Turn-taking rules for chatbots:
- One thought per message; don't start a second thought before the user has had a chance to read the first.
- End your turn with a question or buttons, handing the floor back to the user.
- Insert a short pause (typing indicator) between consecutive messages to keep a natural rhythm.
Principle 4: Information Hierarchy
Every message should have a clear hierarchy:
┌─────────────────────────────────────────┐
│ 🔴 Key answer (1 line)                  │ ← Read by 100%
├─────────────────────────────────────────┤
│ 🟡 Explanation (1-2 lines)              │ ← Read by 60%
├─────────────────────────────────────────┤
│ 🟢 Action / question / buttons          │ ← Clicked by 40%
├─────────────────────────────────────────┤
│ ⚪ Additional info (hidden)             │ ← Needed by 10%
└─────────────────────────────────────────┘
This pyramid is an adaptation of the journalistic "inverted pyramid" principle for chat interfaces. The most important content goes at the top. Each subsequent layer is less critical. The user can stop at any level and still get value.
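One way to keep the pyramid honest is to assemble every reply from explicit layers. A small sketch (the helper name and layer arguments are ours, not from any framework):

```python
def layered_reply(key_answer: str, explanation: str = "", action: str = "") -> str:
    """Compose a message top-down: key answer first, short explanation second,
    call to action last. Empty layers are dropped, so the most important
    line is always the first one the user sees."""
    return "\n".join(part for part in (key_answer, explanation, action) if part)

msg = layered_reply(
    "Yes, you can get a refund.",
    "Refunds take 3-5 business days.",
    "Start one now? [💸 Request refund]",
)
```

Because the key answer is a required argument and everything else is optional, the structure makes it hard to bury the lede.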
Principle 5: Emotional Calibration
The bot's tone should match the context. This isn't about being "cheerful" or "serious" — it's about adapting to the user's situation:
| Situation | Tone | Example |
|---|---|---|
| Simple question | Light, friendly | "The Pro plan is $9.99/mo. Want to try it?" |
| Error / issue | Empathetic, specific | "I see a payment issue. Let me sort this out." |
| Complaint | Serious, respectful | "I understand your frustration. Escalating to a manager." |
| Onboarding | Encouraging, step-by-step | "Great, you're in! First step — let's set up your profile." |
4. Good Chatbot Patterns
From principles to specific techniques. Here's a pattern library you can implement today.
Pattern 1: "Answer + Ask"
The most powerful pattern. The bot answers the question and immediately asks a clarifying one. This creates a natural conversational flow and helps narrow the context. For example: "The Pro plan is $9.99/mo. Are you choosing for yourself or for a team?"
Instead of a table with 5 plans — one answer and a question. The bot "earns" a second turn to show exactly the plan the user needs.
Pattern 2: "Summary + Expand"
When the answer is objectively long (instructions, overview, comparison), give a summary and offer to expand: "In short: yes, you can migrate your data, and it takes about 10 minutes. [📖 Full guide]"
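The gate that decides between "send as-is" and "summarize + button" is tiny. A sketch, where the 120-character threshold is an assumption to tune for your audience:

```python
from dataclasses import dataclass
from typing import Optional

SUMMARY_LIMIT = 120  # assumption: roughly two short sentences

@dataclass
class Reply:
    text: str                      # what gets sent immediately
    expand_payload: Optional[str]  # full text behind a "Learn more" button, if any

def summary_or_full(full_text: str, summary: str) -> Reply:
    """Short answers go out as-is; long ones are replaced by the summary,
    with the full text kept for an on-demand 'Learn more' button."""
    if len(full_text) <= SUMMARY_LIMIT:
        return Reply(text=full_text, expand_payload=None)
    return Reply(text=summary + "\n[📖 Learn more]", expand_payload=full_text)
```

The full text is never thrown away; it just waits for the user to ask for it.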
Pattern 3: "Step-by-Step Guide"
For instructions, don't give all steps at once. Guide one at a time: "Step 1 of 3: open Settings → Profile. [✅ Done] [❓ I'm stuck]"
Benefits: users don't get lost in long instructions, can report issues at each step, and feel a sense of progress.
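The stepper behind this pattern can be very small. A sketch with in-memory progress; a real bot would keep this state in FSM storage or a database, and the onboarding steps below are hypothetical:

```python
from typing import Optional

STEPS = [  # hypothetical onboarding flow
    "Step 1/3: Open Settings → Profile.",
    "Step 2/3: Upload an avatar.",
    "Step 3/3: Pick your time zone. Done! 🎉",
]

progress: dict[int, int] = {}  # user_id -> index of the next step

def next_step(user_id: int) -> Optional[str]:
    """Advance the user by one step; None means the guide is finished."""
    i = progress.get(user_id, 0)
    if i >= len(STEPS):
        return None
    progress[user_id] = i + 1
    return STEPS[i]
```

Each "Done" button click calls `next_step`, so the user controls the pace and the bot never dumps the whole manual at once.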
Pattern 4: "Confirm + Act"
Before an important action, confirm understanding: "Cancelling your Pro subscription, correct? Access stays until the end of the paid period. [✅ Yes, cancel] [↩️ Keep it]"
Pattern 5: "Carousel Instead of a List"
When you need to show multiple options (products, plans, articles) — not a 15-item list, but a carousel of 3-4 cards with a "more" button.
In Telegram, these are inline buttons with pagination. In web chats — horizontally scrollable cards. The principle is the same: don't overload the first screen.
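Whatever the rendering surface, the pagination behind the "more" button is the same slicing logic. A sketch:

```python
PAGE_SIZE = 3  # show 3-4 cards at a time, never the full catalogue

def page_of(items: list[str], page: int, size: int = PAGE_SIZE) -> tuple[list[str], bool]:
    """Return one page of cards plus a flag telling the renderer
    whether a 'More ➡️' button is still needed."""
    start = page * size
    chunk = items[start:start + size]
    return chunk, start + size < len(items)
```

The callback behind the "more" button simply re-renders with `page + 1`, keeping the first screen light no matter how large the catalogue is.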
Code: Implementing Chunking in a Telegram Bot
Here's a concrete example in Python (aiogram 3.x) — turning a wall of text into a conversation:
from aiogram import Router, F
from aiogram.types import Message, CallbackQuery
from aiogram.utils.keyboard import InlineKeyboardBuilder

router = Router()

# ❌ BAD: wall of text
# NOTE: register only one of the two /tariffs handlers; both are shown for comparison
@router.message(F.text == "/tariffs")
async def bad_tariffs(message: Message):
    await message.answer(
        "We have 3 plans:\n\n"
        "1. Basic — 490₽/mo\n"
        "   - 1,000 requests\n"
        "   - Email support\n"
        "   - 1 user\n\n"
        "2. Pro — 990₽/mo\n"
        "   - 10,000 requests\n"
        "   - Priority support\n"
        "   - 5 users\n"
        "   - API access\n"
        "   - Analytics\n\n"
        "3. Enterprise — from 4,990₽/mo\n"
        "   - Unlimited requests\n"
        "   - Dedicated manager\n"
        "   - 99.9% SLA\n"
        "   - Unlimited users\n"
        "   - Custom integrations\n\n"
        "Take your pick!"
    )

# ✅ GOOD: progressive disclosure
@router.message(F.text == "/tariffs")
async def good_tariffs(message: Message):
    kb = InlineKeyboardBuilder()
    kb.button(text="👤 For myself", callback_data="tariff_personal")
    kb.button(text="👥 For a team", callback_data="tariff_team")
    kb.button(text="🏢 Enterprise", callback_data="tariff_enterprise")
    kb.adjust(2)
    await message.answer(
        "Plans start at 490₽/mo. Who are we picking one for?",
        reply_markup=kb.as_markup()
    )

@router.callback_query(F.data == "tariff_personal")
async def tariff_personal(callback: CallbackQuery):
    kb = InlineKeyboardBuilder()
    kb.button(text="🚀 Try it free", callback_data="trial")
    kb.button(text="📊 Compare with Pro", callback_data="compare_basic_pro")
    kb.button(text="⬅️ Back", callback_data="tariffs_back")
    kb.adjust(1)
    await callback.message.edit_text(
        "*Basic — 490₽/mo*\n\n"
        "1,000 requests, email support. "
        "Perfect for getting started.",
        reply_markup=kb.as_markup(),
        parse_mode="Markdown"  # legacy Markdown: bold uses *single asterisks*
    )
Notice the difference: the "bad" version has 20 lines of text. The "good" version has 1 line + 3 buttons. Same information, but access is driven by the user.
Code: Prompt Engineering for LLM Bots
If your bot runs on GPT/Claude, the key is a well-crafted system prompt:
SYSTEM_PROMPT = """
You are an assistant in a Telegram chat.

RESPONSE RULES:

1. LENGTH: Maximum 3 sentences per message. If you need more, split into parts.

2. STRUCTURE of every message:
   - Direct answer (1 sentence)
   - Brief explanation (1-2 sentences)
   - A question OR a suggested action

3. DO NOT:
   - Use numbered lists longer than 5 items
   - Open with filler like "Of course! Great question!"
   - Close with filler like "Hope this helped!"
   - Repeat the user's question

4. IF the answer is objectively long:
   - Give a 2-sentence summary
   - Ask: "Want more detail on [specific aspect]?"

5. TONE: Like a smart friend, not an encyclopedia.
   Say "490₽/mo", not "the cost amounts to 490 rubles per month".
"""
Tidio (a chatbot platform) ran an A/B test: bots with an optimized prompt showed +43% CSAT and -31% average handle time compared to default LLM settings. Users got the same information, but faster and more pleasantly.
5. Anti-Patterns: What You Must Never Do
If the previous section was "how to do it right," this one is "how to definitely do it wrong." Every one of these anti-patterns appears in dozens of popular bots right now.
Anti-Pattern 1: 🎪 Emoji Circus
🔥 We have AMAZING NEWS! 💥
✨ New PRO MAX ULTRA plan ✨
💰 Only $9.99/mo!!! 🤑
🚀 Speed x10! ⚡
💪 Power x100! 💣
👉 Click HERE ➡️ to 🔓 UNLOCK 🔓
Why it's bad: Every emoji is a visual object that requires brain processing. 3-4 emoji per message help scanning. 15+ emoji turn text into noise. Research shows that after 4 emoji per message, text comprehension drops by 25% (Adobe Emoji Trend Report, 2023).
Rule: Maximum 1 emoji per paragraph. Use them as markers (📌, ✅, ⚠️), not as decoration.
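The limit can be enforced in code. Below is a rough counter based on Unicode categories; it's an approximation (ZWJ sequences count as several symbols), and a production bot might prefer a dedicated emoji library, but it catches the circus above:

```python
import unicodedata

MAX_EMOJI = 2  # per the rule above: emoji as markers, not decoration

def emoji_count(text: str) -> int:
    """Count characters in Unicode category 'So' (Symbol, other),
    which covers most pictographic emoji."""
    return sum(1 for ch in text if unicodedata.category(ch) == "So")

def too_noisy(text: str) -> bool:
    """Flag a message that exceeds the emoji budget."""
    return emoji_count(text) > MAX_EMOJI
```

Running outgoing messages through `too_noisy` as a lint step keeps decoration from creeping back in.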
Anti-Pattern 2: 📋 The Endless List
1. Profile name
2. Avatar
3. Description
4. Time zone
5. Interface language
6. Theme
7. Email notifications
8. Push notifications
9. SMS notifications
10. Telegram notifications
11. Two-factor authentication
12. Social media linking
13. API keys
14. Webhooks
15. Data export
...
Why it's bad: Hick's Law states that decision time grows logarithmically with the number of options. A 15-item list paralyzes — the user can't choose and leaves. The sweet spot is 3-5 options (Miller's Law: 7±2, but even fewer in chat due to the vertical format).
Solution: Categorization. Instead of 15 items — 3 categories: "Profile," "Notifications," "Security."
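In code, this is just a two-level menu instead of a flat list. The grouping below is one hypothetical way to slice the 15 items:

```python
SETTINGS_MENU = {  # hypothetical grouping of the flat 15-item list above
    "👤 Profile": ["Profile name", "Avatar", "Description", "Time zone",
                   "Interface language", "Theme"],
    "🔔 Notifications": ["Email", "Push", "SMS", "Telegram"],
    "🔐 Security": ["Two-factor authentication", "Social media linking",
                    "API keys", "Webhooks", "Data export"],
}

def top_level_options() -> list[str]:
    """What the user actually sees first: 3 choices instead of 15."""
    return list(SETTINGS_MENU)
```

The same 15 settings remain reachable, but the first decision is a 3-way choice, well inside the 3-5 sweet spot.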
Anti-Pattern 3: 🎩 Corporate Speak
❌ "Dear user! Thank you for contacting our company's support service. Your request has been registered under number #47291. Processing time is up to 24 business hours. For additional information, please refer to the FAQ section."
✅ "Got it! Ticket #47291 — we'll respond within the day. In the meantime, maybe we can find an answer faster?"
[Button: 🔍 Search FAQ]
Why it's bad: Formal language creates distance. Chat is a place for conversation, not business correspondence. A Drift study found that bots with a conversational tone get 37% more replies than formal bots.
Anti-Pattern 4: 🪞 The Echo Bot
"— How do I change my password?
— You're asking how to change your password. Great question! A password is what protects your account. To change your password, you need to..."
Why it's bad: Repeating the question, empty validations ("Great question!"), defining obvious things — that's noise. Every word that carries no new information steals attention. LLMs love doing this — and the prompt should explicitly forbid it.
Anti-Pattern 5: 🤖 "I'm Just a Bot"
"Unfortunately, I'm just a bot and can't help with this question. I sincerely apologize for any inconvenience. My capabilities are limited, and I hope for your understanding..."
Why it's bad: Self-deprecation + long apology + no action = maximum user frustration. They already know they're talking to a bot.
How to do it: "That one needs a human. Passing you to an agent — they'll respond within 15 minutes." [Button: 📞 Call now]
Anti-Pattern 6: 📝 "Textbook in Chat"
This is perhaps the most common anti-pattern in AI bots: the LLM-style response where every answer is a mini-article with headings, subheadings, numbered lists, and a conclusion.
"## How to Set Up Notifications
Setting up notifications is an important step in personalizing your account. There are several types of notifications:
### 1. Email Notifications
Email notifications are sent to your...
### 2. Push Notifications
Push notifications are...
[400 more words]
I hope this guide was helpful! If you have any additional questions, don't hesitate to reach out."
Solution: Prompt engineering + post-processing. Trim the LLM's response to 3 sentences, add a "Learn more" button with the full version.
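A sketch of that post-processing step. The regex sentence splitter is deliberately naive (it will trip on abbreviations like "e.g."), but it works as a guard rail behind a well-prompted model:

```python
import re
from typing import Optional

MAX_SENTENCES = 3  # mirrors the length limit in the system prompt

def trim_reply(text: str, limit: int = MAX_SENTENCES) -> tuple[str, Optional[str]]:
    """Keep the first `limit` sentences. Returns (short_reply, overflow);
    overflow is None when nothing was cut, otherwise it belongs behind
    a 'Learn more' button."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    if len(sentences) <= limit:
        return text.strip(), None
    return " ".join(sentences[:limit]), " ".join(sentences[limit:])
```

Because the overflow is preserved rather than discarded, the "Learn more" button can serve the full answer on demand.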
6. Case Studies: Who's Already Doing It Right
Theory is great. But the best lessons come from companies that went from walls of text to conversations and measured the results.
Case Study 1: Intercom — From FAQ Bot to Resolution Bot
Intercom is a platform serving 25,000+ companies. Their own bot went through three generations, from a simple FAQ bot through Resolution Bot to the AI-powered Fin.
Resolution rate grew from 15% to 52% — a 3.5x improvement. The key factor: not AI itself, but conversational UX principles built into the AI. Fin responds briefly not because it "doesn't know," but because that's the best strategy for keeping users engaged in the dialogue.
Case Study 2: Drift — Conversational Marketing
Drift pioneered "conversational marketing," where a chatbot replaces lead capture forms on websites.
Case Study 3: Tinkoff — Banking Bot
Tinkoff's bot (Oleg) is one of the best examples of conversational UX in Russian fintech.
Case Study 4: Revolut — Multilingual Bot
Revolut serves 35+ million users across 30+ languages, so their challenge is scale: every extra sentence in a bot reply has to be written, translated, and maintained in dozens of locales.
The solution: response templates with dynamic inserts. Every answer is a maximum of 2 sentences + buttons. Templates are localized not just by translation, but by cultural adaptation: the Japanese bot is more polite, the Brazilian one more informal, the German one more structured.
Result: 67% of inquiries automatically resolved without agent escalation. CSAT consistently above 4.2/5 across all regions.
Case Study 5: Woebot — Therapeutic Chatbot
A special case: Woebot is an AI bot for cognitive behavioral therapy (CBT). Here, walls of text aren't just inconvenient; they're a therapeutic contraindication, because the therapeutic format itself is built on short, turn-by-turn exchanges.
7. Metrics: How to Measure Conversation Quality
"What you can't measure, you can't improve." Here's a metrics framework for evaluating conversational UX quality.
Core Metrics (Must Have)
| Metric | What It Measures | Target Value | How to Calculate |
|---|---|---|---|
| Response Rate | % of bot messages that received a user reply | > 60% | user_replies / bot_messages |
| Task Completion Rate | % of started scenarios completed | > 70% | completed_flows / started_flows |
| CSAT (Customer Satisfaction) | User satisfaction | > 4.0/5 | Post-dialogue survey |
| Bounce Rate | % of users who left after the bot's 1st message | < 30% | single_message_sessions / all_sessions |
| Avg. Turns per Session | Average number of turns per dialogue | 4-8 | total_messages / total_sessions |
Advanced Metrics (Nice to Have)
- Message Length Ratio (MLR): average bot message length divided by average user message length; a bot that "talks" far more than the user is lecturing, not conversing
- Button CTR: clicks divided by buttons shown; a low CTR means the offered options miss what users actually want
- Escalation Rate: % of dialogues handed off to a human agent
How to Collect Metrics: Code Example
import time
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ConversationMetrics:
    session_id: str
    started_at: float = field(default_factory=time.time)
    bot_messages: int = 0
    user_messages: int = 0
    bot_total_chars: int = 0
    user_total_chars: int = 0
    buttons_shown: int = 0
    buttons_clicked: int = 0
    task_completed: bool = False
    escalated: bool = False
    csat_score: Optional[int] = None

    @property
    def response_rate(self) -> float:
        """% of bot messages that received a user reply."""
        if self.bot_messages == 0:
            return 0.0
        return min(self.user_messages / self.bot_messages, 1.0)

    @property
    def message_length_ratio(self) -> float:
        """Bot-to-user average message length ratio."""
        avg_bot = self.bot_total_chars / max(self.bot_messages, 1)
        avg_user = self.user_total_chars / max(self.user_messages, 1)
        return avg_bot / max(avg_user, 1)

    @property
    def button_ctr(self) -> float:
        """Click-through rate for buttons."""
        if self.buttons_shown == 0:
            return 0.0
        return self.buttons_clicked / self.buttons_shown

    @property
    def session_duration(self) -> float:
        return time.time() - self.started_at

    def to_dict(self) -> dict:
        return {
            "session_id": self.session_id,
            "duration_sec": round(self.session_duration, 1),
            "turns": self.bot_messages + self.user_messages,
            "response_rate": round(self.response_rate, 2),
            "mlr": round(self.message_length_ratio, 1),
            "button_ctr": round(self.button_ctr, 2),
            "completed": self.task_completed,
            "escalated": self.escalated,
            "csat": self.csat_score,
        }
Dashboard: What to Track
A minimal dashboard for monitoring conversational UX quality tracks four numbers week over week: response rate, bounce rate, CSAT, and MLR.
8. Practical Checklist for Developers
Print this out (or copy it to Notion) and check every bot response against this list.
Length and Structure
- Each message is a maximum of 3-5 sentences (40-60 words)
- If the answer is longer — split into 2-3 separate messages or use a "Learn more" button
- First sentence is a direct answer to the question, no preambles
- No numbered lists longer than 5 items
- No paragraphs longer than 2-3 lines on a mobile screen
Interactivity
- Every message ends with a question or buttons
- Quick replies (buttons) — 2-4 options, no more
- There's an "escape hatch" — an option to reach an agent at every step
- Step-by-step instructions guide one step at a time with a "Done, next" button
- For long content — a "Learn more" button instead of a data dump
Tone and Language
- Conversational style: "$9.99/mo" instead of "the cost is nine dollars and ninety-nine cents per month"
- No filler phrases: "Great question!", "Of course!", "Happy to help!"
- No repeating the user's question at the start of the answer
- No self-deprecation: "I'm just a bot..."
- Emoji as markers (📌✅⚠️), not as decoration. Maximum 1-2 per message
Technical Implementation
- A 0.5-1.5 sec delay between consecutive messages (typing indicator)
- LLM prompt explicitly limits response length (max 3 sentences)
- Post-processing: trim to limit + programmatically add buttons
- Metrics logging: response rate, bounce rate, CSAT, MLR
- A/B tests: short vs long responses, with buttons vs without
Content Design
- All FAQ articles adapted for chat (not copy-pasted from the knowledge base)
- A visual dialogue map (conversational flow) for each scenario
- Bot tone documented in a voice & tone guide
- Regular review of "worst" dialogues (with low CSAT)
- User testing: 5+ real users test the bot before launch
According to combined data from Intercom, Drift, and Zendesk, implementing conversational UX principles delivers:
• +40-80% response rate (users reply instead of leaving)
• +25-50% task completion (scenarios are completed)
• +15-30% CSAT (satisfaction improves)
• -30-50% escalation rate (less load on agents)
Average payback period: 2-4 weeks after dialogue redesign.
Conclusion: Chat Is Not a Text Delivery Channel
The main takeaway of this research is simple: a chatbot is not an interface for delivering information — it's an interface for conversation. The difference is fundamental.
Information delivery is "here's your answer, figure it out." Conversation is "let's figure it out together." The first approach is efficient for the bot (one API call). The second is efficient for the human (lower cognitive load, higher satisfaction).
Every time you write a bot response, imagine: you're sitting across from a real person. They asked something simple. Would you lecture them for 5 minutes? No. You'd answer in one sentence and ask if they need details.
Do the same in your chatbot.
Three things you can do right now:
1. Measure: start logging bot message lengths, response rate, and bounce rate; one week of data will show you the scale of the problem.
2. Cut: take your three most frequent bot responses and trim each to 3 sentences plus a button.
3. Constrain: if your bot runs on an LLM, add an explicit length limit and structure rules to the system prompt.