"We want it to work like ChatGPT" is the most common brief we hear for AI features. It is almost always the wrong answer. ChatGPT is a chat interface for a general-purpose assistant. Most products are not general-purpose assistants. They are narrow workflows where a chat box is the least efficient way to get work done.
The three patterns — chat UI, conversational UI, and agent UI — sound like synonyms. They are not. They have different goals, different affordances, and different failure modes. Picking the wrong one is one of the top reasons AI products feel off even when the underlying model is excellent.
This is the design language we use on our team when scoping an AI feature.
The three patterns, defined
Strip away the marketing and the three patterns break down cleanly:
Chat UI is a turn-based text interface. You type, the assistant responds, you type again. The interface is a scroll of messages. Input is always free-form. The user drives every turn.
Conversational UI is a broader category where language is the primary input but the interface does work to guide the user. Suggested replies, slot-filling forms, mixed-media answers, buttons that advance state. The interface is still a dialogue but it is structured, not open-ended.
Agent UI is an interface where the AI executes multi-step tasks autonomously and the user observes, approves, or steers. The user initiates a goal ("book me a flight to Chicago under $400 for Thursday morning"), and the interface surfaces progress, intermediate results, and decision points. The interface is a workspace, not a chat.
The common thread is natural language. The difference is how much of the work the interface does versus how much the user carries.
A quick comparison
| Dimension | Chat UI | Conversational UI | Agent UI |
|---|---|---|---|
| Primary surface | Message list | Guided flow | Task workspace |
| User effort | High (every turn is free-form) | Medium (structured prompts) | Low (set goal, monitor) |
| Best for | Exploratory tasks, research, open-ended Q&A | Support, onboarding, structured lookups | Long-running tasks with tool use |
| Latency tolerance | Seconds | Sub-second feels better | Minutes are fine if observable |
| Failure mode | User does not know what to ask | Constrained to the flow's vocabulary | User loses trust if agent goes off-script |
| Example | ChatGPT, Claude.ai | Intercom Fin, Klarna's assistant | Cursor's agent mode, Manus |
The right question is not "which is best" but "which matches the task my users are doing?"
When chat UI is the right call
Chat UI is great when the user genuinely does not know what they want to ask until they start typing. Research, open-ended writing help, debugging a hard problem, exploring a dataset. The cost of the free-form surface is that the user carries the whole conversation.
Design principles we follow when we do build a chat UI:
- Show capability on the empty state. A blank chat with a blinking cursor is hostile. Sample prompts, examples, and category chips prime the user and reduce the "what do I type?" problem.
- Slash commands and attachments are first-class. Let power users bypass natural language for common actions.
- Render beyond text. Tables, charts, code blocks, diff views, embedded forms. A chat answer that is a paragraph when it should be a table is a UX bug.
- Keep the input tall enough to write a paragraph. Shallow inputs signal "one sentence please," which is often not what the task needs.
- Persist and title conversations. Users need to find what they said yesterday.
Where chat UI falls apart: transactional tasks with clear inputs. "What is my account balance?" is not a conversation. It is a screen.
Where conversational UI actually shines
Most AI features in real products should be conversational UI, not chat. The difference: the interface does work so the user does not have to.
Concretely, conversational UI looks like:
- A support bot that shows three suggested questions the moment it opens, then expands into a richer flow based on the choice.
- A travel assistant that asks "departure city, destination, dates" as slot-filling chips, then flips to free-form only when something is ambiguous.
- A billing assistant that renders your invoice inline, with buttons for "download PDF," "dispute charge," and "update payment method," next to the explanation.
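The travel-assistant example above can be sketched as a small slot-filling flow: ask for the next unfilled slot, and only hand off to free-form input once every slot is filled. This is a minimal illustration, not a real API; the slot names and prompts are hypothetical.

```typescript
// Minimal slot-filling sketch for a hypothetical travel assistant.
// Slot names and prompts are illustrative.

type Slot = { name: string; prompt: string; value?: string };

interface SlotFlow {
  slots: Slot[];
}

// Returns the next slot to ask for, or null when all slots are filled
// and the flow can flip to free-form conversation.
function nextPrompt(flow: SlotFlow): Slot | null {
  return flow.slots.find((s) => s.value === undefined) ?? null;
}

// Fills one slot immutably, so each turn produces a new flow state.
function fillSlot(flow: SlotFlow, name: string, value: string): SlotFlow {
  return {
    slots: flow.slots.map((s) => (s.name === name ? { ...s, value } : s)),
  };
}

const travel: SlotFlow = {
  slots: [
    { name: "departure", prompt: "Where are you flying from?" },
    { name: "destination", prompt: "Where to?" },
    { name: "dates", prompt: "Which dates?" },
  ],
};
```

Each answered chip fills one slot; the interface only drops into free-form text when `nextPrompt` returns null, or when the user's answer does not match a slot.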
What makes conversational UI work:
- Mixed-initiative by default. The interface asks clarifying questions when the user's intent is unclear, rather than guessing and being wrong.
- Structured outputs when structure exists. If the answer is a list, render a list. If it is a decision, render buttons. Plain prose is the last resort, not the first.
- Shallow branching. Two or three suggested next steps after each answer. Not eight.
- Graceful escape to free-form. The typed input is always there. Users who want to type a novel can. Users who just want to click a button can too.
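One way to make "structured outputs when structure exists" concrete is to type the assistant's reply as a discriminated union and let the renderer switch on it, with prose as the explicit fallback. A sketch under assumed names; nothing here is a real library API.

```typescript
// "Structure first, prose last" as a type. The renderer switches on the
// reply kind instead of dumping everything as a paragraph.

type AssistantReply =
  | { kind: "list"; items: string[] }
  | { kind: "choice"; question: string; options: string[] } // rendered as buttons
  | { kind: "prose"; text: string }; // last resort

function render(reply: AssistantReply): string {
  switch (reply.kind) {
    case "list":
      return reply.items.map((i) => `- ${i}`).join("\n");
    case "choice":
      return `${reply.question}\n[${reply.options.join("] [")}]`;
    case "prose":
      return reply.text;
  }
}
```

The payoff is that "render a list as a list" stops being a style guideline and becomes a type the model's output must fit.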
A well-designed conversational UI feels like a form that also talks. That is a good thing.
Agent UI is a different product
Agent UI is the newest and the most misunderstood. It is not "chat with more features." It is a workspace for observing and steering an AI that is doing work on your behalf.
The interaction model:
- User declares a goal. Natural language, one shot. "Refactor the billing module to use the new pricing engine."
- Agent plans. A visible plan or task list appears. The user can edit, approve, or override.
- Agent executes. Steps run, tools are called, results accumulate. The user sees progress.
- Agent pauses at decision points. "I need to run a migration. Confirm?"
- User reviews output. Final result is not a message. It is a diff, a document, a dashboard.
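The interaction model above can be sketched as a small state machine: a goal becomes a plan of steps, and steps flagged as decision points block until the user approves instead of executing silently. The statuses and step shape here are illustrative, not a spec.

```typescript
// Sketch of the agent loop: steps that require approval pause the run
// rather than executing on their own. All names are illustrative.

type StepStatus = "pending" | "running" | "awaiting-approval" | "done" | "failed";

interface Step {
  description: string;
  requiresApproval: boolean; // e.g. migrations, emails, money moved
  status: StepStatus;
}

interface AgentTask {
  goal: string;
  steps: Step[];
}

// Advance the first pending step: decision points go to
// "awaiting-approval" instead of straight to "running".
function advance(task: AgentTask): AgentTask {
  const i = task.steps.findIndex((s) => s.status === "pending");
  if (i === -1) return task;
  const step = task.steps[i];
  const status: StepStatus = step.requiresApproval ? "awaiting-approval" : "running";
  const steps = task.steps.slice();
  steps[i] = { ...step, status };
  return { ...task, steps };
}

// The user's explicit confirmation unblocks a paused step.
function approve(task: AgentTask, index: number): AgentTask {
  const steps = task.steps.slice();
  if (steps[index].status === "awaiting-approval") {
    steps[index] = { ...steps[index], status: "running" };
  }
  return { ...task, steps };
}
```

Because the plan is data, the workspace can render it directly: every step is visible, every pause is explained, and "stop" is just never calling `advance` again.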
The design language shifts completely:
- The interface is a workspace, not a conversation. Chat is at most a sidebar.
- Observability is the core UX. Users must be able to see what the agent is doing and why, and stop it at any point.
- Approvals are explicit. Irreversible actions — database writes, emails sent, money moved — should require confirmation even if it slows the agent down.
- State is durable. Agents that take five minutes need to survive tab reloads, network hiccups, and users walking away.
- Errors are actionable. "I failed because the API returned 429" is useful. "Something went wrong" is useless.
Cursor's agent mode, the Claude Code CLI, and tools like Manus are good reference points. They are not chat. They are a terminal, an editor, and a task board welded together, with an LLM driving.
The decision framework
When a team comes to us with an AI feature brief, we ask four questions:
- How well can the user articulate what they want? If the answer is "clearly, in a sentence" — conversational UI. If "vaguely, and they will know it when they see it" — chat UI. If "at a goal level, but the work takes many steps" — agent UI.
- How much work is there per turn? If it is one model call, it is chat or conversational. If it is 20 tool calls over five minutes, it is agent.
- How reversible is the work? If mistakes are cheap (generating ideas), agents can run loose. If mistakes are expensive (sending emails, committing code, charging cards), you need explicit checkpoints.
- Does the user need the answer or the process? Chat and conversational surface the answer. Agent UI surfaces the process because that is what the user is monitoring.
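The four questions above can be folded into a rough triage function. The field names and the tool-call threshold are our shorthand for this sketch, not a formal rubric.

```typescript
// Rough triage over the four scoping questions. Thresholds and enum
// values are illustrative shorthand.

type Pattern = "chat" | "conversational" | "agent";

interface Brief {
  articulation: "vague" | "clear-sentence" | "goal-level";
  toolCallsPerTurn: number; // proxy for work per turn
  irreversibleActions: boolean; // emails sent, code committed, cards charged
}

function suggestPattern(b: Brief): { pattern: Pattern; needsCheckpoints: boolean } {
  let pattern: Pattern;
  if (b.articulation === "goal-level" || b.toolCallsPerTurn > 3) {
    // Many steps of tool use per turn points at an agent workspace.
    pattern = "agent";
  } else if (b.articulation === "clear-sentence") {
    // Clearly articulable intent fits a guided, structured flow.
    pattern = "conversational";
  } else {
    // "They will know it when they see it" needs an open-ended surface.
    pattern = "chat";
  }
  // Reversibility decides checkpoints, not the pattern itself.
  return { pattern, needsCheckpoints: b.irreversibleActions };
}
```

Note that reversibility does not change the pattern, only whether explicit approval gates are required, which matches how the third question works in practice.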
If you cannot answer these four questions, the feature is not scoped enough to design yet. That is a useful finding on its own.
Anti-patterns we see constantly
Across audits and redesigns, the same mistakes show up:
- Chat UI for transactional flows. "What is my balance?" chatbots that force a conversation instead of showing a number. Always worse than a screen.
- Conversational UI with no escape hatch. Flows that box the user in with three buttons and no typed input. Users hit a wall the first time their question does not match a button.
- Agent UI disguised as chat. Long-running tool use stuffed into a message bubble. No progress, no approval, no observability. When it fails, the user has no recovery path.
- Using natural language where a form would do. If you already have six required fields, a form is better than a chatbot. Natural language is a cost, not a freebie.
- Skipping empty states. Every AI surface needs a good empty state. "What can I do here?" is the first question every user asks.
What good looks like in practice
The AI features we are proudest of shipping share a few traits:
- They pick one pattern and commit. No hybrids.
- The interface tells the user what it can do before the user has to ask.
- Ambiguity is met with clarifying questions, not guesses.
- Reversibility is mapped: irreversible actions require confirmation, reversible ones do not.
- The text input is never the only affordance.
Good AI UX is not "add a chat box." It is picking the interaction pattern that fits the task and designing around it.
Where we come in
We design and build AI features across all three patterns — chat, conversational, and agent. The right one depends on your users and the work they are doing, not on what is trendy this quarter.
If you are scoping an AI feature and want a second opinion on the interaction model before you commit, learn more about our UX/UI design and AI integration work, or reach out and we can walk through it.
