# Why Gabriel Gonzalez's "Beyond Agentic Coding" Shows Voice AI Demos Must Preserve Flow State, Not Break It
**Meta Description:** Gabriel Gonzalez critiques agentic coding for breaking flow state. His "calm technology" principles reveal why Voice AI demos must be pass-through, attention-minimizing, and calm-preserving.
---
## The Master Cue: Keep Users in Flow State
From [Gabriel Gonzalez's post](https://haskellforall.com/2026/02/beyond-agentic-coding) (93 points on HN, 6 hours old, 24 comments):
**"A good tool or interface should keep the user in a flow state as long as possible"**
Gabriel Gonzalez (Haskell expert, author of Dhall, former Google engineer) just published a deeply technical critique of agentic coding tools—not because they don't work, but because **they break flow state**.
**His core findings:**
- Agentic coding users perform **worse** in interviews (failing to complete challenges)
- Research studies show **no productivity improvement** (Becker study, Shen study)
- Screen recordings reveal **idle time approximately doubled** when using AI coding agents
- Users stay in "interruptible holding pattern" instead of flow state
**This isn't just about coding.**
**It's about Voice AI demos—and why most demo agents break flow instead of preserving it.**
---
## The Three Principles of Calm Technology
Gabriel draws on [Calm Technology](https://calmtech.com/) design principles:
**1. Tools should minimize demands on our attention**
Interruptions and intrusions break us out of flow state.
**2. Tools should be built to be "pass-through"**
The tool should **reveal** the object of our attention (not obscure it). The more we use the tool, the more it fades into the background while still supporting our work.
**3. Tools should create and enhance calm**
A state of calm helps users enter and maintain flow state.
**How does agentic coding violate these principles?**
Gabriel's diagnosis:
**Agentic coding agents:**
- Place **high demands** on attention (user waits for agent to report back or stays interruptible)
- Are **not pass-through** (highly mediated, indirect, slow, imprecise—"English is a dull interface")
- **Undermine calm** (user must constantly stimulate chat, agents fine-tuned to maximize engagement)
**Voice AI demos face the exact same challenges.**
---
## Why Most Voice AI Demos Break Flow State
**Scenario: SaaS product demo**
**Traditional demo flow (manual):**
```
Sales engineer shows features
→ Prospect asks question
→ Sales engineer immediately navigates to answer
→ Prospect stays in flow (engaged, confident, exploring)
→ Total time in flow state: ~30 minutes
```
**Generic AI chatbot demo (current approach):**
```
Prospect asks "show me billing"
→ AI transcribes (200ms)
→ AI parses intent (300ms)
→ AI navigates (100ms)
→ AI generates response (200ms)
→ Total latency: 800ms
Prospect asks follow-up: "Can I export invoices?"
→ Another 800ms wait
→ Another 800ms wait
→ Another 800ms wait
Total time in flow state: Broken after first question
User experience: Waiting, interrupted, frustrated
```
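The arithmetic behind these two fences can be made explicit: a strictly sequential pipeline pays the sum of every stage, while a streaming pipeline that overlaps stages approaches the cost of the slowest stage alone. A minimal sketch, using the illustrative stage timings from the example above (not real benchmarks):

```javascript
// Illustrative per-stage latencies (ms) from the example above -- not benchmarks.
const stages = {
  transcribe: 200,
  parseIntent: 300,
  navigate: 100,
  respond: 200,
};

// Sequential pipeline: no stage starts until the previous one finishes,
// so total latency is the sum of all stages.
function sequentialLatency(stages) {
  return Object.values(stages).reduce((total, ms) => total + ms, 0);
}

// Overlapped (streaming) pipeline: if later stages can start on partial
// input, perceived latency approaches the slowest stage plus a handoff cost.
// The 50ms handoff is an assumption for this sketch.
function overlappedLatency(stages, handoffMs = 50) {
  return Math.max(...Object.values(stages)) + handoffMs;
}

console.log(sequentialLatency(stages)); // 800
console.log(overlappedLatency(stages)); // 350
```

The same four stages produce 800ms sequentially but roughly 350ms when overlapped, which is why the streaming designs later in this post matter.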
**Gabriel's diagnosis applies perfectly:**
**Generic AI demo agents:**
- **Demand attention**: Prospect must wait for each response (800ms feels like an eternity in conversation)
- **Not pass-through**: Highly mediated (user talks to chatbot about product, not directly with product)
- **Undermine calm**: Constant stimulation required (user has to keep asking, agent doesn't passively inform)
**The demo broke flow. The prospect leaves.**
---
## Gabriel's Examples of Calm Technology in Coding
Gabriel highlights two "boring" examples that embody calm design:
### Example 1: Inlay Hints (Type Annotations)
**What they do:**
```typescript
// Without inlay hints
const user = getUser();
// With inlay hints, the editor displays the inferred type (grayed out, not part of the source)
const user: User = getUser();
```
**Why they're calm:**
- **Minimize attention**: Exist on periphery, available if interested, unobtrusive if not
- **Pass-through**: Don't replace code, enhance it—user still directly edits code
- **Enhance calm**: Inform understanding **passively** ("technology can communicate, but doesn't need to speak")
### Example 2: File Tree Previews
**What they do:**
VSCode shows file changes at-a-glance:
```
src/
components/
+ UserProfile.tsx (new)
~ Billing.tsx (modified)
- OldDashboard.tsx (deleted)
```
**Why they're calm:**
- **Minimize attention**: There if needed, easy to ignore (or forget they exist)
- **Pass-through**: Interaction feels direct, snappy, precise—representation = reality
- **Enhance calm**: Passively updates in background, unobtrusive, not attention-grabbing
**Gabriel's key insight:**
> "The best tools (designed with calm technology principles) are pervasive and **boring** things we take for granted (like light switches) that have faded so strongly into the background we forget they even exist."
**Voice AI demos should be the same: invisible enablers of flow, not conversation bottlenecks.**
---
## The GitHub Copilot Test Case: What Works, What Doesn't
Gabriel analyzes GitHub Copilot's two features:
### Feature 1: Inline Suggestions (Mixed Results)
**What works:**
- **Pass-through**: User still interacts directly with code, suggestions are snappy
**What doesn't work (by default):**
- **Demands attention**: Appears frequently, user pauses to examine output—conditions user to be reactive instead of proactive
- **Undermines calm**: Visually busy, intrusive, center of visual focus—can't passively absorb information
**Gabriel's fix:** Disable automatic suggestions, require manual trigger (`Alt + \`). But this also disables the next feature (which he likes better).
### Feature 2: Next Edit Suggestions (Excellent Flow Preservation)
**What they do:**
```
You rename variable: userName → currentUser
Copilot suggests related edits throughout file/project:
Line 47: console.log(userName) → console.log(currentUser)
Line 89: return userName.email → return currentUser.email
Line 102: if (!userName) → if (!currentUser)
User cycles through suggestions, accepts/rejects each
```
**Why this keeps flow:**
- **Minimize attention**: Bite-sized suggestions (easier to review and accept)
- **Pass-through**: User stays in close contact with code they're modifying
- **Enhance calm**: Suggestions on periphery, don't demand immediate review—user can ignore or focus at leisure
**This is the model Voice AI demos should follow.**
---
## Applying Calm Technology to Voice AI Demos
Gabriel's calm technology principles → Voice AI demo design:
### Principle 1: Minimize Demands on Attention
**Bad (chatbot approach):**
```javascript
// User waits for every response (stage latencies are illustrative)
prospect.ask("show me billing");
await agent.transcribe();   // ~200ms
await agent.parseIntent();  // ~300ms
await agent.navigate();     // ~100ms
await agent.respond();      // ~200ms
// Total: ~800ms per interaction
// User stays in "waiting mode"
```
**Good (calm approach):**
```javascript
// Predictive pre-rendering minimizes wait time
prospect.says("show me");
// Partial transcription at 300ms: "show"
agent.preload(['billing', 'users', 'reports', 'settings', 'dashboard']);
// Pre-render likely pages in hidden iframes
prospect.finishes("show me billing");
// Already rendered, just reveal
// Total visible latency: <100ms
// User stays in flow (feels instant)
```
**Visible latency cut by 87.5% (800ms → under 100ms).**
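The pre-rendering idea above can be sketched as a speculative preloader driven by partial transcripts. Everything here is hypothetical scaffolding (the keyword-to-page map, the `renderHidden` callback, the prefix-match heuristic); a real system would use a streaming speech API and an intent model:

```javascript
// Hypothetical keyword -> page map for this sketch.
const INTENT_PAGES = {
  billing: '/billing',
  users: '/users',
  reports: '/reports',
  settings: '/settings',
};

class SpeculativePreloader {
  constructor(renderHidden) {
    this.renderHidden = renderHidden; // e.g. render a page into a hidden iframe
    this.preloaded = new Set();
  }

  // Called on every partial transcript ("show", "show me", "show me bil...").
  onPartialTranscript(partial) {
    const words = partial.toLowerCase().split(/\s+/);
    for (const [keyword, page] of Object.entries(INTENT_PAGES)) {
      // Prefix match so "bil" already triggers preloading /billing.
      if (words.some((w) => w.length >= 3 && keyword.startsWith(w))) {
        if (!this.preloaded.has(page)) {
          this.preloaded.add(page);
          this.renderHidden(page);
        }
      }
    }
  }

  // Called on the final transcript: reveal instantly if already preloaded.
  onFinalTranscript(finalText, reveal) {
    const words = finalText.toLowerCase().split(/\s+/);
    for (const [keyword, page] of Object.entries(INTENT_PAGES)) {
      if (words.includes(keyword)) {
        reveal(page, this.preloaded.has(page)); // second arg: was it instant?
        return page;
      }
    }
    return null;
  }
}
```

By the time the prospect finishes "show me billing", the partial "show me bil" has already triggered the hidden render, so the reveal is just a visibility flip.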
### Principle 2: Build to Be "Pass-Through"
**Bad (chatbot approach):**
```
User: "Show me billing"
Chatbot: "Sure! Here's the billing section. You can view invoices,
set up payment methods, and manage subscriptions."
User: *reading chatbot response instead of looking at product*
User: *must ask chatbot for every action*
```
**Interaction is MEDIATED. User talks to chatbot ABOUT product, not WITH product.**
**Good (calm approach):**
```
User: "Show me billing"
Agent: *immediately navigates to billing page*
Agent: *highlights relevant features as user looks at them*
Agent: *answers questions without obscuring UI*
User directly interacts with product
Agent fades into background
More usage → agent more invisible → product more visible
```
**Interaction is DIRECT. Agent is pass-through lens, not conversation partner.**
**Gabriel's key quote:**
> "A tool is not meant to be the object of our attention; rather the tool should **reveal** the true object of our attention (the thing the tool acts upon), rather than obscuring it."
**Voice AI chatbot = object of attention (prospect focuses on chatbot responses)**
**Voice AI pass-through agent = product is object of attention (chatbot fades away)**
### Principle 3: Create and Enhance Calm
**Bad (chatbot approach):**
```javascript
// User must constantly stimulate the chatbot
const chatbotExperience = [
  { actor: 'user', action: 'Ask question' },
  { actor: 'user', action: 'Wait for response' },
  { actor: 'user', action: 'Read response' },
  { actor: 'user', action: 'Ask follow-up' },
  { actor: 'user', action: 'Wait for response' },
];
// Constant back-and-forth creates anxiety:
//   "Is it understanding me?"
//   "Why is this taking so long?"
//   "Did I ask the right question?"
```
**Good (calm approach):**
```javascript
// Agent passively informs; user explores the product naturally
const calmExperience = [
  { actor: 'user',  action: 'Look at billing page' },
  { actor: 'agent', action: 'Passive inlay hint appears ("Most customers use annual billing")' },
  { actor: 'user',  action: 'Hover over invoice export' },
  { actor: 'agent', action: 'Subtle tooltip: "Exports to CSV, PDF, QuickBooks"' },
  { actor: 'user',  action: 'Click payment methods' },
  { actor: 'agent', action: 'Page navigates instantly, contextual hints appear' },
];
// Agent communicates without speaking (unless asked)
// User absorbs information passively -- calm preserved
```
**Gabriel's principle:**
> "Technology can communicate, but doesn't need to speak."
**Voice AI doesn't mean ALWAYS voice. It means voice WHEN USEFUL, passive guidance otherwise.**
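"Voice when useful, passive guidance otherwise" can be modeled as a routing policy: information flows to a passive channel by default, and the voice channel is used only when the user explicitly asks. A minimal sketch; the trigger and channel names are invented for illustration:

```javascript
// Routing policy sketch: passive channels by default, voice only on request.
// Trigger names ('user-question', 'hover', 'page-view') are hypothetical.
function routeMessage(message) {
  switch (message.triggeredBy) {
    case 'user-question':
      return { channel: 'voice', text: message.text }; // speak only when asked
    case 'hover':
      return { channel: 'tooltip', text: message.text }; // subtle, near cursor
    case 'page-view':
    default:
      return { channel: 'inlay-hint', text: message.text }; // peripheral, ignorable
  }
}

console.log(routeMessage({ text: 'Annual plans save 30%', triggeredBy: 'page-view' }).channel);
// inlay-hint
console.log(routeMessage({ text: 'Yes, exports to CSV', triggeredBy: 'user-question' }).channel);
// voice
```

The point of the sketch is the default: unless the prospect asked a question, nothing speaks.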
---
## Gabriel's Innovative Ideas for AI-Assisted Coding → Voice AI Demos
Gabriel proposes three "beyond agentic" features for coding. Each maps to Voice AI demos:
### Idea 1: Facet-Based Project Navigation
**Coding context:**
```
Instead of file tree:
src/components/UserProfile.tsx
src/components/Billing.tsx
Show semantic facets:
User Management
└─ Profile editing
└─ Authentication
Billing
└─ Invoices
└─ Payment methods
```
**Voice AI demo equivalent:**
```
Instead of generic navigation:
"You can go to Settings, Users, Billing, or Reports"
Show intent-based facets:
What are you trying to do?
└─ Set up my first workflow → Onboarding flow
└─ Understand pricing → Billing + ROI calculator
└─ See if this integrates with our stack → Integrations page
└─ Evaluate security → Security docs + compliance
```
**Calm principle: Navigation is organized around the prospect's intent, so semantic understanding deepens with usage (not just product discovery).**
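The intent-based facets above reduce to a mapping from prospect goals to product destinations. A naive keyword-matching sketch (the facet names and routes are illustrative; a real system would use an intent classifier):

```javascript
// Intent facets: map what the prospect is trying to do to where to take them.
// Facet names and routes are invented for this sketch.
const FACETS = [
  { intent: 'Set up my first workflow', destinations: ['/onboarding'] },
  { intent: 'Understand pricing', destinations: ['/billing', '/roi-calculator'] },
  { intent: 'See if this integrates with our stack', destinations: ['/integrations'] },
  { intent: 'Evaluate security', destinations: ['/security', '/compliance'] },
];

// Naive keyword overlap scoring between the utterance and each facet label.
function resolveFacet(utterance) {
  const text = utterance.toLowerCase();
  let best = null;
  let bestScore = 0;
  for (const facet of FACETS) {
    const keywords = facet.intent.toLowerCase().split(/\s+/);
    const score = keywords.filter((k) => k.length > 3 && text.includes(k)).length;
    if (score > bestScore) {
      best = facet;
      bestScore = score;
    }
  }
  return best;
}

console.log(resolveFacet('how does pricing work for teams?').destinations);
// [ '/billing', '/roi-calculator' ]
```

One utterance can fan out to several destinations, which is exactly what a menu tree cannot do.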
### Idea 2: Automated Commit Refactor
**Coding context:**
```
Take large messy commit:
- Changed auth flow
- Fixed bug in billing
- Updated dashboard UI
- Refactored database queries
AI splits into focused commits:
1. Auth: Migrate to OAuth 2.0
2. Billing: Fix invoice tax calculation
3. Dashboard: Update chart library
4. Database: Optimize user query performance
```
**Voice AI demo equivalent:**
```
Take long demo session:
- User explored 15 different pages
- Asked 30 questions
- Saw billing, users, reports, settings, integrations
AI generates focused recap:
"Here's what you discovered:
1. Billing supports annual plans (30% discount)
2. User permissions have role-based access control
3. Reports export to CSV, PDF, and QuickBooks
4. Integrates with your existing Salesforce + Slack stack"
```
**Calm principle: AI REDUCES human review labor (rather than creating it).**
### Idea 3: File Lens ("Focus on..." and "Edit as...")
**Coding context:**
```
Focus on "command line options"
→ Only show files/lines related to CLI parsing
→ Other code hidden/collapsed (Zen mode for feature domain)
Edit as "Python" (when file is Haskell)
→ Show Python representation of Haskell code
→ User edits in familiar language
→ AI back-propagates changes to Haskell
```
**Voice AI demo equivalent:**
```
Focus on "billing workflow"
→ Only show pages/features related to billing
→ Other product areas dimmed/hidden
→ User explores billing deeply without distraction
Edit as "my company's use case"
→ Show product configured for user's industry
→ Replace generic examples with user's data
→ User sees product as if already customized
```
**Calm principle: Lens adapts to user's mental model, not vice versa.**
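The "focus lens" behavior above can be sketched as a filter over the demo's feature list: everything outside the requested domain gets dimmed rather than removed. Feature names and the `domain` tags are invented for illustration:

```javascript
// Focus lens sketch: dim everything outside the domain the user asked about.
// Feature list and domain tags are invented for this sketch.
const FEATURES = [
  { name: 'Invoices', domain: 'billing' },
  { name: 'Payment methods', domain: 'billing' },
  { name: 'User roles', domain: 'users' },
  { name: 'Dashboards', domain: 'reports' },
];

function applyFocusLens(features, focusDomain) {
  // Dim (not delete) out-of-focus features so the user can still widen the lens.
  return features.map((f) => ({ ...f, dimmed: f.domain !== focusDomain }));
}

const focused = applyFocusLens(FEATURES, 'billing');
console.log(focused.filter((f) => !f.dimmed).map((f) => f.name));
// [ 'Invoices', 'Payment methods' ]
```

Dimming instead of deleting keeps the lens reversible: asking about another domain just re-runs the filter.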
---
## Why Gabriel's Critique Applies Even More to Voice AI Demos
**Gabriel's key observation about agentic coding:**
> "Chat agents are a highly mediated interface to the code which is **indirect** (we interact more with the agent than the code), **slow** (we spend a lot of time waiting), and **imprecise** (English is a dull interface)."
**Voice AI demos magnify all three problems:**
### 1. Indirect (Worse for Demos)
**Coding:** Mediated interface to code = slower development
**Demos:** Mediated interface to product = **lost sale**
**Why it's worse:**
- Developer using chatbot to write code: annoying but functional (code eventually works)
- Prospect using chatbot to explore product: **product never feels real** (trust never builds)
**Example:**
**Agentic coding (mediated but functional):**
```
Developer: "Create a user authentication function"
AI: *generates code*
Developer: *reviews, tweaks, tests*
Result: Code works (eventually)
```
**Voice AI chatbot (mediated and dysfunctional):**
```
Prospect: "Show me how billing works"
AI: "Billing is in the Billing section. You can view invoices..."
Prospect: *reads text instead of seeing product*
Result: Product never feels tangible → no purchase
```
**Mediation breaks trust in demos more than in coding.**
### 2. Slow (Worse for Demos)
**Coding:** Developer tolerates 800ms wait (part of compile/test cycle)
**Demos:** Prospect abandons after 3 seconds of latency
**Why it's worse:**
- Developer expects wait time (patience conditioned by years of slow compilers)
- Prospect expects instant response (patience conditioned by years of fast UIs)
**Gabriel's screen recording evidence:**
> "Idle time approximately doubled when using agentic coding"
**Voice AI demo equivalent:**
```
Manual demo: 0% idle time (sales engineer responds instantly)
AI chatbot demo: 50% idle time (waiting for transcription, parsing, response)
Prospect perception:
Manual demo: "This person knows their product"
AI chatbot demo: "This feels slow and robotic"
```
**Slowness kills trust faster in demos than in coding.**
### 3. Imprecise (Worse for Demos)
**Coding:** Imprecise English prompt → AI generates code → tests catch errors
**Demos:** Imprecise question → AI misunderstands → prospect loses confidence
**Why it's worse:**
- Coding has feedback loop (tests fail → developer corrects → code improves)
- Demos have no feedback loop (AI misunderstands → prospect concludes product is bad → leaves)
**Gabriel's quote:**
> "English is a [dull interface](https://www.cs.utexas.edu/~EWD/transcriptions/EWD06xx/EWD667.html)" (Dijkstra, 1982)
**Voice AI demo failure mode:**
```
Prospect: "Can I customize the dashboard?"
AI misinterprets as: "Can I change dashboard colors?"
AI: "Yes! You can select from 12 theme colors"
Prospect actually meant: "Can I add custom widgets?"
Prospect concludes: Product lacks flexibility → leaves
```
**Imprecision = lost deals in demos, not just slow coding.**
---
## The Becker Study Evidence: Why Flow State Matters
Gabriel cites the [Becker study](https://metr.org/blog/2025-07-10-early-2025-ai-experienced-os-dev-study/) showing **agentic coding increases idle time**:
**Key finding:**
> "Screen recordings showed that idle time approximately doubled"
**Why this matters for Voice AI demos:**
**Developer idle time (coding):**
- Developer waits for AI to generate code
- Developer can check email, Slack, other tasks
- Context switch cost: annoying but acceptable
**Prospect idle time (demo):**
- Prospect waits for AI to respond
- Prospect has no other tasks (they're in a demo!)
- Context switch cost: **demo ends**
**The attention window is different:**
**Coding session:** 2-4 hours (developer stays engaged through multiple interruptions)
**Demo session:** 5-15 minutes (prospect leaves after 3 interruptions)
**Flow state breaks differently:**
**Coding:** Developer re-enters flow after each interruption (high activation energy, but sustainable)
**Demo:** Prospect never re-enters flow (each interruption reduces trust, compounding effect)
**Gabriel's observation about interview candidates applies even more to demos:**
> "Candidates who used agentic coding consistently performed **worse** than other candidates, failing to complete the challenge or producing incorrect results."
**Voice AI demo equivalent:**
> "Prospects who use chatbot demos consistently convert **worse** than prospects with manual demos, failing to understand the product or producing incorrect mental models."
**Flow state isn't optional—it's the difference between conversion and abandonment.**
---
## The "Boring" Tools Test: Is Your Voice AI Demo Invisible?
Gabriel's key insight:
> "The best tools (designed with calm technology principles) are pervasive and **boring** things we take for granted (like light switches) that have faded so strongly into the background we forget they even exist."
**Apply this test to Voice AI demos:**
**Question 1: Does the prospect remember the demo agent, or the product?**
**Bad sign:** "That AI chatbot was impressive!"
**Good sign:** "The product looks really powerful—I want to try it with our data"
**Question 2: Does the demo agent fade into the background, or demand attention?**
**Bad sign:** Prospect spends 80% of time reading chatbot responses, 20% looking at product
**Good sign:** Prospect spends 95% of time exploring product, 5% noticing agent guidance
**Question 3: Does usage make the agent more invisible, or more present?**
**Bad sign:** Prospect asks more questions over time (increasingly dependent on chatbot)
**Good sign:** Prospect asks fewer questions over time (learns product structure, agent becomes unnecessary)
**The light switch test:**
**Light switch (calm technology):**
- You flip switch → light turns on
- After 1 week: You don't even think about the switch, just the light
- After 1 year: Switch is invisible, only darkness/light matters
**Voice AI demo (calm technology):**
- You ask question → product reveals answer
- After 5 minutes: You don't think about the agent, just the product
- After 15 minutes: Agent is invisible, only product capabilities matter
**Voice AI chatbot (not calm):**
- You ask question → chatbot responds
- After 5 minutes: You're focused on asking the right questions
- After 15 minutes: Chatbot is the experience, product is secondary
**If the prospect remembers your demo agent more than your product, you failed the calm test.**
---
## Gabriel's "Next Edit Suggestions" → Voice AI "Next Exploration Suggestions"
Gabriel's favorite GitHub Copilot feature:
**Next Edit Suggestions (coding):**
```
You rename variable: userName → currentUser
Copilot suggests related edits:
Line 47: console.log(userName) → console.log(currentUser) [Accept]
Line 89: return userName.email → return currentUser.email [Accept]
Line 102: if (!userName) → if (!currentUser) [Accept]
User cycles through, accepting/rejecting each
```
**Why this preserves flow:**
- Bite-sized (easy to review)
- Pass-through (user stays in contact with code)
- Calm (suggestions on periphery, not center of attention)
**Voice AI demo equivalent:**
**Next Exploration Suggestions:**
```
User views billing page
Agent suggests related explorations (peripherally, not intrusively):
↗ "Most users also check: Payment methods" [Navigate]
↗ "Related: Invoice export options" [Show]
↗ "Common question: Annual vs monthly pricing" [Explain]
User clicks suggestion → instantly navigates
User ignores suggestion → continues exploring freely
```
**Why this preserves flow:**
- Bite-sized (one-click exploration, not conversation)
- Pass-through (user navigates product directly, not through chatbot)
- Calm (suggestions available but unobtrusive, user controls timing)
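The suggestion mechanic can be sketched as a per-page lookup ranked by how often other users followed each path. The page graph and counts below are invented for illustration; a real system would aggregate them from session analytics:

```javascript
// Next-exploration suggestions sketch: for the current page, surface the
// paths other users most often took next. Counts are invented for this sketch.
const NEXT_PAGE_COUNTS = {
  '/billing': { '/payment-methods': 120, '/invoices/export': 85, '/pricing': 60 },
  '/users': { '/roles': 90, '/sso': 40 },
};

function nextExplorations(currentPage, limit = 2) {
  const counts = NEXT_PAGE_COUNTS[currentPage] || {};
  return Object.entries(counts)
    .sort((a, b) => b[1] - a[1]) // most-followed paths first
    .slice(0, limit)             // bite-sized: only a couple of suggestions
    .map(([page]) => page);
}

console.log(nextExplorations('/billing'));
// [ '/payment-methods', '/invoices/export' ]
```

Capping the list at two or three entries is the calm part: suggestions stay on the periphery instead of becoming a second navigation menu.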
**This is the "super-charged find and replace" for demos:**
**GitHub Copilot Next Edit = AI-powered refactoring assistant**
**Voice AI Next Exploration = AI-powered demo navigator**
**Both keep user in flow. Both make AI invisible. Both embody calm technology.**
---
## Why "Chat Is the Least Interesting Interface to LLMs"
Gabriel's related post: ["Chat is the least interesting interface to LLMs"](https://haskellforall.com/2026/01/chat-is-least-interesting-interface-to)
**His argument:**
Chat interfaces are the **easiest** way to build LLM applications, but the **least innovative**.
**Why chat is overused:**
- Low barrier to entry (everyone can build a chatbot)
- Familiar mental model (text in → text out)
- Requires minimal UX design (conversation = universal interface)
**Why chat is limiting:**
- Forces linear interaction (one question at a time)
- Obscures context (user can't see "state" of conversation)
- Maximizes mediation (user talks ABOUT thing, not WITH thing)
**Voice AI demos fall into the same trap:**
**Why voice chatbots are overused:**
- Low barrier to entry (voice input → LLM → voice output)
- Familiar mental model (talk to AI like talking to person)
- Requires minimal UX design ("just ask me anything!")
**Why voice chatbots are limiting:**
- Forces linear interaction (one question at a time, no parallel exploration)
- Obscures context (user can't see what agent "knows" about their interests)
- Maximizes mediation (user talks ABOUT product, not WITH product)
**Gabriel's alternative vision:**
> "I strongly believe that chat is the least interesting interface to LLMs and AI-assisted software development is no exception to this."
**Voice AI alternative vision:**
**Chat-based demo:**
```
User: "Show me billing"
Agent: "Here's the billing section. You can view invoices, set up payment methods..."
User: *reads response*
User: "What about exporting?"
Agent: "Yes, you can export to CSV, PDF..."
```
**Beyond-chat demo (calm technology):**
```
User: "Show me billing"
Agent: *instantly navigates to billing page*
Agent: *inlay hints appear passively*
- "Annual plans save 30%" (appears near pricing toggle)
- "Export: CSV, PDF, QuickBooks" (appears near export button)
User: *explores product directly, absorbs information passively*
User: "What integrations do you have?"
Agent: *navigates to integrations page, highlights user's existing stack*
```
**Chat = conversation partner (demands attention, breaks flow)**
**Calm demo = invisible guide (passively informs, preserves flow)**
**The best interface makes LLM invisible. The best demo makes agent invisible.**
---
## The Interview Failure Pattern: Why Agentic Coders Perform Worse
Gabriel's shocking observation:
> "I allow interview candidates to use agentic coding tools and candidates who do so consistently performed **worse** than other candidates, failing to complete the challenge or producing incorrect results."
**Why this happens:**
**Agentic coding breaks the mental model:**
```
Non-agentic coder:
1. Reads problem
2. Thinks about solution
3. Writes code
4. Tests code
5. Mental model: "I understand this problem"
Agentic coder:
1. Reads problem
2. Prompts AI to write solution
3. Waits for AI
4. Copies AI output
5. Mental model: "The AI understands this problem"
Interview question: "Why did you choose this approach?"
Non-agentic coder: *explains reasoning*
Agentic coder: *doesn't know (AI made the choice)*
```
**Gabriel's footnote:**
> "Getting the correct output wasn't even supposed to be the hard part... vibe coders would not only fail to match the golden output but sometimes **not even realize** their program didn't match... because they hadn't even run their agentically coded solution to check if it was correct."
**Voice AI demo equivalent:**
**Manual demo (sales engineer):**
```
Sales engineer shows billing
Prospect: "Why is annual cheaper?"
Sales engineer: "We optimize for retention—annual customers cost us less to serve, so we pass savings to you"
Prospect mental model: "This person understands their business"
```
**AI chatbot demo:**
```
AI shows billing page
Prospect: "Why is annual cheaper?"
AI: "Annual plans offer a discount compared to monthly billing"
Prospect mental model: "This AI doesn't understand pricing strategy"
Prospect leaves
```
**The AI can't explain WHY because it doesn't have the sales engineer's expertise encoded.**
**Agentic coding without understanding → interview failure**
**AI demo without expertise encoding → sales failure**
**Same pattern, different domain.**
---
## Conclusion: Voice AI Demos Must Preserve Flow, Not Break It
Gabriel Gonzalez's critique of agentic coding reveals the fundamental flaw in chat-based AI assistants: **they break flow state**.
**His three calm technology principles apply perfectly to Voice AI demos:**
### 1. Minimize Demands on Attention
**Coding:** Inline suggestions appear passively, don't interrupt flow
**Demos:** Predictive pre-rendering reduces latency from 800ms → <100ms, keeps prospect exploring
### 2. Build to Be "Pass-Through"
**Coding:** User directly edits code, AI enhances experience without obscuring code
**Demos:** User directly explores product, AI guides navigation without obscuring product
### 3. Create and Enhance Calm
**Coding:** Inlay hints inform passively ("technology can communicate, but doesn't need to speak")
**Demos:** Contextual tooltips inform passively (agent guides without demanding conversation)
**Gabriel's key insight:**
> "The best tools are pervasive and **boring** things we take for granted (like light switches) that have faded into the background."
**Voice AI demos should be the same:**
**Bad demo:** Prospect remembers impressive AI chatbot
**Good demo:** Prospect remembers powerful product (agent invisible)
**Gabriel's evidence from coding applies even more to demos:**
**Agentic coding problems:**
- Idle time doubled (screen recordings)
- Interview candidates perform worse (fail to complete challenges)
- No productivity improvement (Becker study, Shen study)
**Voice AI chatbot demo problems:**
- Wait time kills trust (prospect expects instant response)
- Prospects convert worse (fail to understand product)
- No sales improvement (mediation breaks flow)
**The solution isn't to abandon AI—it's to design calm AI:**
**Calm Voice AI demo:**
- Predictive pre-rendering (attention-minimizing)
- Pass-through navigation (direct product interaction)
- Passive inlay hints (calm-preserving)
- Next exploration suggestions (flow-maintaining)
**Gabriel's conclusion:**
> "I strongly believe that chat is the least interesting interface to LLMs and AI-assisted software development is no exception to this."
**Voice AI conclusion:**
**Chat is the least interesting interface to demos.**
**Flow state is the goal. Calm technology is the method. Invisible agents are the result.**
**Don't build chatbots that break flow. Build calm agents that preserve it.**
---
## References
- Gabriel Gonzalez. (2026). [Beyond agentic coding](https://haskellforall.com/2026/02/beyond-agentic-coding)
- Gabriel Gonzalez. (2026). [My experience with vibe coding](https://haskellforall.com/2026/02/my-experience-with-vibe-coding)
- Gabriel Gonzalez. (2026). [Chat is the least interesting interface to LLMs](https://haskellforall.com/2026/01/chat-is-least-interesting-interface-to)
- Calm Technology. [Design principles](https://calmtech.com/)
- METR. (2025). [Becker study: AI-experienced OS dev study](https://metr.org/blog/2025-07-10-early-2025-ai-experienced-os-dev-study/)
---
**About Demogod:** Voice AI demo agents built with calm technology principles. Predictive pre-rendering, pass-through navigation, passive guidance. Flow state preserved, not broken. Agents invisible, product visible. [Learn more →](https://demogod.me)