# Ghostty's "Bad AI Drivers Will Be Banned and Ridiculed" Policy Shows Why Trust Requires Verification—Voice AI for Demos Proves the Same Pattern
Ghostty (the terminal emulator that hit 42K stars on GitHub) just published its AI Usage Policy.
**The final rule: "Bad AI drivers will be banned and ridiculed in public. You've been warned."**
Not "AI is banned." Not "AI is discouraged." But: **If you use AI without verification, you're out.**
The policy, whose HN thread drew 298 points and 147 comments in 4 hours, reveals a fundamental truth about AI in 2026:
**The problem isn't AI. It's unverified AI.**
Ghostty's maintainer Mitchell Hashimoto writes: "Our reason for the strict AI policy is not due to an anti-AI stance, but instead due to the number of highly unqualified people using AI. It's the people, not the tools, that are the problem."
The HN discussion splits between "Finally, someone said it" and "This is too harsh." But there's a deeper pattern here that applies directly to Voice AI for demos:
**Trust requires verification. Voice AI for demos works because DOM reading is verifiable. Ghostty's policy works because it demands the same: Test your AI-generated code before submitting.**
Both succeed by making verification mandatory, not optional.
## The Ghostty Policy: Six Rules to Ban "Bad AI Drivers"
Here's what Ghostty requires for AI usage in contributions:
**Rule 1: All AI usage must be disclosed.**
- State the tool (Claude Code, Cursor, Amp, etc.)
- State the extent of AI assistance
- No hiding, no ambiguity
**Rule 2: AI-generated PRs can only be for accepted issues.**
- Drive-by PRs without issue reference → closed
- If maintainer suspects undisclosed AI → closed
- Non-accepted issues → discussions only, not PRs
**Rule 3: AI-generated code must be fully verified with human testing.**
- No hypothetically correct code
- No code for platforms you can't test
- Manual verification required before submission
**Rule 4: Issues and discussions require human-in-the-loop editing.**
- AI content must be reviewed *and edited* by humans
- AI is "overly verbose and includes noise"
- Humans must trim and focus the content
**Rule 5: No AI-generated media (images, videos, audio).**
- Text and code only
- Human-created visuals required
**Rule 6: Bad AI drivers will be banned and ridiculed.**
- Not a threat—a promise
- Ghostty helps junior developers learn
- But AI without verification? Out.
**What this reveals:** Ghostty doesn't ban AI. They ban *unverified* AI. The policy is entirely about verification.
## Why This Policy Exists: The Submission Tsunami of Unverified AI Slop
Mitchell explains the context:
> "Every discussion, issue, and pull request is read and reviewed by humans. It is a boundary point at which people interact with each other and the work done. It is rude and disrespectful to approach this boundary with low-effort, unqualified work, since it puts the burden of validation on the maintainer."
Translation: **Unverified AI shifts validation burden from contributor to maintainer.**
**Before AI:**
- Contributor writes code
- Contributor tests code
- Contributor submits working PR
- Maintainer reviews for correctness, style, fit
- Burden: Shared (contributor validates, maintainer reviews)
**After AI (unverified):**
- Contributor uses AI to write code
- Contributor submits without testing
- Maintainer discovers it doesn't compile / has bugs / targets wrong platform
- Maintainer closes PR, explains why it's broken
- Burden: Shifted entirely to maintainer
**Ghostty's policy reverses this:** Contributors must verify AI output *before* submission. Maintainers review working code instead of debugging hypothetical code.
This is exactly the architectural difference between Voice AI for demos (verifies DOM before acting) and naive AI navigation (generates selectors without verification).
## The Parallel to Voice AI: DOM Verification Prevents "Bad AI Drivers"
Voice AI for demos has the same problem Ghostty faces: Users don't trust AI navigation because most AI navigation is unverified.
**What Ghostty's policy demands:**
- AI generates code → Human verifies it works → Submit
- No verification → No submission
- Verification prevents wasted maintainer time
**What Voice AI for demos does:**
- AI interprets user intent ("Click login") → System reads DOM to verify element exists → Execute
- No verification → Error returned ("No login button found")
- Verification prevents broken navigation
Both architectures succeed by making verification mandatory before execution.
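The verify-before-execute loop can be sketched in a few lines. This is an illustrative model, not the product's actual implementation: the `AXNode` shape, `findByIntent`, and `executeCommand` are hypothetical names, and a real system would dispatch a genuine click instead of returning a string.

```typescript
// Minimal sketch of verify-before-execute navigation (illustrative names).
interface AXNode {
  role: string;          // e.g. "button", "link"
  name: string;          // accessible name, e.g. "Sign in"
  children?: AXNode[];
}

// Search the accessibility tree for a node matching the user's intent.
function findByIntent(tree: AXNode, role: string, namePattern: RegExp): AXNode | null {
  if (tree.role === role && namePattern.test(tree.name)) return tree;
  for (const child of tree.children ?? []) {
    const hit = findByIntent(child, role, namePattern);
    if (hit) return hit;
  }
  return null;
}

// Execute only after verification; otherwise return an explicit error.
function executeCommand(tree: AXNode, role: string, namePattern: RegExp): string {
  const target = findByIntent(tree, role, namePattern);
  if (!target) return `Error: no ${role} matching user intent found`;
  return `Clicked "${target.name}"`; // a real system would dispatch the click here
}

// A toy page: the login control is labeled "Sign in", not "Login".
const page: AXNode = {
  role: "document", name: "Home",
  children: [
    { role: "navigation", name: "Main", children: [
      { role: "button", name: "Sign in" },
    ]},
  ],
};

console.log(executeCommand(page, "button", /log ?in|sign ?in/i));
// → Clicked "Sign in"
```

The key property: the error path ("no button matching user intent found") fires *before* any action, so the user gets a clear failure message instead of a broken click.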
## The Three Types of "Bad AI Drivers" Ghostty Bans (and Voice AI Prevents)
Ghostty's policy reveals three failure modes of unverified AI that Voice AI for demos also prevents:
### 1. Hypothetically Correct But Untested (The "It Should Work" Problem)
**Ghostty example:**
AI generates code for macOS when contributor only has Linux. Contributor submits PR without testing. Maintainer discovers it doesn't compile on macOS.
**Why it happens:** AI generates plausible code based on training data ("macOS uses these APIs"), but contributor doesn't verify platform-specific behavior.
**Ghostty's fix:** Rule 3 requires testing on actual platforms. No hypothetical correctness.
**Voice AI equivalent:**
AI generates `.login-button` selector because most sites use that class. System tries to click without reading DOM. Fails because actual site uses `#userSignIn`.
**Why it happens:** AI generates plausible selector from training data, but doesn't verify actual DOM structure.
**Voice AI's fix:** Read accessibility tree to find actual login elements. No hypothetical selectors.
Both fail when AI generates from memory instead of reading ground truth.
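The memory-vs-ground-truth distinction is easy to make concrete. A minimal sketch, with the page modeled as a set of selectors; `guessSelector` and `readDom` are hypothetical helpers, not real APIs:

```typescript
// The actual page: only "#userSignIn" exists; there is no ".login-button".
const actualSelectors = new Set(["#userSignIn", "#nav", ".hero"]);

// Memory-based guess: plausible from training data, never checked against the page.
function guessSelector(): string {
  return ".login-button"; // common on many sites, absent on this one
}

// Ground-truth lookup: return only candidates that exist on the actual page.
function readDom(candidates: string[]): string | null {
  return candidates.find((s) => actualSelectors.has(s)) ?? null;
}

console.log(actualSelectors.has(guessSelector())); // false: the guess misses
console.log(readDom([".login-button", "#userSignIn"])); // "#userSignIn": verified hit
```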
### 2. Overly Verbose and Noisy (The "AI Wrote Too Much" Problem)
**Ghostty example:**
AI writes detailed issue description with 5 paragraphs, 10 bullet points, and 3 code examples when the actual bug is "button doesn't respond to clicks."
**Why it happens:** AI trained on verbose documentation generates comprehensive explanations regardless of whether verbosity adds value.
**Ghostty's fix:** Rule 4 requires human editing to trim noise and focus on signal.
**Voice AI equivalent:**
AI generates verbose natural language responses ("I will now navigate to the login page by clicking on the button located in the top-right corner of the navigation bar...") when user just wants the action performed.
**Why it happens:** AI trained on conversational data generates explanations when user wants results.
**Voice AI's fix:** Execute navigation directly, provide concise confirmation ("Navigating to login"). User wants speed, not narration.
Both fail when AI prioritizes completeness over conciseness.
### 3. Untested Platform-Specific Code (The "Works on My Machine" Problem)
**Ghostty example:**
AI generates Linux-specific code. Contributor only tests on Linux. PR breaks macOS and Windows builds.
**Why it happens:** AI doesn't know contributor's testing constraints. Contributor trusts AI's cross-platform claims without verification.
**Ghostty's fix:** Rule 3 forbids code for platforms contributor can't test. If you don't have the platform, don't submit code for it.
**Voice AI equivalent:**
AI generates navigation for Chrome. System doesn't verify browser compatibility. Navigation breaks in Firefox/Safari due to different DOM APIs.
**Why it happens:** AI trained on Chrome examples generates Chrome-specific code. System trusts AI without browser verification.
**Voice AI's fix:** Read DOM using standard accessibility APIs that work across browsers. Verify element exists before acting.
Both fail when AI assumes universality without verification.
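Role-based lookup is what makes cross-browser verification possible: instead of browser-specific selectors, resolve elements by their standard ARIA role. The sketch below models a tiny slice of the HTML-AAM implicit-role mapping; the `El` shape is illustrative and the table covers only a few tags (a simplification, e.g. `input` is treated as always `type="text"`).

```typescript
// Illustrative element shape (not a real DOM type).
interface El { tag: string; role?: string; href?: string }

// Implicit ARIA roles for a few common HTML tags (simplified HTML-AAM subset).
function computedRole(el: El): string | null {
  if (el.role) return el.role;              // explicit role attribute wins
  switch (el.tag) {
    case "button": return "button";
    case "a": return el.href ? "link" : null; // <a> without href is not a link
    case "input": return "textbox";           // simplification: text inputs only
    default: return null;
  }
}

const els: El[] = [
  { tag: "div", role: "button" },  // ARIA-widget div still resolves as a button
  { tag: "a", href: "/login" },
  { tag: "a" },                    // placeholder anchor, no link role
];

console.log(els.map(computedRole)); // → [ 'button', 'link', null ]
```

Because roles are defined by the WAI-ARIA standard rather than by any one browser's DOM quirks, a lookup keyed on `computedRole` behaves the same in Chrome, Firefox, and Safari.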
## The Cost of Unverified AI: Why Ghostty Had to Write This Policy
Ghostty's policy exists because unverified AI created measurable costs:
**Before the policy:**
- Maintainers spent hours debugging AI-generated PRs
- Contributors submitted untested code expecting maintainers to validate
- Low-quality PRs outnumbered high-quality PRs
- Maintainer burnout from validation burden
**The trigger (from HN comments):**
Users reported PRs with:
- Code targeting macOS when contributor only had Linux
- Verbose issue descriptions that buried the actual bug
- Solutions to problems that didn't exist (AI hallucinated the bug)
- Copy-pasted AI responses with "As an AI language model..." still in the text
**The result:** Mitchell wrote the policy, deployed it, and now enforces it with "ban and ridicule" consequences.
**Voice AI for demos faces the same pattern:**
**Before DOM verification:**
- Users clicked "Try Demo" expecting guided navigation
- AI generated plausible selectors without reading page
- Navigation failed ("Element not found")
- Users bounced, blamed AI for being unreliable
**The trigger:**
Demos with:
- Selectors for elements that didn't exist (AI hallucinated structure)
- Navigation to wrong elements (AI guessed from class names)
- Failures on edge cases (AI trained on common patterns, not actual site)
**The result:** DOM-reading architecture, deployed as core system design, now prevents all three failure modes.
## Why "Ban and Ridicule" Sounds Harsh But Is Correct
The HN discussion debates whether "ban and ridicule" is too extreme. Mitchell's reasoning:
> "We love to help junior developers learn and grow, but if you're interested in that then don't use AI, and we'll help you."
Translation: **AI doesn't help you learn. Humans do.**
The policy distinguishes between:
**Junior developers without AI:**
- Write code, get stuck, ask for help
- Maintainers explain the mistake, show correct approach
- Developer learns, improves, becomes contributor
- Outcome: Community grows, codebase improves
**Junior developers with AI (unverified):**
- Let AI write code, submit without understanding
- Maintainers find bugs, close PR
- Developer doesn't learn (AI wrote it, developer doesn't understand the fix)
- Outcome: Maintainer wasted time, developer didn't grow
**"Ban and ridicule" targets the second group.** Not because they used AI, but because they used it as a substitute for understanding.
Voice AI for demos has the same principle:
**User explores demo manually:**
- Clicks around, discovers features, learns product
- Gets stuck, asks Voice AI for help
- Voice AI navigates to relevant section, explains feature
- Outcome: User understands product, converts
**User lets Voice AI do everything:**
- Doesn't explore, just asks Voice AI for a tour
- Voice AI narrates features without user engagement
- User watches passively, doesn't interact
- Outcome: User didn't engage with product, bounces after demo
The architecture difference: Voice AI *guides* exploration (user-initiated commands), doesn't *replace* it (automatic tours).
## The Three Reasons Verification-First AI Scales Better Than Trust-First AI
Ghostty's policy reveals why requiring verification scales better than trusting AI by default:
### 1. Verification Prevents Compound Errors (Catches Problems Early)
**Trust-first AI (Ghostty without policy):**
1. Contributor submits unverified AI-generated PR
2. Maintainer spends 30 minutes reviewing
3. Discovers code doesn't compile
4. Closes PR, writes explanation
5. Contributor fixes, resubmits
6. Maintainer reviews again (another 30 minutes)
7. Discovers new bugs from AI's "fix"
8. Cycle repeats
**Result:** Hours wasted debugging AI errors that contributor never tested.
**Verification-first AI (Ghostty with policy):**
1. Contributor uses AI to generate code
2. Contributor tests locally, finds compilation error
3. Contributor fixes or asks AI to regenerate
4. Contributor verifies fix works
5. Submits working PR
6. Maintainer reviews for style/fit, not correctness
**Result:** Minutes spent reviewing working code.
Voice AI has the same advantage:
**Trust-first AI navigation:**
1. User asks "Click login"
2. AI generates `.login-button` selector
3. System tries to click without verification
4. Element doesn't exist → error
5. User asks again
6. AI generates new selector
7. Still wrong → error
8. User gives up
**Verification-first AI navigation:**
1. User asks "Click login"
2. AI interprets intent
3. System reads DOM for login elements
4. Finds the actual login element (`#userSignIn` in the earlier example), clicks it
**Result:** Navigation succeeds on the first attempt, because the system acted on the actual page, not a guess.