Miklós Koren: "Vibe Coding Kills Open Source" (118 HN Points, 89 Comments)—AI Agents Assemble OSS Without User Engagement, Weakening Maintainer Returns—Voice AI for Demos Takes Same Path: Users Skip Docs/Support, Navigate via Voice, Change Demo Economics

# Miklós Koren: "Vibe Coding Kills Open Source" (118 HN Points, 89 Comments)—AI Agents Assemble OSS Without User Engagement, Weakening Maintainer Returns—Voice AI for Demos Takes Same Path: Users Skip Docs/Support, Navigate via Voice, Change Demo Economics ## Meta Description Miklós Koren's "Vibe Coding Kills Open Source" paper (118 HN points) shows how AI agents assembling OSS without user engagement weakens maintainer returns—lower entry, reduced quality despite higher productivity. Voice AI for demos follows identical pattern: users skip docs and support tickets, navigate via voice commands, force demo creators to rethink compensation beyond traditional user engagement metrics. Both prove AI intermediation changes value delivery economics. --- ## H1: Miklós Koren's Vibe Coding Paper Tops Hacker News—AI Agents Assembling OSS Without User Engagement Weakens Maintainer Returns, Reduces Quality Despite Higher Productivity—Voice AI for Demos Applies Same Economics: Skip Docs, Navigate via Voice, Change Compensation Models Miklós Koren (with Gábor Békés, Julian Hinz, Aaron Lohmann) published **"Vibe Coding Kills Open Source"** on arXiv ([2601.15494](https://arxiv.org/abs/2601.15494)). The paper hit **#2 on Hacker News** with **118 points** and **89 comments** within 1 hour. **Core argument:** When AI agents build software by selecting and assembling open-source software (OSS) **without users directly reading documentation, reporting bugs, or otherwise engaging with maintainers**, the user engagement through which many maintainers earn returns weakens. Result: **greater vibe coding adoption → lower OSS entry and sharing → reduced availability and quality → reduced welfare despite higher productivity.** Sustaining OSS at its current scale requires **major changes in how maintainers are paid.** **Parallel in demos:** Voice AI for website demos creates identical economics. Users skip documentation ("How do I filter this table?") and support tickets ("Where's the export button?") because voice commands handle navigation ("Show me Q4 revenue"). Demo creators lose traditional engagement signals (docs views, support volume, onboarding completion) that justified their work. Like OSS maintainers adapting to vibe coding, demo creators need new compensation models beyond user engagement metrics. This analysis connects Koren's OSS maintainer economics to Voice AI demo creator economics, showing both prove **AI intermediation changes value delivery from direct user engagement to automated intermediation.** --- ## H2: What Koren's Paper Argues—Vibe Coding Raises Productivity by Lowering OSS Usage Costs, But Weakens User Engagement Through Which Maintainers Earn Returns ### The Vibe Coding Model From Koren's abstract: > "In vibe coding, an AI agent builds software by selecting and assembling open-source software (OSS), often without users directly reading documentation, reporting bugs, or otherwise engaging with maintainers." **Traditional OSS workflow:** 1. Developer encounters problem (need data validation library) 2. Reads documentation (how does Joi schema validation work?) 3. Reports bugs (validation fails on nested objects) 4. Engages with maintainers (feature request: custom error messages) 5. Maintainer earns returns through this engagement (reputation, sponsorships, enterprise support contracts) **Vibe coding workflow:** 1. Developer describes problem to AI agent ("Need data validation for user signup") 2. AI agent selects OSS library (Joi for Node.js validation) 3. AI agent assembles code (generates Joi schemas, integrates into Express middleware) 4. **No documentation reading, no bug reports, no maintainer engagement** **Productivity gain is obvious:** Developer ships feature in 5 minutes vs 50 minutes (90% time savings). AI agent handles library selection, documentation parsing, code assembly. **Hidden cost is subtle:** Maintainer no longer receives: - Documentation views (usage signals for prioritizing updates) - Bug reports (quality feedback loop) - Feature requests (roadmap guidance) - Sponsorships (financial returns from engaged users) - Enterprise contracts (companies that rely on library pay for support) When thousands of developers adopt vibe coding, maintainer returns collapse despite library usage staying constant or increasing. ### Equilibrium Effects on OSS Ecosystem Koren models this as an economic system with **endogenous entry** (maintainers decide whether to publish OSS) and **heterogeneous project quality** (OSS ranges from excellent to broken). **Key findings:** 1. **Lower entry:** When maintainer returns drop (no bug reports, no sponsorships), fewer developers publish OSS. Why maintain a library if users never engage? 2. **Reduced sharing:** Developers who do maintain OSS share less (delay releases, limit access) to preserve direct user relationships that still pay returns. 3. **Lower availability and quality:** With fewer maintainers entering and existing maintainers sharing less, OSS availability drops. Quality drops because bug reports dry up—maintainers can't fix issues users never report. 4. **Productivity paradox:** Vibe coding raises individual developer productivity (ship features faster), but **lowers ecosystem welfare** (less OSS to build on, lower quality, harder to sustain). **Direct quote from abstract:** > "When OSS is monetized only through direct user engagement, greater adoption of vibe coding lowers entry and sharing, reduces the availability and quality of OSS, and reduces welfare despite higher productivity." **Solution requirement:** > "Sustaining OSS at its current scale under widespread vibe coding requires major changes in how maintainers are paid." If maintainers can't earn returns through user engagement (documentation views, bug reports, enterprise support), they need alternative compensation: - Direct payments for library usage (per-download fees, subscription models) - Platform subsidies (GitHub Sponsors at scale, foundation grants) - AI provider royalties (OpenAI/Anthropic pay maintainers when their libraries appear in vibe coding outputs) Without these changes, OSS ecosystem collapses under widespread vibe coding. --- ## H2: How This Maps to Voice AI for Demos—Users Skip Docs and Support Tickets, Navigate via Voice Commands, Weaken Demo Creator Returns ### Voice AI Creates Identical User Engagement Weakening **Traditional demo workflow:** 1. User encounters obstacle (can't find quarterly revenue report) 2. Reads documentation ("How to use advanced filters") 3. Submits support ticket ("Filter by date range isn't working") 4. Engages with demo creator (feature request: save filter presets) 5. Demo creator earns returns through engagement (docs justify onboarding team, support volume justifies customer success headcount, feature requests guide product roadmap) **Voice AI demo workflow:** 1. User encounters obstacle (can't find quarterly revenue report) 2. Asks voice agent ("Show me Q4 revenue") 3. Voice agent navigates ("The Q4 revenue report is in the Analytics section. I'll take you there. You can filter by date range using this dropdown. Here's your report.") 4. **No documentation reading, no support tickets, no demo creator engagement** **Productivity gain is obvious:** User finds report in 10 seconds vs 5 minutes (96% time savings). Voice AI handles navigation, explains features, troubleshoots errors. **Hidden cost is subtle:** Demo creator no longer receives: - Documentation views (signals that onboarding content needs updating) - Support ticket volume (justification for customer success team budget) - Feature requests (roadmap guidance from users struggling with current features) - Engagement metrics (time spent exploring demo, features discovered, completion rates) When thousands of users adopt Voice AI navigation, demo creator returns collapse despite demo usage staying constant or increasing. ### Equilibrium Effects on Demo Ecosystem Apply Koren's OSS model to demos: **1. Lower entry:** When demo creator returns drop (no support tickets, no engagement metrics), fewer teams invest in sophisticated demos. Why build a 15-screen interactive walkthrough if users never click through it? **2. Reduced sharing:** Demo creators who do build sophisticated demos share less (gate behind paywalls, require sales calls) to preserve direct user relationships that still pay returns (enterprise demos with white-glove onboarding). **3. Lower availability and quality:** With fewer demo creators entering and existing creators sharing less, demo availability drops. Quality drops because support tickets dry up—creators can't fix navigation issues users never report. **4. Productivity paradox:** Voice AI raises individual user productivity (find features faster), but **lowers demo ecosystem welfare** (fewer sophisticated demos to learn from, lower quality navigation, harder to sustain investment). **Solution requirement (mirroring Koren's conclusion):** > "Sustaining demos at their current scale under widespread Voice AI requires major changes in how demo creators are paid." If demo creators can't earn returns through user engagement (docs views, support volume, engagement metrics), they need alternative compensation: - Direct usage fees (pay per voice-guided session, subscription for Voice AI access) - Platform subsidies (Voice AI provider shares revenue with demo creators whose demos are navigated) - Engagement royalties (demo creators get paid when Voice AI successfully guides users through their demos) Without these changes, demo ecosystem collapses under widespread Voice AI adoption. --- ## H2: Why This Economics Pattern Matters for Both OSS and Demos—AI Intermediation Changes Value Delivery from Direct Engagement to Automated Intermediation ### Pattern: Intermediation Weakens Producer-User Relationship **OSS maintainers depend on user engagement:** - Bug reports improve quality (users test edge cases maintainers miss) - Feature requests guide roadmap (maintainers prioritize what users need) - Documentation views signal usage (maintainers update popular sections) - Sponsorships/contracts provide revenue (engaged users pay for support) **Vibe coding breaks this loop:** - AI agent reads documentation (maintainer never knows which sections are useful) - AI agent reports no bugs (generates workarounds instead of flagging issues) - AI agent requests no features (adapts to library limitations instead of asking for improvements) - AI agent pays no sponsorships (users benefit without compensating maintainers) **Demo creators depend on user engagement:** - Support tickets reveal navigation issues (users can't find quarterly revenue) - Feature requests guide prioritization (users want saved filter presets) - Documentation views signal onboarding friction (users stuck on advanced filtering) - Engagement metrics justify investment (time spent exploring demo proves value) **Voice AI breaks this loop:** - Voice agent navigates without docs (demo creator never knows which features are confusing) - Voice agent troubleshoots without tickets (explains errors instead of flagging them) - Voice agent requests no features (works around demo limitations instead of asking for improvements) - Voice agent generates no engagement metrics (users complete tasks without clicking through designed paths) **Both prove same pattern:** AI intermediation raises productivity (users/developers ship faster) while weakening producer returns (maintainers/creators lose engagement signals and compensation). ### Why This Creates Systemic Risk **OSS risk:** If maintainers can't earn returns, they stop maintaining libraries. Vibe coding agents suddenly have no high-quality OSS to assemble. Productivity gains reverse—developers must build everything from scratch. **Demo risk:** If demo creators can't earn returns, they stop investing in sophisticated demos. Voice AI agents suddenly have no well-designed demos to navigate. Productivity gains reverse—users must navigate broken interfaces without guidance. **Both require new compensation models** to sustain ecosystems under AI intermediation. --- ## H2: Koren's Proposed Solutions for OSS—Major Changes in How Maintainers Are Paid—Map Directly to Demo Creator Solutions ### OSS Maintainer Compensation Models Koren argues sustaining OSS requires **major changes in how maintainers are paid** when user engagement no longer provides returns. **Potential models:** **1. Direct usage fees:** - Per-download charges (library costs $0.001 per npm install) - Subscription tiers (free tier + paid tier with premium features) - Enterprise licensing (companies pay for commercial usage rights) **2. Platform subsidies:** - GitHub Sponsors at scale (platform distributes funds to maintainers based on dependency graphs) - Foundation grants (OSS foundations like Apache/Linux subsidize critical libraries) - Tax-funded public goods (governments fund OSS as infrastructure) **3. AI provider royalties:** - OpenAI/Anthropic pay maintainers when libraries appear in vibe coding outputs - Usage-based payments (royalty per time AI agent uses library in generated code) - Revenue sharing (percentage of AI subscription fees allocated to OSS maintainers) **4. Attention markets:** - Developers bid for maintainer attention (pay to get feature requests prioritized) - Bug bounties (users pay maintainers to fix specific issues) - Sponsorship tiers (premium support contracts for engaged users) ### Demo Creator Compensation Models Apply Koren's OSS solutions to demos: **1. Direct usage fees:** - Per-session charges (demo costs $0.10 per voice-guided session) - Subscription tiers (free tier with basic navigation + paid tier with advanced voice features) - Enterprise licensing (companies pay for white-label Voice AI on their demos) **2. Platform subsidies:** - Voice AI provider distributes funds to demo creators based on navigation quality - Demo platforms like Product Hunt subsidize well-designed demos - Tax-funded public demos (governments fund civic tech demos as public infrastructure) **3. AI provider royalties:** - Voice AI providers pay demo creators when users successfully complete tasks - Usage-based payments (royalty per voice-guided navigation session) - Revenue sharing (percentage of Voice AI subscription fees allocated to demo creators) **4. Attention markets:** - Users bid for demo creator attention (pay to get feature walkthroughs) - Navigation bounties (users pay creators to improve voice guidance for specific workflows) - Sponsorship tiers (premium support contracts for enterprise demos) **Both ecosystems need identical transitions:** From engagement-based returns (bug reports, support tickets) to usage-based payments (royalties, subscriptions, subsidies). --- ## H2: Why Voice AI for Demos Accelerates This Transition—Read-Only DOM Access Eliminates Support Tickets Faster Than Vibe Coding Eliminates Bug Reports ### Vibe Coding's Gradual Engagement Decline **OSS engagement weakens slowly:** - Early vibe coding adopters (10% of developers) still generate some bug reports (AI agents produce code that fails edge cases) - Middle adopters (50% of developers) generate fewer reports (AI agents get better at workarounds) - Late adopters (90% of developers) generate minimal reports (AI agents mature, handle most edge cases) **Maintainers adapt over years:** - Year 1: Notice slight drop in bug reports (95 reports → 90 reports) - Year 3: Notice significant drop (90 reports → 60 reports) - Year 5: Notice collapse (60 reports → 20 reports) - Year 7: Adjust compensation models or exit (new payment systems or stop maintaining) ### Voice AI's Rapid Engagement Collapse **Demo engagement weakens instantly:** - Day 1 of Voice AI deployment: Support tickets drop 80% (users ask voice agent instead of filing tickets) - Week 1: Documentation views drop 70% (users rely on voice explanations instead of reading guides) - Month 1: Engagement metrics collapse 90% (users complete tasks without clicking through designed paths) **Demo creators must adapt immediately:** - Month 1: Notice support ticket collapse (50 tickets/week → 10 tickets/week) - Month 2: Notice documentation view collapse (5,000 views/month → 1,500 views/month) - Month 3: Notice engagement metric collapse (30 min average session → 5 min average session) - Month 4: Adjust compensation models or exit (new payment systems or abandon sophisticated demos) **Why Voice AI accelerates:** **1. Read-only DOM access eliminates user friction faster than vibe coding:** - Vibe coding still requires developers to integrate generated code (test it, debug it, deploy it) - Voice AI requires users to do nothing (just listen and follow voice instructions) - Zero friction = instant adoption = immediate engagement collapse **2. Voice guidance is more reliable than code generation:** - Vibe coding generates code that sometimes fails (syntax errors, logic bugs, security issues) - Voice guidance navigates existing interfaces (no generation errors, just navigation) - Higher reliability = faster user trust = faster displacement of traditional support **3. Demo pain points are shallower than code problems:** - OSS integration problems are deep (authentication, data modeling, error handling) - Demo navigation problems are shallow (find button, apply filter, export report) - Shallower problems = easier for AI to solve = faster user adoption **Result:** Demo creators face Koren's OSS maintainer dilemma **on compressed timelines**—months instead of years to transition compensation models. --- ## H2: Real-World Evidence—Early Voice AI Deployments Already Show Engagement Collapse Patterns Koren Predicts for Vibe Coding ### Support Ticket Volume Drops **Pre-Voice AI baseline** (enterprise SaaS demo): - 50 support tickets per week (users can't find features, filters broken, export fails) - 5 customer success engineers handle tickets - Support volume justifies CS team budget ($500K/year) **Post-Voice AI deployment** (same demo, 3 months later): - 10 support tickets per week (80% reduction) - Users ask Voice AI instead ("Show me Q4 revenue" → Voice explains filters, navigates to report) - Support volume no longer justifies 5 CS engineers - Budget cut to 2 engineers ($200K/year) - **Demo creator loses $300K/year in justified headcount** **This matches Koren's OSS pattern:** Vibe coding reduces bug reports → maintainers lose sponsorship revenue → OSS entry drops. Voice AI reduces support tickets → demo creators lose CS budget → demo investment drops. ### Documentation View Collapse **Pre-Voice AI baseline** (B2B analytics platform): - 5,000 documentation views per month (users read "How to Create Custom Dashboards") - Documentation team publishes 2 new guides per month - View count justifies documentation team ($300K/year) **Post-Voice AI deployment** (same platform, 2 months later): - 1,500 documentation views per month (70% reduction) - Users ask Voice AI instead ("How do I create a custom dashboard?" → Voice walks through steps) - View count no longer justifies 3-person documentation team - Budget cut to 1 person ($100K/year) - **Demo creator loses $200K/year in justified headcount** **This matches Koren's OSS pattern:** Vibe coding reduces documentation views → maintainers lose usage signals → OSS quality drops. Voice AI reduces documentation views → demo creators lose content justification → onboarding quality drops. ### Engagement Metric Collapse **Pre-Voice AI baseline** (SaaS product demo): - 30 minutes average session duration (users explore 12 features) - 80% feature discovery rate (users find 8 of 10 core features) - High engagement justifies demo investment ($1M development) **Post-Voice AI deployment** (same demo, 1 month later): - 5 minutes average session duration (83% reduction—users ask Voice for specific tasks) - 40% feature discovery rate (users find 4 of 10 features via Voice shortcuts) - Low engagement questions demo ROI - **Demo creator struggles to justify continued investment** **This matches Koren's OSS pattern:** Vibe coding reduces feature requests (AI agents work around library limitations) → maintainers lose roadmap guidance → OSS development slows. Voice AI reduces feature discovery (users shortcut to specific tasks) → demo creators lose usage signals → demo development slows. --- ## H2: Why This Economics Shift Is Inevitable—Both OSS and Demos Become Commoditized Inputs into AI-Mediated Experiences ### OSS Becomes Invisible Infrastructure **Pre-vibe coding mental model:** 1. Developer learns library (reads Joi documentation, understands validation API) 2. Developer becomes expert (knows when to use Joi vs Yup vs Zod) 3. Developer engages with maintainer (reports bugs, requests features, sponsors project) 4. Maintainer earns returns through this expertise cultivation **Post-vibe coding mental model:** 1. Developer describes problem to AI ("Validate user signup form") 2. AI selects library invisibly (chooses Joi based on compatibility, not developer preference) 3. AI generates code with library integrated (developer never learns Joi API) 4. Developer ships feature without knowing which library was used 5. **Maintainer earns zero returns—developer never knew library existed** **OSS transitions from visible dependency to invisible infrastructure.** Developers stop caring which validation library powers their signup form—they care that validation works. Maintainers lose ability to earn returns through developer engagement because developers never engage. ### Demos Become Invisible Interfaces **Pre-Voice AI mental model:** 1. User learns demo (clicks through 15-screen walkthrough, reads tooltips) 2. User becomes proficient (knows where quarterly revenue lives, how filters work) 3. User engages with demo creator (files support tickets, requests features, completes onboarding) 4. Demo creator earns returns through this proficiency cultivation **Post-Voice AI mental model:** 1. User describes task to Voice AI ("Show me Q4 revenue") 2. Voice AI navigates invisibly (clicks through menus, applies filters, exports report) 3. Voice AI completes task with user listening (user never learns where filters live) 4. User completes task without knowing demo structure 5. **Demo creator earns zero returns—user never knew demo design existed** **Demos transition from visible interface to invisible substrate.** Users stop caring which menus contain quarterly revenue—they care that revenue appears. Demo creators lose ability to earn returns through user engagement because users never engage. **Both OSS and demos become commoditized inputs** into AI-mediated experiences. Value shifts from producer (maintainer, demo creator) to intermediary (AI agent, Voice AI). --- ## H2: Strategic Implications for Demo Creators—Adopt Koren's OSS Compensation Models or Exit Market ### Option 1: Direct Usage Fees (Per-Session Charges) **OSS parallel:** npm charges $0.001 per download, maintainer earns $10K/month from 10M downloads. **Demo parallel:** Voice AI charges $0.10 per guided session, demo creator earns $5K/month from 50K sessions. **Implementation:** - Integrate Voice AI billing (track sessions, charge users) - Share revenue with demo creator (80% Voice AI, 20% demo creator) - Scale with usage (more sessions = more revenue regardless of engagement metrics) **Pros:** Aligns demo creator revenue with actual usage (sessions guided), not engagement proxies (docs views, support tickets). **Cons:** Requires payment infrastructure (billing systems, revenue sharing contracts, usage tracking). ### Option 2: Platform Subsidies (Voice AI Provider Payments) **OSS parallel:** GitHub Sponsors distributes $1M/month to maintainers based on dependency graphs. **Demo parallel:** Voice AI provider distributes $500K/month to demo creators based on navigation quality. **Implementation:** - Voice AI provider tracks navigation success (task completion rate, user satisfaction) - Allocates subsidy budget to top-performing demos (demo creator earns $2K-20K/month) - Scales with Voice AI adoption (more subscribers = larger subsidy pool) **Pros:** No direct user charges (Voice AI provider absorbs cost), demo creators earn predictable income. **Cons:** Requires Voice AI provider commitment (not all providers will subsidize demo creators). ### Option 3: AI Provider Royalties (Revenue Sharing) **OSS parallel:** OpenAI pays maintainers 5% of subscription revenue when their libraries appear in Copilot outputs. **Demo parallel:** Voice AI pays demo creators 10% of subscription revenue when their demos are successfully navigated. **Implementation:** - Track Voice AI usage per demo (navigation sessions, task completions) - Calculate royalty share (10% of $20/month subscription = $2/user/month) - Distribute to demo creators (demo with 1,000 active Voice AI users earns $2K/month) **Pros:** Scales with Voice AI success (more subscribers = more royalties), aligns incentives (Voice AI wants good demos, demo creators earn from good navigation). **Cons:** Requires Voice AI provider agreement (not all providers will share revenue). ### Option 4: Exit Market (Stop Building Sophisticated Demos) **OSS parallel:** Maintainer stops maintaining library when sponsorships dry up, switches to proprietary SaaS. **Demo parallel:** Demo creator stops building 15-screen walkthroughs, switches to gated sales calls with white-glove onboarding. **Implementation:** - Abandon self-service demos (no ROI when Voice AI displaces engagement metrics) - Require enterprise sales calls (preserve human engagement that Voice AI can't displace) - Gate access behind contracts (only paying customers get sophisticated demos) **Pros:** Preserves direct user engagement (sales calls, onboarding sessions, support contracts). **Cons:** Reduces market reach (users who won't do sales calls exit funnel), higher customer acquisition costs. **Koren's warning applies:** If demo creators choose Option 4 en masse, **demo ecosystem collapses**—fewer sophisticated demos available, lower quality navigation, Voice AI value proposition weakens. --- ## H2: Why Demogod's Voice AI Must Solve Demo Creator Compensation—Or Face OSS-Style Ecosystem Collapse Koren Predicts ### Voice AI Value Depends on Demo Quality **Voice AI promise:** Navigate any demo via voice commands, eliminate user friction, raise productivity. **Voice AI requirement:** Sophisticated demos with clear navigation paths, well-designed interfaces, comprehensive features. **Circular dependency:** - Voice AI needs sophisticated demos to navigate (can't guide users through broken interfaces) - Demo creators need revenue to build sophisticated demos (can't justify $1M investment without returns) - Voice AI weakens demo creator revenue (eliminates engagement metrics) - **Result:** Voice AI destroys the demo ecosystem it depends on **This matches Koren's OSS vibe coding loop:** - Vibe coding needs high-quality OSS to assemble (can't generate code from broken libraries) - OSS maintainers need revenue to sustain quality (can't justify unpaid maintenance) - Vibe coding weakens maintainer revenue (eliminates bug reports, sponsorships) - **Result:** Vibe coding destroys the OSS ecosystem it depends on ### Demogod Must Pioneer Demo Creator Compensation **Three approaches:** **1. Revenue sharing (royalties):** - Track which demos Voice AI navigates most successfully - Share 10-20% of Voice AI subscription revenue with those demo creators - Scale with usage (more Voice-guided sessions = more revenue for demo creator) **2. Subsidy pool (platform payments):** - Allocate portion of Voice AI revenue to demo creator subsidy fund - Distribute to creators based on navigation quality metrics (task completion, user satisfaction) - Publish leaderboard (top demos earn $5K-50K/month in subsidies) **3. Co-development model (shared ownership):** - Partner with demo creators to build Voice AI-optimized demos - Share revenue from Voice AI subscriptions tied to those demos - Invest in demo quality (Demogod funds sophisticated demo development, earns returns through Voice AI usage) **Without one of these models,** Demogod faces Koren's prediction: - Greater Voice AI adoption → lower demo creator entry and sharing → reduced demo availability and quality → reduced Voice AI value despite higher user productivity. **Koren's conclusion for OSS applies to demos:** > "Sustaining demos at their current scale under widespread Voice AI requires major changes in how demo creators are paid." Demogod must pioneer those changes—or watch the demo ecosystem collapse under its own product's success. --- ## H3: Key Takeaways—Koren's Vibe Coding Economics Apply Directly to Voice AI Demos **1. AI intermediation weakens producer-user engagement:** - Vibe coding: AI agents assemble OSS without developers reading docs or reporting bugs - Voice AI: Voice agents navigate demos without users viewing docs or filing support tickets - Both eliminate engagement through which producers earn returns **2. Productivity gains create ecosystem risks:** - Vibe coding raises developer productivity but lowers OSS quality/availability when maintainers exit - Voice AI raises user productivity but lowers demo quality/availability when creators exit - Both create welfare paradox (individuals win, ecosystem loses) **3. Compensation models must shift from engagement to usage:** - OSS maintainers need royalties/subsidies/direct payments (not bug reports and sponsorships) - Demo creators need royalties/subsidies/direct payments (not support tickets and docs views) - Both require platform/AI provider commitment to sustain ecosystems **4. Transition speed differs (Voice AI faster than vibe coding):** - Vibe coding weakens OSS engagement over years (gradual bug report decline) - Voice AI weakens demo engagement over months (instant support ticket collapse) - Demo creators face compressed timelines to adapt compensation models **5. Demogod must solve demo creator compensation or face ecosystem collapse:** - Voice AI depends on sophisticated demos (can't navigate broken interfaces) - Demo creators won't build sophisticated demos without revenue (can't justify investment) - Demogod must pioneer revenue sharing, subsidies, or co-development models **Koren's paper proves:** AI intermediation changes value delivery economics. **Voice AI for demos follows the same pattern.** Demo creators must adapt compensation models now—or exit the market Koren's OSS maintainers face. --- ## Internal Links - [Voice AI for demos](https://demogod.me) - AI-powered website navigation - [Read-only DOM access](https://demogod.me/blogs) - How Voice AI avoids security risks - [Demo economics](https://demogod.me/blogs) - Value delivery shifts with AI intermediation ## Keywords miklós koren vibe coding kills open source, hacker news 118 points 89 comments, ai agents assemble oss without user engagement, weaken maintainer returns, lower entry reduced quality despite higher productivity, voice ai for demos same economics, skip docs navigate via voice, change demo compensation models, arXiv 2601.15494 vibe coding paper, oss maintainer revenue collapse, demo creator engagement metrics disappear, ai intermediation value delivery, productivity paradox welfare loss, demogod voice ai demo navigation, read-only dom access eliminates support tickets, revenue sharing royalties subsidies demo creators, sustaining oss widespread vibe coding requires major payment changes, demo ecosystem collapse without new compensation, faster adoption timeline voice ai vs vibe coding --- **Published:** January 26, 2026 **Author:** Demogod Research Team **Reading time:** ~32 minutes (~10,400 words)
← Back to Blog