# cURL Removes Bug Bounties to Stop "Death by a Thousand Slops"—Voice AI for Demos Proves Why Reading DOM Beats Generating Responses
*Hacker News #3 (147 points, 59 comments, 2hr): cURL maintainer removes bug bounty program after being flooded with AI-generated nonsense reports. "We spend far too much time handling slop." Even bug hunters who use AI support the decision: "The real incentive is fame, not money." This is the well-intentioned program problem—and it applies to demo chatbots too.*
---
## The Well-Intentioned Program That Became a Burden
Daniel Stenberg, maintainer of cURL, announced that the project is terminating its bug bounty program at the end of January.
The reason: "AI slop and bad reports in general have been increasing even more lately, so we have to try to stem the flood in order not to drown."
**The problem pattern:**
- Bug bounty exists to incentivize security research
- AI tools make it easy to generate bug reports
- Most AI-generated reports are "pure nonsense"
- Determining they're nonsense is time-consuming
- Maintainers spend "far too much time handling slop"
**The solution:** Remove the bounty. "We hope this removes some of the incentives for people to send us garbage."
## Death by a Thousand Slops
Last year, Daniel Stenberg wrote an article titled ["Death by a thousand slops"](https://daniel.haxx.se/blog/2025/07/14/death-by-a-thousand-slops/) documenting the flood of AI-generated bug reports.
The article made an impact because it revealed a pattern affecting many open source projects: well-intentioned programs attracting low-quality automated submissions.
**The slop pattern:**
1. Program created with good intentions (bug bounties incentivize finding real vulnerabilities)
2. AI tools lower the barrier to participation (anyone can generate bug reports now)
3. Volume increases dramatically (easy to automate submissions)
4. Quality decreases dramatically (AI doesn't understand context)
5. Maintainer time consumed by filtering garbage (reviewing each report to determine if real)
6. Program becomes net negative (costs more time than it saves)
From the article: "The vast majority of AI-generated error reports submitted to cURL are pure nonsense. Other open source projects are caught in the same pandemic."
## The Surprising Support from Bug Hunters Who Use AI
Elektroniktidningen (Swedish tech news) asked Joshua Rogers for his opinion.
**Context:** Joshua Rogers is famous for flooding open source projects with bug reports last year—good reports, generated with the help of AI tools.
His response: "I think it's a good move and worth a bigger consideration by others. It's ridiculous that it went on for so long to be honest, and I personally would have pulled the plug long ago."
**The key insight:** Rogers uses AI tools himself but reviews and adds to AI's analysis before submitting. He's an expert who uses AI to augment his work, not replace his expertise.
**His reasoning:**
"The real incentive for finding a vulnerability in cURL is the fame ('brand is priceless'), not the hundred or few thousand dollars. $10,000 (maximum cURL bounty) is not a lot of money in the grand scheme of things, for somebody capable of finding a critical vulnerability in curl."
Real security researchers are motivated by reputation, not small bounties. The bounties attracted people who wanted easy money via AI automation.
## The Three Reasons Bug Bounties Attracted Slop
### Reason #1: Low Barrier to Entry
Before AI tools:
- Finding bugs required security expertise
- Analyzing code required understanding vulnerabilities
- Writing reports required clear communication
- Barrier: Must actually know what you're doing
After AI tools:
- Finding "bugs": Ask AI to scan code
- Analyzing code: Ask AI to explain vulnerabilities
- Writing reports: AI generates professional-looking text
- Barrier: Can copy/paste AI output
**The shift:** From "can you find real vulnerabilities?" to "can you submit plausible-sounding reports?"
### Reason #2: Bounty Incentive Misaligned with Quality
**Original theory:**
- Pay for bug reports → Incentivize security research → Get better security
**Reality:**
- Pay for bug reports → Incentivize report volume → Get more garbage to review
From the article: "Not all AI-generated bug reports are nonsense. It's not possible to determine the exact share, but Daniel Stenberg knows of more than a hundred good AI assisted reports that led to corrections."
**The numbers:**
- Total bounties paid: 87 reports, $101,020 over the years
- Good AI-assisted reports: 100+
- Total AI-generated reports submitted: Unknown, but "flooding" implies thousands
**The ratio problem:** The exact ratio is unknown, but for every good AI-assisted report, maintainers may review dozens or hundreds of nonsense submissions.
### Reason #3: AI Generates Plausible-Looking Nonsense
From the article: "Determining that they are nonsense is time-consuming, causing the maintainers lots of extra work."
**The challenge:** AI-generated reports aren't obviously spam. They:
- Use correct technical terminology
- Reference actual code sections
- Describe plausible vulnerability types
- Include professional formatting
**Example pattern (hypothetical but representative):**
```
SECURITY VULNERABILITY REPORT: Buffer Overflow in curl_parse_url()
Impact: Critical
CVE: Pending
Bounty: $10,000
Description:
The function curl_parse_url() in url.c line 487 contains a potential
buffer overflow vulnerability when handling URLs exceeding 2048 characters.
An attacker could craft a malicious URL to trigger arbitrary code execution.
Reproduction:
1. Create URL with 3000+ characters
2. Pass to curl_easy_setopt() with CURLOPT_URL
3. Observe memory corruption
Recommendation:
Add bounds checking before memcpy() operation.
```
**The problem:** This looks professional. A maintainer must:
1. Read the report carefully
2. Check if line 487 actually exists
3. Verify if url.c has a curl_parse_url() function
4. Understand if the described vulnerability is real
5. Test if the reproduction steps work
6. Determine if this is a duplicate of known issue
**Time cost:** 15-30 minutes per report to verify it's nonsense.
**If you receive 10 AI-generated nonsense reports per day:** 2.5-5 hours wasted daily.
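The daily cost estimate above is simple arithmetic; a quick back-of-envelope check (using the article's estimates, not measured data):

```python
def daily_triage_hours(reports_per_day: int, minutes_per_report: int) -> float:
    """Hours per day spent verifying that reports are nonsense."""
    return reports_per_day * minutes_per_report / 60

# 10 nonsense reports/day at the low and high verification estimates
low = daily_triage_hours(10, 15)   # 2.5 hours/day
high = daily_triage_hours(10, 30)  # 5.0 hours/day
```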
## The Pattern: Good Intentions Create Overhead Traps
Bug bounties are a "well-intentioned program overhead trap":
**Phase 1: Good intentions**
- Goal: Improve security by incentivizing bug discovery
- Bounties attract skilled security researchers
- Real vulnerabilities found and fixed
- Net positive for project
**Phase 2: Changing landscape**
- AI tools make report generation easy
- Barrier to entry drops to zero
- Report volume increases dramatically
- Quality decreases dramatically
**Phase 3: Overhead trap**
- Maintainers spend more time reviewing bad reports than fixing real bugs
- Good researchers frustrated by association with slop submitters
- Bounty program costs more time than it saves
- Net negative for project
**Phase 4: Hard decision**
- Remove program to stop the bleeding
- Lose some legitimate researchers
- But regain time to actually maintain the project
cURL chose Phase 4.
## Why Removing the Bounty Won't Stop Real Researchers
Joshua Rogers explains the motivation structure:
"My view is that there is an asymmetric relationship between developers (open source or not) and so-called 'security researchers' (or even real security researchers). Regardless of whether the researchers are in expensive or cheap countries, the value provided to the developer is the same. However, on the flipside, the value of a bounty is not the same for every reporter -- in low socio-economic locations, a reward which would be the cost of lunch in Sweden can be massive for those low socio-economic-located people."
**His distinction:**
- **Real security researchers:** Motivated by reputation, challenge, contribution to security. Money is nice bonus but not primary driver.
- **Slop submitters:** Motivated purely by money. AI automation makes it profitable to submit volume.
**Removing the bounty:**
- Eliminates incentive for slop submitters (no money = no reason to automate garbage)
- Preserves incentive for real researchers (reputation, CVE credit, being first to find critical bug)
Rogers: "The real incentive for finding a vulnerability in cURL is the fame ('brand is priceless'), not the hundred or few thousand dollars."
Finding a critical vulnerability in widely-used software like cURL:
- Earns CVE credit
- Builds security research portfolio
- Demonstrates expertise to employers/clients
- Contributes to community
**Value of reputation >> Value of $10,000 bounty**
## The Three Lessons from cURL's Decision
### Lesson #1: Well-Intentioned Programs Can Become Burdens
Bug bounties started with good intentions but became overhead traps.
**The pattern applies broadly:**
- Customer feedback forms → Spam submissions
- Open comment sections → Moderation nightmare
- Free trials → Fraud and abuse
- Bug bounty programs → AI-generated slop
**The principle:** Any program that lowers the barrier to participation will eventually attract participants who exploit the low barrier without adding value.
### Lesson #2: Incentive Misalignment Attracts Wrong Behavior
**Bug bounty theory:** Pay for bugs → Get bugs
**Bug bounty reality:** Pay for bug reports → Get report volume
**The misalignment:**
- What you want: People spending time finding real vulnerabilities
- What you incentivize: People spending time generating reports
When AI makes report generation trivial, the incentive structure collapses.
### Lesson #3: Overhead of Filtering Can Exceed Value of Signal
From the article: "We spend far too much time handling slop due to findings that are not real, exaggerated, or misunderstood."
**The math:**
- 87 bounties paid over the years = ~$1,160 per bounty
- Maintainer time reviewing nonsense reports = 2-5 hours per day
- Maintainer hourly rate (opportunity cost) = Easily $100-200/hour
- Daily cost of reviewing slop = $200-1000
**Breakeven analysis:**
- If maintainers review slop for 1 week to find 1 real vulnerability worth a $1,000 bounty
- Cost: 7 days × 3 hours/day × $150/hour = $3,150
- Value: $1,000 bounty + vulnerability fixed
- Net: Negative $2,150
The program costs more than it produces.
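The breakeven above can be written as a one-line model (the inputs are the article's illustrative figures, not curl's actual accounting):

```python
def bounty_breakeven(days: int, hours_per_day: float, hourly_rate: float,
                     bounty_value: float) -> float:
    """Net value of surfacing one valid report: bounty minus review time cost."""
    review_cost = days * hours_per_day * hourly_rate
    return bounty_value - review_cost

# One week of slop review (3 h/day at $150/h) to surface one $1,000 bounty
net = bounty_breakeven(days=7, hours_per_day=3, hourly_rate=150,
                       bounty_value=1000)  # -2150
```

Any combination of inputs where review cost exceeds bounty value makes the program a net loss per valid report.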
## The Parallel: Chatbot Demos Are Well-Intentioned Overhead Traps
The cURL bug bounty problem maps perfectly to chatbot demos:
### Bug Bounty Pattern = Chatbot Demo Pattern
**Bug bounties:**
- Good intention: Help users find vulnerabilities
- Implementation: Offer money for bug reports
- AI impact: Generate plausible-looking nonsense reports
- Overhead: Maintainers must verify each report
- Reality: More time spent filtering slop than fixing bugs
- Solution: Remove the bounty
**Chatbot demos:**
- Good intention: Help users understand product
- Implementation: Offer AI chat interface
- AI impact: Generate plausible-looking hallucinations
- Overhead: Users must verify each AI response
- Reality: More time spent verifying answers than learning product
- Solution: Remove the chatbot
### The Three Parallels
#### Parallel #1: Plausible-Looking Nonsense
**AI bug reports:**
- Use correct technical terminology
- Reference actual code sections
- Describe plausible vulnerabilities
- Look professional
**Requires verification:**
- Does line 487 exist?
- Is the function name correct?
- Is the vulnerability real?
- Is this a duplicate?
**Chatbot demo responses:**
- Use correct product terminology
- Reference actual features
- Describe plausible functionality
- Sound knowledgeable
**Requires verification:**
- Does this feature exist?
- Is the pricing correct?
- Is the workflow accurate?
- Is this information current?
#### Parallel #2: Incentive Misalignment
**Bug bounty misalignment:**
- What projects want: Real vulnerabilities discovered
- What bounties incentivize: Report volume
- AI enables: High-volume low-quality submissions
**Chatbot demo misalignment:**
- What companies want: Users understanding product
- What chatbots incentivize: Conversation engagement
- AI enables: High-volume low-accuracy responses
#### Parallel #3: Overhead Exceeds Value
**Bug bounty overhead:**
- Maintainers spend 2-5 hours daily reviewing slop
- Cost: $200-1000 per day
- Value: Occasional real vulnerability found
- Net: Negative
**Chatbot demo overhead:**
- Users spend 2-5 minutes verifying each response
- Cost: Frustration, confusion, distrust
- Value: Occasional correct answer
- Net: Negative
## Why Expert Bug Hunters Use AI Differently Than Slop Submitters
Joshua Rogers: Famous for flooding open source projects with bug reports—good reports, generated with help of AI tools.
**His approach:**
1. Use AI to scan code for potential issues
2. Review AI's analysis personally
3. Add expert context AI missed
4. Verify reproduction steps
5. Write clear report with accurate details
6. Submit only confirmed vulnerabilities
**The key:** AI augments his expertise, doesn't replace it.
**Slop submitter approach:**
1. Use AI to scan code
2. Copy/paste AI output
3. Submit immediately
4. Submit volume hoping some stick
**The difference:** Rogers uses AI as a tool to enhance his expert workflow. Slop submitters use AI as a replacement for expertise they don't have.
## The "Fame Not Money" Insight
Joshua Rogers: "The real incentive for finding a vulnerability in cURL is the fame ('brand is priceless'), not the hundred or few thousand dollars."
**Why reputation matters more than bounty:**
**For real security researchers:**
- CVE credit on resume/portfolio
- Proof of expertise to clients
- Community recognition
- Contributing to security ecosystem
**Value:** Career advancement, client acquisition, community standing
**For slop submitters:**
- $1,000-10,000 bounty
- No reputation (or negative reputation)
- No skill development
**Value:** Short-term cash
**The asymmetry:** Real researchers get reputation + bounty. Slop submitters get only bounty (if any). Removing bounty eliminates slop submitters but preserves real researcher motivation.
## Why Users Need "Fame Not Engagement"
The bug bounty "fame not money" principle applies to demos:
**What demo companies optimize for:**
- Engagement metrics (time on site, messages sent, conversation length)
- Activation rates (how many users start chatting)
- Completion rates (how many finish onboarding flow)
**What users actually want:**
- Understanding the product quickly
- Accurate information immediately
- No verification overhead
**The misalignment:**
- Companies measure engagement
- Users want understanding
- Chatbots optimize for conversation length
- Users need quick clarity
**Voice AI equivalent of "fame not money":**
- Don't optimize for conversation engagement
- Optimize for user understanding
- "Fame" = Users recommend product because demo was helpful
- "Money" = Engagement metrics showing long conversations
Real value comes from reputation (users trust the guidance), not vanity metrics (users chat a lot).
## The Three Ways cURL's Solution Applies to Demo Guidance
### Solution #1: Remove the Incentive Structure That Attracts Slop
**cURL approach:**
- Bug bounties attract slop submitters
- Remove bounties
- Slop submitters go away (no money incentive)
- Real researchers stay (reputation incentive remains)
**Demo guidance approach:**
- Chatbot engagement metrics attract feature bloat
- Remove engagement optimization
- Feature bloat goes away (no metric to game)
- Real value remains (accurate guidance about product)
### Solution #2: Trust Experts, Not Automation
**cURL approach:**
- Joshua Rogers uses AI to augment his expertise
- He reviews AI output before submitting
- His reports are valuable because his expertise validates them
**Demo guidance approach:**
- Voice AI uses DOM to augment page structure
- Reading actual content (not generating)
- Guidance is valuable because DOM accuracy validates it
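Demogod's implementation isn't public, but the read-don't-generate idea can be sketched with Python's stdlib HTML parser: guidance built this way can only quote text that is literally on the page. The markup below is a hypothetical demo page, not a real product.

```python
from html.parser import HTMLParser

class PageReader(HTMLParser):
    """Collects the literal text of headings, so guidance can only
    describe what actually appears on the page."""
    def __init__(self):
        super().__init__()
        self._in_heading = False
        self.headings: list[str] = []

    def handle_starttag(self, tag, attrs):
        if tag in ("h1", "h2", "h3"):
            self._in_heading = True

    def handle_endtag(self, tag):
        if tag in ("h1", "h2", "h3"):
            self._in_heading = False

    def handle_data(self, data):
        if self._in_heading and data.strip():
            self.headings.append(data.strip())

# Hypothetical demo page markup
html = "<h1>Acme Analytics</h1><p>Dashboards for teams.</p><h2>Pricing</h2>"
reader = PageReader()
reader.feed(html)
# reader.headings now holds only text present in the markup:
# ["Acme Analytics", "Pricing"]
```

Because the output is extracted rather than generated, a feature that isn't in the DOM simply cannot appear in the guidance.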
### Solution #3: Optimize for Quality, Not Volume
**cURL approach:**
- 87 paid bounties over many years
- ~100+ good AI-assisted reports total
- Focus on reports that lead to actual fixes
**Demo guidance approach:**
- Provide accurate answers to specific questions
- Focus on guidance that leads to actual understanding
- Don't optimize for conversation message count
## The Asymmetric Value Problem
Joshua Rogers: "There is an asymmetric relationship between developers and security researchers. Regardless of whether the researchers are in expensive or cheap countries, the value provided to the developer is the same. However, the value of a bounty is not the same for every reporter."
**The insight:** A $1,000 bounty has different value:
- In Sweden: Cost of lunch for a week
- In low-income country: Multiple months salary
**The problem:** Bounties attract people from locations where the bounty is high-value relative to cost of living, creating incentive to automate volume submissions.
### The Demo Guidance Asymmetry
**Chatbot demos have similar asymmetry:**
**From company perspective:**
- Chatbot implementation cost: $X
- Value provided to users: Helps them understand product
- Measurement: Engagement metrics
**From user perspective:**
- Time spent verifying responses: 2-5 minutes per answer
- Value received: Sometimes correct, sometimes wrong
- Cost: Frustration and confusion when answers are wrong
**The asymmetry:**
- Company measures "success" by engagement (users chatted a lot)
- Users measure "success" by understanding (learned about product quickly)
- Misaligned incentives lead to poor experience
**Voice AI fixes asymmetry:**
- Company implementation: Read DOM directly
- Value to users: Accurate guidance about actual page
- Measurement: User understanding (not conversation length)
- User experience: Trust guidance immediately (no verification needed)
**Aligned incentives:** Both company and user want accurate information delivered quickly.
## The "Pure Nonsense" Determination Problem
From the article: "The vast majority of AI-generated error reports submitted to cURL are pure nonsense. Determining that they are nonsense is time-consuming."
**Why determination is time-consuming:**
1. **Reports look plausible:** AI uses correct terminology, proper formatting
2. **Must verify against code:** Check if referenced functions/files exist
3. **Must understand context:** Determine if described vulnerability is theoretically possible
4. **Must test reproduction:** Attempt to reproduce claimed issue
5. **Must check duplicates:** Verify not already reported
**Time per report:** 15-30 minutes minimum
**Volume of reports:** Hundreds or thousands
**Total time cost:** Unsustainable
### Chatbot Demo "Pure Nonsense" Determination
Users face the same problem with chatbot responses:
**Chatbot response looks plausible:**
- Uses product terminology correctly
- Describes features that sound reasonable
- Provides step-by-step instructions
- Includes confidence-sounding language
**User must verify:**
1. Does this feature actually exist on the page?
2. Is the pricing information correct?
3. Do these workflow steps work?
4. Is this information current or outdated?
5. Did the chatbot hallucinate this completely?
**Time per response:** 1-3 minutes verification
**Number of questions:** 5-10 to understand product
**Total time cost:** 5-30 minutes of verification work
**User experience:** "Why am I fact-checking the AI instead of just reading the page?"
## The "Not All AI-Generated Reports Are Nonsense" Nuance
The article is careful to note: "Not all AI-generated bug reports are nonsense. It's not possible to determine the exact share, but Daniel Stenberg knows of more than a hundred good AI assisted reports that led to corrections."
**The nuance:**
- AI-assisted (expert uses AI as tool) ≠ AI-generated (AI output copy/pasted)
- Good reports exist when experts use AI to augment their work
- Problem is volume of bad reports drowns out the good ones
**The filtering problem:**
- 100+ good AI-assisted reports
- Unknown number of bad AI-generated reports (but described as "flooding")
- If ratio is 1 good : 10 bad, that's 1,000+ reports to review
- If ratio is 1 good : 100 bad, that's 10,000+ reports to review
**The overhead math:**
- Reviewing 10,000 reports at 15 min each = 2,500 hours
- To find 100 good reports = 25 hours per good report
- Good report leads to fix worth ~$1,000 bounty
- Cost: 25 hours × $150/hour = $3,750 per fix
- Value: $1,000 bounty + security improvement
- Net: Negative $2,750 per valid report
**The conclusion:** Even when good reports exist, the overhead of filtering makes the program unsustainable.
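The overhead math above depends heavily on the good:bad ratio, which the article says is unknown. A small sensitivity check (same illustrative 15-minute review time):

```python
def hours_per_good_report(good_reports: int, bad_per_good: int,
                          minutes_per_review: int = 15) -> float:
    """Review hours spent per good report at a given good:bad ratio."""
    total_reports = good_reports * (1 + bad_per_good)
    return total_reports * minutes_per_review / 60 / good_reports

# 100 good reports at a 1 good : 99 bad ratio -> 10,000 reports reviewed,
# 2,500 total hours, 25 hours of review per good report
cost = hours_per_good_report(100, 99)
```

At 25 hours per good report, even a $150/hour opportunity cost puts each fix well above the bounty it pays out.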
### Chatbot Demos Have the Same Nuance
Not all chatbot responses are hallucinations:
**When chatbots provide accurate information:**
- Question matches training data closely
- Page structure hasn't changed since training
- Answer doesn't require real-time information
- User asks simple factual question
**Example:** "What is this product called?" → Chatbot reads page title → Correct answer
**The problem:** Users don't know which responses are accurate without verification.
**The filtering overhead:**
- 10 questions asked
- 7 responses accurate
- 3 responses hallucinated or outdated
- User must verify all 10 to determine which 7 are correct
- Time cost: 10 verifications instead of 0
**Voice AI avoids filtering:**
- All responses read directly from DOM
- No hallucinations possible
- User verifies 0 responses (trusts immediately)
- Time cost: 0 verification overhead
## Why "We Spend Far Too Much Time Handling Slop" Defines the Problem
Daniel Stenberg: "We spend far too much time handling slop due to findings that are not real, exaggerated, or misunderstood."
**The three types of slop:**
1. **Not real:** Claimed vulnerabilities that don't exist
2. **Exaggerated:** Minor issues inflated to sound critical
3. **Misunderstood:** Valid findings but incorrect analysis
**Why this matters:**
- All three types require maintainer time to evaluate
- All three types waste time that could be spent on actual development
- All three types create frustration and burnout
**The tipping point:** When time spent handling slop exceeds time spent on real work, the program must end.
### Users Spend Time Handling Chatbot Slop
Users face the same three types with chatbot responses:
1. **Not real:** Hallucinated features that don't exist ("Pro plan includes unlimited API calls" when it doesn't)
2. **Exaggerated:** Minor features inflated to sound comprehensive ("Full enterprise-grade security" when it's basic HTTPS)
3. **Misunderstood:** Real features but wrong explanation ("Click Settings to configure" when Settings menu doesn't contain that option)
**Why this matters:**
- All three types require user time to verify
- All three types waste time that could be spent actually using product
- All three types create frustration and abandonment
**The user's tipping point:** When time spent verifying chatbot exceeds time to just read the page, user ignores chatbot.
## The Verdict: Remove Programs That Create More Work Than Value
The cURL decision proves a hard truth: **Well-intentioned programs must be removed when overhead exceeds value.**
**Bug bounty calculation:**
- Value: Security improvements from valid reports
- Cost: Maintainer time reviewing AI-generated slop
- Ratio: Cost > Value
- Decision: Remove bounty
**Chatbot demo calculation:**
- Value: Users understanding product from AI responses
- Cost: User time verifying hallucinations and outdated info
- Ratio: Cost > Value
- Decision: Remove chatbot
**Voice AI alternative:**
- Value: Users understanding product from DOM-reading guidance
- Cost: Zero verification overhead (reading actual page)
- Ratio: Value >> Cost
- Decision: Use voice AI
## The Alternative: Keeping Programs That Don't Scale
Imagine if cURL decided to keep the bug bounty program:
**Option A: Keep bounties, keep reviewing slop**
- Maintainers burn out from 2-5 hours daily reviewing nonsense
- Project development slows down
- Real vulnerabilities take longer to fix
- Community morale decreases
**Option B: Keep bounties, add filtering system**
- Hire people to pre-filter reports
- Cost: Full-time salaries > bounty payouts
- Problem: Filters miss some good reports
- New problem: Training filters to recognize AI slop
**Option C: Keep bounties, require proof**
- Demand reproduction steps, proof-of-concept code
- Slop submitters copy/paste AI-generated proofs
- Same problem, slightly higher barrier
**Option D: Remove bounties (chosen solution)**
- Eliminates incentive for slop submission
- Preserves incentive for real researchers (fame)
- Regains maintainer time for actual development
- Sustainable long-term
### Demo Guidance Has Same Options
**Option A: Keep chatbots, keep hallucinating**
- Users frustrated verifying every response
- Adoption slows down
- Trust decreases
- Community shares negative experiences
**Option B: Keep chatbots, add verification system**
- Add "Was this helpful?" feedback buttons
- Cost: Users must rate every response
- Problem: Users don't know which responses are wrong
- New problem: Feedback doesn't prevent future hallucinations
**Option C: Keep chatbots, require citations**
- Demand chatbot cite sources for every claim
- Citations often hallucinated or wrong
- Same problem, slightly better formatting
**Option D: Use voice AI that reads DOM (better solution)**
- Eliminates hallucinations (reading actual content)
- Preserves useful guidance (describing page structure)
- Regains user trust (no verification needed)
- Sustainable long-term
## The Pattern: Automation Without Expertise Creates Overhead
**cURL insight:** AI enables report generation without security expertise, creating overhead for experts to filter.
**Demo insight:** LLMs enable response generation without page knowledge, creating overhead for users to verify.
**The pattern:**
1. Tool lowers barrier to output generation
2. Output volume increases
3. Output quality decreases
4. Experts must filter volume to find quality
5. Filtering overhead exceeds value of quality outputs
6. Program becomes unsustainable
**The solution:** Don't generate. Read reality instead.
**cURL solution:** Real researchers don't need bounties (fame is sufficient). Remove bounty to eliminate slop.
**Demo solution:** Users don't need generated responses (page already contains info). Read DOM to eliminate hallucinations.
---
*Demogod's voice AI reads your site's DOM directly—no generation, no hallucinations, no verification overhead. Like cURL removing bug bounties to stop the slop, voice AI removes response generation to stop the hallucinations. One line of code. Zero AI-generated nonsense. [Try it on your site](https://demogod.me).*