# curl's "Ban You and Ridicule You" Security Policy Is Brutal Honesty—Voice AI for Demos Proves Why Clear Boundaries Beat Polite Platitudes
Daniel Stenberg, creator of curl, just updated the project's security.txt file with a warning that's causing quite a stir on Hacker News:
**"We will ban you and ridicule you in public if you waste our time on crap reports."**
No corporate speak. No "we appreciate all feedback." No "your input is valuable." Just a blunt threat: waste our time, face public ridicule.
The HN discussion exploded (483 points, 278 comments in 3 hours). Half the commenters are horrified by the "unprofessional" tone. The other half are celebrating the honesty.
But there's a deeper pattern here that applies directly to Voice AI for demos: **Clear boundaries prevent wasted effort. Polite platitudes invite noise.**
Curl's approach works because it reads the signal (quality security reports) and rejects the noise (automated scanners, bounty hunters submitting junk). Voice AI for demos works the same way: read DOM structure (signal), reject hallucinated navigation (noise).
Both succeed by being ruthlessly specific about what they accept and what they reject. Both fail when they try to be everything to everyone.
## The Security.txt File That Broke Politeness
Here's the full curl security.txt:
```
Contact: mailto:security@curl.se
Contact: https://github.com/curl/curl/security/advisories
Policy: https://curl.se/dev/vuln-disclosure.html
Preferred-Languages: en
Acknowledgments: https://curl.se/docs/security.html
# We offer NO (zero) rewards or other kinds of compensation for reported
# problems, but we offer gratitude and acknowledgments clearly stated in
# documentation around confirmed issues.
#
# We will ban you and ridicule you in public if you waste our time on crap
# reports.
Expires: 2026-10-22T00:00:00Z
Canonical: https://curl.se/.well-known/security.txt
```
**What makes this controversial:**
1. **No bug bounties:** Most major projects pay for security findings. Curl offers only "gratitude and acknowledgments."
2. **Public ridicule threat:** Most projects say "we appreciate all reports." Curl says "we'll mock you publicly for bad ones."
3. **Explicit rejection of noise:** Most projects accept any report politely. Curl pre-filters by warning off low-quality submissions.
The HN discussion reveals the divide:
**"This is unprofessional. You catch more flies with honey."**
**"This is necessary. Automated scanners submit thousands of garbage reports."**
Both are true. But which matters more: politeness or effectiveness?
## The Problem Curl Is Solving: Automated Noise Overwhelming Human Signal
Daniel Stenberg maintains curl, one of the most widely used pieces of software in the world. It's in billions of devices. It's in every data center. It's internet infrastructure.
That scale creates a problem: **automated security scanners find curl everywhere and submit reports for every theoretical vulnerability, regardless of actual impact.**
The volume is overwhelming:
- **Automated tools** scan for known CVE patterns and submit them without context
- **Bug bounty hunters** submit low-effort reports hoping for easy money
- **AI-generated reports** now flood in with hallucinated vulnerabilities
- **Well-meaning amateurs** report issues that aren't actually exploitable
Each report requires human time to triage, evaluate, and respond to. When 95% of reports are noise, the cost is staggering.
**Curl's solution:** Reject noise upfront with brutal honesty.
**The parallel to demos:**
Chatbots try to answer every question politely, even when they don't understand the UI:
- **User asks:** "Where's the submit button?"
- **Chatbot generates:** "The submit button is typically located in the bottom-right corner of forms."
- **Reality:** The submit button is actually in the top-left (or doesn't exist on this page)
The chatbot wasted the user's time with a polite hallucination instead of saying "I don't see a submit button on this page."
Voice AI for demos reads the DOM first:
- **User asks:** "Where's the submit button?"
- **Agent queries DOM:** `document.querySelector('button[type="submit"]')`
- **If found:** "The submit button is in the form at [specific location]"
- **If not found:** "I don't see a submit button on this page. The available buttons are: [list actual buttons]"
Clear boundaries. Read the DOM or admit you don't know. No polite hallucinations.
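This boundary can be sketched in a few lines. The sketch below assumes the page's buttons have already been collected (in a browser, via something like `document.querySelectorAll('button')`); the element shape and the `answerSubmitQuestion` helper are hypothetical names for illustration:

```javascript
// Hypothetical sketch: answer "Where's the submit button?" from page data
// rather than generating a guess. In a browser this array would come from
// document.querySelectorAll('button'); here it's plain data for clarity.
function answerSubmitQuestion(buttons) {
  const submit = buttons.find((b) => b.type === "submit");
  if (submit) {
    // Grounded answer: the element exists, so report where it actually is.
    return `The submit button is in ${submit.location}.`;
  }
  // Honest fallback: admit absence and list what IS on the page.
  const labels = buttons.map((b) => b.label).join(", ");
  return `I don't see a submit button on this page. Available buttons: ${labels}.`;
}

// Page with a submit button:
console.log(answerSubmitQuestion([
  { type: "submit", label: "Save", location: "the checkout form" },
]));
// Page without one:
console.log(answerSubmitQuestion([
  { type: "button", label: "Cancel", location: "the header" },
  { type: "button", label: "Help", location: "the footer" },
]));
```

The key design choice: there is no code path that invents a location. Either the element is in the data or the answer says so.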
## Why "Ban and Ridicule" Works: Costs Must Match Behavior
The HN debate centers on whether threatening ridicule is too harsh. But look at the economics:
**Without the warning:**
- Automated scanner submits 1000 junk reports
- Curl team spends 100 hours triaging
- Zero actual vulnerabilities found
- Cost: 100 hours of expert time wasted
**With the warning:**
- Serious researchers read the policy and understand expectations
- Automated scanners still run but operators think twice before submitting
- Bug bounty hunters look elsewhere (no money, high ridicule risk)
- AI report generators get filtered by human review
- Cost: Marginal deterrence effect, massive time savings
The "ban and ridicule" threat creates a cost for submitting junk. That cost must be high enough to deter noise while low enough not to deter signal.
**Is public ridicule too high a cost?**
The HN thread reveals this is calibrated correctly:
- Serious security researchers aren't deterred (they submit quality reports anyway)
- Automated noise is deterred (operators don't want public mockery)
- Bug bounty hunters are deterred (no reward, high risk)
The boundary works because it matches the problem: **automated/low-effort noise, not legitimate security research.**
**The demo parallel:**
Chatbots accept all questions politely, creating no cost for asking vague, unanswerable questions:
- "Make it better" (what is "it"? what is "better"?)
- "Fix the problem" (what problem? where?)
- "Show me the thing" (what thing?)
The chatbot tries to answer with a hallucination, wasting user time.
Voice AI can set clear boundaries:
- **Vague query:** "Make it better"
- **Agent response:** "I need more specific information. What would you like to improve? Options: [specific features visible in DOM]"
Cost for vague query = need to be specific. This creates a small friction that improves signal quality.
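A minimal sketch of that friction, assuming the list of visible features has already been extracted from the DOM; the vague/filler word lists and the `handleQuery` helper are illustrative assumptions, not a production classifier:

```javascript
// Hypothetical sketch: impose a small cost on vague queries by asking for
// specifics, offering only options actually visible on the page.
const VAGUE = new Set(["it", "this", "that", "thing", "problem", "better"]);
const FILLER = new Set(["make", "fix", "show", "me", "the", "a"]);

function handleQuery(query, visibleFeatures) {
  const words = query.toLowerCase().replace(/[^a-z\s]/g, "").split(/\s+/).filter(Boolean);
  // A query is "vague" here if every word is a placeholder or filler.
  const hasConcreteWord = words.some((w) => !VAGUE.has(w) && !FILLER.has(w));
  if (!hasConcreteWord) {
    return `I need more specific information. What would you like to improve? Options: ${visibleFeatures.join(", ")}`;
  }
  return null; // specific enough: proceed to DOM-grounded answering
}

console.log(handleQuery("Make it better", ["export settings", "theme", "notifications"]));
```

"Make it better" triggers the clarification; "where is the export button" passes through to normal handling.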
## The "No Rewards" Policy: When Gratitude Beats Money
Curl offers zero financial compensation for security reports. Only "gratitude and acknowledgments."
This is unusual. Google pays up to $31,337 for Chrome bugs. Microsoft pays up to $250,000 for Windows vulnerabilities. Even small startups offer bug bounties.
Why does curl refuse?
**From the policy:**
"We offer NO (zero) rewards or other kinds of compensation for reported problems, but we offer gratitude and acknowledgments clearly stated in documentation around confirmed issues."
The message: **We can't afford bounties. If you're here for money, go elsewhere. If you care about internet infrastructure, we'll acknowledge you publicly.**
This is brilliant filtering:
1. **Deters mercenaries:** Bounty hunters optimize for dollars per hour. curl pays $0/hour. They leave.
2. **Attracts idealists:** Security researchers who care about infrastructure quality submit anyway.
3. **Sets expectations:** No one can complain they weren't warned.
The result: **Higher signal-to-noise ratio at zero cost.**
**The demo parallel:**
Chatbots try to serve everyone:
- Enterprise customers with complex workflows
- Free users exploring features
- Curious visitors who might never sign up
- Automated scrapers
This creates noise. The chatbot can't distinguish "CEO of Fortune 500 company" from "random bot" so it treats all queries equally, generating hallucinations for everyone.
Voice AI can filter by context:
- **Logged-in users:** Full DOM access, detailed guidance
- **Anonymous visitors:** Limited guidance, focus on signup flow
- **Detected bots:** Minimal response or CAPTCHA
Match effort to signal quality. Don't spend the same resources on noise as on signal.
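The tiering above can be expressed as a simple policy function. This is a sketch under the assumptions in the bullets; how bots and logins are detected is a separate problem, and the field names are hypothetical:

```javascript
// Hypothetical sketch: match response effort to user context.
// Detection of bots and login state is assumed to happen upstream.
function guidancePolicy(user) {
  if (user.isBot) {
    return { domAccess: "none", response: "minimal response or CAPTCHA" };
  }
  if (user.loggedIn) {
    return { domAccess: "full", response: "detailed guidance" };
  }
  return { domAccess: "limited", response: "signup-flow guidance" };
}

console.log(guidancePolicy({ isBot: false, loggedIn: true }).domAccess);  // "full"
console.log(guidancePolicy({ isBot: true, loggedIn: false }).domAccess); // "none"
```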
## The Public Ridicule Threat: Why Shame Works When Incentives Don't
"We will ban you and ridicule you in public if you waste our time on crap reports."
This is the most controversial line. HN commenters call it:
- "Unprofessional"
- "Childish"
- "Counterproductive"
But also:
- "Necessary"
- "Honest"
- "Finally someone says it"
**Why ridicule works:**
1. **Automated scanners don't care about bans** (they'll just use a different email)
2. **Automated scanners don't care about rejection** (they submit to thousands of projects)
3. **But automated scanner *operators* care about public mockery** (reputational damage)
The threat targets the human operator, not the automated system.
**Example:**
Without ridicule threat:
- Scanner operator submits 10,000 low-quality reports to various projects
- Most projects politely reject them
- Operator continues (no cost to doing so)
With ridicule threat:
- Scanner operator sees curl's policy
- Calculates: "If I submit junk and get mocked publicly, my reputation suffers"
- Chooses not to submit (or manually reviews before submitting)
**The cost is asymmetric:**
- Curl's cost of ridicule: ~5 minutes to write a blog post
- Operator's cost of being ridiculed: Reputational damage across the security community
This asymmetry makes the threat credible and effective.
**The demo parallel:**
Chatbots can't "ridicule" bad questions, but they waste time on them:
User: "asdfjkl" (random keyboard smash)
Chatbot: "I'm not sure I understand. Could you please provide more details about what you're looking for?"
The chatbot spent tokens generating a polite response to noise.
Voice AI can reject noise efficiently:
User: "asdfjkl"
Agent: "Invalid input. Please ask a specific question about the page."
No polite elaboration. No generated fluff. Just a boundary: speak clearly or get rejected.
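A cheap rejection can be as simple as a vowel-ratio check. The heuristic below is a deliberately crude illustration (keyboard smashes like "asdfjkl" are vowel-poor compared to real sentences), not a real noise classifier:

```javascript
// Hypothetical sketch: reject keyboard-smash input cheaply instead of
// generating a polite paragraph. The vowel-ratio heuristic is illustrative.
function respond(input) {
  const text = input.trim();
  const letters = (text.match(/[a-z]/gi) || []).length;
  const vowels = (text.match(/[aeiou]/gi) || []).length;
  // Real English text runs roughly 35-40% vowels; "asdfjkl" has one in seven.
  const looksLikeNoise = letters === 0 || vowels / letters < 0.2;
  if (looksLikeNoise) {
    return "Invalid input. Please ask a specific question about the page.";
  }
  return "ok: route to DOM-grounded answering";
}

console.log(respond("asdfjkl"));
console.log(respond("Where is the submit button?"));
```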
## The HN Debate: "Professional" vs. "Honest"
The Hacker News thread reveals two worldviews:
**Camp 1: "This is unprofessional"**
- "You should always be polite, even to time-wasters"
- "Public ridicule creates a hostile environment"
- "Security researchers will be scared away"
- "You catch more flies with honey"
**Camp 2: "This is necessary honesty"**
- "Automated scanners have made security reporting unusable"
- "Politeness invites noise"
- "Clear boundaries protect maintainer time"
- "If you're scared of this policy, you were probably going to submit crap anyway"
Both camps make valid points. But which is actually true?
**The empirical evidence:**
curl has used this policy for years. Outcomes:
- **Security reports haven't stopped** (serious researchers still submit)
- **Quality has improved** (noise is pre-filtered)
- **Curl remains secure** (vulnerabilities are still found and fixed)
- **Maintainer time is protected** (Daniel can focus on real issues)
The policy works.
**Why does "unprofessional" honesty outperform "professional" politeness?**
Because the problem isn't one-time human error. It's systemic automated noise.
If curl received 10 bad reports per year from well-meaning amateurs, politeness would be optimal. Say "thanks, but this isn't a vulnerability" and move on.
But curl receives thousands of automated reports. Politeness scales linearly with noise. Boundaries scale sublinearly (one harsh policy deters thousands of future reports).
**The demo parallel:**
Chatbots are "professional" - they never say "that's a dumb question." They always generate a polite response, even to:
- Empty messages
- Random characters
- Repeated questions
- Requests for things that don't exist
This wastes resources. Every polite hallucination costs tokens, latency, and user trust (when the hallucination is wrong).
Voice AI can be "honest":
- Empty message → "Please ask a question"
- Random characters → "Invalid input"
- Repeated question → "I already answered this: [previous answer]"
- Request for nonexistent feature → "This page doesn't have that. Available options: [actual list]"
Honesty protects resources. Boundaries filter noise.
## The Three Types of Security Reports and How Boundaries Filter Them
Curl's policy implicitly categorizes security reports:
### Type 1: Legitimate Vulnerabilities (SIGNAL)
- Researcher found actual exploitable bug
- Provided clear reproduction steps
- Understands curl's codebase
- **Response:** Gratitude, acknowledgment, fix
**curl's policy impact:** No deterrent. Serious researchers submit anyway.
### Type 2: Theoretical Issues (NOISE)
- Automated scanner found pattern matching CVE
- No proof of exploitability
- No understanding of curl's architecture
- **Response:** Polite rejection (pre-policy) or deterrence (post-policy)
**curl's policy impact:** Strong deterrent. Operators don't want ridicule for submitting scanner output.
### Type 3: Incompetent Submissions (NOISE)
- Doesn't understand what a vulnerability is
- Reports features as bugs
- AI-generated hallucinations
- **Response:** Wasted triage time (pre-policy) or deterrence (post-policy)
**curl's policy impact:** Strong deterrent. Fear of public mockery prevents submission.
**The filtering math:**
Without policy:
- 5% Type 1 (signal)
- 40% Type 2 (automated noise)
- 55% Type 3 (incompetent noise)
- Maintainer spends 95% of time on noise
With policy:
- 50% Type 1 (signal - unchanged)
- 30% Type 2 (reduced - some operators filter themselves)
- 20% Type 3 (reduced - fear of ridicule)
- Maintainer spends 50% of time on noise
Even moderate deterrence creates massive time savings.
**The demo parallel:**
Chatbots receive three types of queries:
**Type 1: Specific, answerable questions (SIGNAL)**
- "Where is the submit button on this form?"
- "How do I export my data?"
- "What's the difference between these two options?"
**Type 2: Vague but good-faith questions (NOISE)**
- "Make this better"
- "Fix the problem"
- "Show me settings"
**Type 3: Unanswerable or malicious queries (NOISE)**
- Random characters
- Requests for features that don't exist
- Attempts to jailbreak the system
Chatbots try to answer all three types politely, wasting resources on Type 2 and 3.
Voice AI can filter:
- **Type 1:** Full DOM reading, detailed response
- **Type 2:** "Please be more specific. I can help with: [list of visible options]"
- **Type 3:** "Invalid input" or silent rejection
Boundaries improve signal-to-noise without deterring legitimate users.
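The three-way split above can be wired into one triage router. Everything here is an assumption for illustration: the vowel-ratio noise check, the "mentions a real page option" test for specificity, and the `triage` name itself:

```javascript
// Hypothetical triage sketch mapping the three query types to response
// policies. Classification heuristics are deliberately simple illustrations.
function triage(query, pageOptions) {
  const text = query.trim().toLowerCase();
  const letters = (text.match(/[a-z]/g) || []).length;
  const vowels = (text.match(/[aeiou]/g) || []).length;
  if (letters === 0 || vowels / letters < 0.2) {
    // Type 3: unanswerable noise. Reject cheaply.
    return { type: 3, response: "Invalid input" };
  }
  const mentionsPage = pageOptions.some((o) => text.includes(o.toLowerCase()));
  if (!mentionsPage) {
    // Type 2: good faith but vague. Ask for specifics, listing real options.
    return { type: 2, response: `Please be more specific. I can help with: ${pageOptions.join(", ")}` };
  }
  // Type 1: specific and answerable. Worth full DOM reading.
  return { type: 1, response: "read DOM, answer in detail" };
}

console.log(triage("asdfjkl", ["export", "settings"]).type);              // 3
console.log(triage("make this better", ["export", "settings"]).type);     // 2
console.log(triage("how do I export my data?", ["export", "settings"]).type); // 1
```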
## The Acknowledgment Economy: When Public Credit Beats Cash
Curl's "no rewards" policy is paired with a strong acknowledgments program:
"We offer gratitude and acknowledgments clearly stated in documentation around confirmed issues."
**What this means:**
- Your name in curl's security advisories
- Credit in release notes
- Public thanks from Daniel Stenberg
- Recognition in the security community
**Why this works:**
For security researchers motivated by reputation (not money):
- curl is internet infrastructure (high prestige)
- Daniel Stenberg is respected (his acknowledgment matters)
- Security community sees curl CVEs (visibility is valuable)
For bounty hunters motivated by money:
- curl has zero financial value
- They go elsewhere
- curl's signal-to-noise improves
**The filtering effect:**
curl's acknowledgment-only policy automatically filters for researchers who value:
1. **Infrastructure impact** over financial reward
2. **Community recognition** over private bounty payments
3. **Technical quality** over volume of submissions
These are exactly the researchers who submit high-quality reports.
**The demo parallel:**
Chatbots try to monetize all users equally:
- Free tier with ads/limited access
- Pro tier with full features
- Enterprise tier with custom support
This creates mixed incentives. Free users expect the same assistance as paying users, generating noise.
Voice AI can use the acknowledgment model:
- **Anonymous users:** Basic guidance, credit system ("ask good questions to unlock better features")
- **Registered users:** Full guidance, acknowledgment in community ("power user badge")
- **Paying users:** Priority support, named recognition
Filter by engagement quality, not just payment. Acknowledge contributors publicly.
## The "Waste Our Time" Standard: Why Vagueness Is Hostility
"We will ban you and ridicule you in public if you waste our time on crap reports."
The key phrase: **"waste our time."**
Not "submit incorrect reports."
Not "make mistakes."
Not "lack expertise."
**"Waste our time."**
This implies:
- You didn't do basic research
- You didn't read the documentation
- You didn't test your claim
- You submitted automated scanner output without review
**"Waste our time" is a moral judgment, not a technical one.**
It says: Your report demonstrates you don't value our time. Therefore, we don't value your report.
**Why this standard works:**
It distinguishes:
- **Good-faith mistakes:** Researcher tried hard, got it wrong → polite correction
- **Bad-faith submissions:** Didn't try at all → ban and ridicule
The distinction is effort, not outcome.
**Examples from security reporting:**
**Good report (wrong vulnerability, high effort):**
"I found what appears to be a buffer overflow in curl_parse_header(). I've attached a minimal reproduction case, tested it on three platforms, and reviewed the source code. However, I might be misunderstanding how the buffer allocation works. Could you confirm if this is exploitable?"
*Outcome: Polite response explaining why it's not a vulnerability. Researcher learns. No ridicule.*
**Bad report (might be right, zero effort):**
"My automated scanner found CVE-2024-XXXX pattern in your code. Please fix and pay bounty."
*Outcome: Ban and potential ridicule if repeated. No value provided.*
**The demo parallel:**
Chatbots treat "Where's the submit button?" and "asdfjkl" as equally valid inputs, generating polite responses to both.
Voice AI can distinguish:
- **Good-faith question:** "Where's the submit button?" → Read DOM, provide specific answer
- **Low-effort noise:** "asdfjkl" → Reject input, request valid question
The standard is: Did the user make a minimal effort to ask clearly?
If yes → full assistance.
If no → boundary enforcement.
## The Open Source Burden: When Free Software Costs More Than Paid
curl is open source. It's free. It's in everything.
This creates an expectation problem:
**Users think:**
"It's free, so I can demand anything. They should be grateful I'm using it."
**Maintainers' reality:**
"It's free, which means I'm working without payment. My time is my own resource to protect."
The mismatch creates toxicity:
- Users demand features with entitled tone
- Users submit low-quality bug reports
- Users expect instant responses
- Users threaten to "switch to competitor" (who they'll also get for free)
**The burden of free software:**
When you charge for software:
- Bad customers can be fired
- Support time is compensated
- Boundaries are enforced by contract
When software is free:
- Every user feels equally entitled
- Support time is donated
- Boundaries must be enforced socially (ridicule, bans)
**curl's policy is self-protection:**
"We don't owe you anything. We give you infrastructure for free. In return, we demand basic respect for our time."
**The HN debate misses this:**
Commenters saying "this is unprofessional" are applying paid-software standards to free software. In paid software, customers have leverage. In free software, maintainers have discretion.
**The demo parallel:**
Free chatbot demos create the same expectation mismatch:
**Users think:**
"It's a free demo, I can ask anything and expect perfect answers."
**Reality:**
"It's a free demo, we have limited resources, hallucinations are possible."
When users waste demo resources on:
- Testing jailbreaks
- Asking unanswerable questions
- Requesting features that don't exist
...they're wasting limited demo capacity that could serve legitimate users.
Voice AI can enforce boundaries:
- Rate limiting per user
- Complexity limits on queries
- Requirement to be logged in for detailed assistance
Protect the resource from abuse without eliminating access.
## The Emotional Labor of Politeness: Why "Professional" Is Expensive
"You catch more flies with honey."
This is the recurring criticism of curl's policy. Be polite. Be professional. Don't threaten ridicule.
But politeness is emotional labor. It costs time and energy.
**The cost structure:**
**Polite rejection of bad security report:**
1. Read report (5 minutes)
2. Understand claim (10 minutes - often unclear)
3. Research if it's been reported before (5 minutes)
4. Test if it's actually a vulnerability (20 minutes)
5. Draft polite rejection explaining why it's not a vulnerability (15 minutes)
6. Respond with gratitude for their time (5 minutes)
**Total: 60 minutes per bad report**
**Brutal rejection:**
1. Read first paragraph (30 seconds)
2. Recognize pattern of automated scanner (30 seconds)
3. Respond: "This is scanner output without analysis. Banned." (30 seconds)
**Total: 90 seconds per bad report**
**At scale:**
curl receives hundreds of security reports per year. Suppose 100 arrive in a year and 70% are noise:
**Polite approach:** 70 reports × 60 minutes = 4,200 minutes = **70 hours of maintainer time**
**Brutal approach:** 70 reports × 1.5 minutes = 105 minutes = **1.75 hours of maintainer time**
Politeness costs **40x more time** than boundaries.
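The arithmetic, spelled out with the article's assumed inputs (70 noise reports, 60 minutes of polite triage versus 1.5 minutes of brutal triage):

```javascript
// The time math above, using the article's assumed inputs.
const noiseReports = 70;
const politeMinutes = noiseReports * 60;   // 4200 minutes
const brutalMinutes = noiseReports * 1.5;  // 105 minutes

console.log(politeMinutes / 60);               // 70 hours
console.log(brutalMinutes / 60);               // 1.75 hours
console.log(politeMinutes / brutalMinutes);    // 40x cost multiplier
```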
**The trade-off:**
Those 68.25 hours could be spent:
- Fixing actual vulnerabilities
- Adding features
- Improving documentation
- Mentoring contributors
Instead, they're spent on emotional labor for people who didn't respect the maintainer's time in the first place.
**Why "professional" is a luxury:**
Paid support teams can afford politeness because:
- They're compensated for emotional labor
- They can hire more people
- Customer satisfaction is a business metric
Open source maintainers can't afford it because:
- No compensation for emotional labor
- Can't hire help (no revenue)
- User satisfaction doesn't pay bills
**The demo parallel:**
Chatbots perform expensive emotional labor on every interaction:
User: "This sucks"
Chatbot: "I'm sorry you're having a frustrating experience! I'd love to help make things better. Could you tell me more about what's not working well for you? I'm here to assist and want to ensure you have a positive experience with our product!"
**Cost:** ~40 tokens of polite response to a vague user rant.
Voice AI can be efficient:
User: "This sucks"
Agent: "What specific issue can I help you with?"
**Cost:** ~8 tokens requesting specific actionable input.
The agent isn't rude. It's just not doing emotional labor for vague complaints.
## The "Crap Reports" Definition: What Actually Wastes Time
"We will ban you and ridicule you in public if you waste our time on crap reports."
What qualifies as a "crap report"?
**Based on curl's vulnerability disclosure policy, crap reports include:**
### 1. **Automated Scanner Output**
- No manual review
- No context
- No understanding of whether pattern is actually exploitable
- Just copy-pasted scan results
**Why this is crap:** Scanner might flag theoretical issue that isn't exploitable in curl's architecture. Maintainer must research to determine exploitability. Submitter did zero work.
### 2. **Reports Without Reproduction Steps**
- "I think there's a vulnerability in X"
- No code to reproduce
- No explanation of impact
- No testing
**Why this is crap:** Maintainer must recreate researcher's entire thought process, often failing because the report is too vague.
### 3. **Duplicate Reports**
- Issue already reported
- Issue already fixed
- Issue already documented as not-a-vulnerability
**Why this is crap:** Demonstrates submitter didn't read existing security advisories before wasting maintainer time.
### 4. **Non-Vulnerabilities**
- Features reported as bugs
- Intended behavior reported as security issues
- Theoretical issues with no exploit path
**Why this is crap:** Demonstrates submitter doesn't understand what a security vulnerability is.
### 5. **AI-Generated Hallucinations**
- LLM reviewed code and hallucinated a vulnerability
- No human verification
- Often includes plausible-sounding but incorrect analysis
**Why this is crap:** Combines automated scanner problems with lack of reproduction steps.
**The pattern:**
All "crap reports" share one trait: **Submitter didn't invest minimal effort before demanding maintainer's time.**
**The demo parallel:**
Chatbots receive equivalent "crap queries":
### 1. **Automated Testing**
- Bots probing for jailbreaks
- Automated accessibility testing submitting gibberish
- Load testing flooding with repetitive queries
### 2. **Queries Without Context**
- "Fix it" (what is "it"?)
- "Show me" (show you what?)
- "Make better" (make what better? how?)
### 3. **Duplicate Queries**
- Asking same question multiple times
- Asking for features that don't exist (after being told they don't exist)
### 4. **Non-Questions**
- Random characters
- Emojis only
- Copy-pasted content unrelated to the product
### 5. **AI-Testing**
- Prompt injection attempts
- Trying to extract system prompts
- Seeing if chatbot will say inappropriate things
Voice AI can filter these:
- Detect automated patterns → rate limit
- Require context → "Please specify what you want to fix"
- Track query history → "I already answered this"
- Reject non-questions → "Invalid input"
- Block AI-testing → Security boundaries
The goal isn't to be hostile. It's to protect resources for legitimate users.
## The Boundary Paradox: Harshness Attracts Quality
curl's harsh policy has a counterintuitive effect:
**Prediction:** Harsh policy deters all reporters, including good ones
**Reality:** Harsh policy attracts higher-quality reporters
**Why does this happen?**
### 1. **Selection Effect**
Researchers who read the policy and think:
- "This is reasonable, automated scanners are annoying" → High-quality researchers who understand the problem
- "This is mean, I won't submit anything" → Low-quality researchers who were going to submit marginal reports
The policy self-selects for people who understand maintainer constraints.
### 2. **Credibility Signal**
A project that says "we'll ridicule crap reports" signals:
- We take security seriously (not a honeypot)
- We know the difference between quality and noise (experienced maintainers)
- We respect researcher time (won't waste it with bureaucracy)
High-quality researchers value these signals.
### 3. **Reduced Noise Improves Response Time**
When maintainers aren't drowning in noise:
- Good reports get faster responses
- Fixes are implemented quicker
- Researchers see their work having immediate impact
This creates a positive feedback loop: quality researchers get results, submit more quality reports.
**The demo parallel:**
Strict boundaries in demos attract better users:
**Boundary:** "Please ask specific questions about visible UI elements"
**Low-quality users think:** "Ugh, too restrictive, I'll go elsewhere"
**High-quality users think:** "Good, this agent actually understands the page and can give precise answers"
The boundary filters for users who want accuracy over politeness.
**Example:**
**Chatbot (no boundaries):**
User: "How do I do the thing?"
Chatbot: "I'd be happy to help! Could you provide more details about what you're trying to accomplish? I can assist with a wide variety of tasks including account management, navigation, purchases, and more. What specific feature are you interested in?"
User leaves (too vague, wastes time)
**Voice AI (boundaries):**
User: "How do I do the thing?"
Agent: "Please specify which feature you need. Available options on this page: [list of 5 specific buttons]"
User either:
- Leaves (low-quality user who won't be satisfied anyway)
- Responds with specific request (high-quality user who gets precise help)
Boundaries create self-selection.
## The Gratitude Model: Why Acknowledgment Is Compensation
curl offers zero money for security reports. But it offers something else:
**"Gratitude and acknowledgments clearly stated in documentation around confirmed issues."**
This isn't as weak as it sounds.
**What curl provides:**
1. **Public credit** in security advisories (your name associated with finding a real vulnerability in infrastructure software)
2. **Technical validation** from respected maintainer (Daniel Stenberg saying "this is a real issue" has weight)
3. **Community reputation** (other researchers see your quality work)
4. **Portfolio material** (legitimate security researchers list their CVEs)
**For the right researchers, this is more valuable than bug bounties:**
Bug bounties attract:
- Professional bounty hunters (volume over quality)
- Automated systems (spray and pray)
- Opportunists (hoping for easy money)
Acknowledgment attracts:
- Idealists (care about internet infrastructure)
- Researchers (building reputation for future employment)
- Experts (want validation from respected maintainers)
**The math:**
If curl offered $1,000 per vulnerability:
- Would receive 10x more reports (mostly noise from bounty hunters)
- Would spend 100x more time triaging
- Would maybe find 1.2x more real vulnerabilities
- Would spend $10,000+ per year on bounties
- Net outcome: worse signal-to-noise, worse use of time, high financial cost
With acknowledgment-only:
- Receives moderate volume of reports
- Spends reasonable time triaging
- Finds high-quality vulnerabilities
- Spends $0 on bounties
- Net outcome: good signal-to-noise, efficient time use, zero financial cost
**The demo parallel:**
Chatbot demos try to be "full-featured" for free users, creating entitled expectations:
Free user: "Why can't this chatbot do X?"
Reality: Because X requires expensive compute and you're not paying anything
Voice AI can use the acknowledgment model:
- **Anonymous users:** Basic guidance, learn by using
- **Registered users:** Full guidance, name appears in "helpful community members" if they ask good questions
- **Contributors:** Special access, public acknowledgment for quality feedback
Users who want acknowledgment provide higher-quality input than users who want free unlimited access.
## The Ridicule Threat as Deterrent, Not Punishment
"We will ban you and ridicule you in public if you waste our time on crap reports."
This reads as punishment. But it's actually deterrence.
**The distinction:**
**Punishment:** You submit bad report → We ridicule you → You suffer consequences for past action
**Deterrence:** We announce ridicule policy → You read it → You choose not to submit bad report → No ridicule occurs
**curl's goal isn't to ridicule people.** It's to prevent submissions that would require ridicule.
**Evidence:**
If curl actually wanted to ridicule people (sadistic goal), they would:
- Not announce the policy (surprise ridicule is more entertaining)
- Make the policy subtle (so people accidentally violate it)
- Publicly ridicule first-time offenders (maximize embarrassment)
Instead, curl:
- Announces policy prominently (in security.txt)
- Makes policy explicit (clear what counts as "crap")
- Aims to never use it (deterrence works)
**The goal is zero ridicule, achieved through threat of ridicule.**
**Game theory:**
**Without ridicule threat:**
- Submitter cost of bad report: 0 (just send automated output)
- Maintainer cost of bad report: High (60 minutes to triage)
- Equilibrium: Flood of bad reports
**With ridicule threat:**
- Submitter cost of bad report: Reputational damage risk
- Maintainer cost of bad report: Still high, but less frequent
- Equilibrium: Fewer bad reports
The threat changes incentives without needing to execute the threat.
**The demo parallel:**
Chatbots never "punish" bad queries, so users have no incentive to improve:
User: "asdfjkl"
Chatbot: "I'm sorry, I didn't understand that. Could you please rephrase?"
User: "asdfjkl" (repeats)
Chatbot: "I'm still having trouble understanding. How can I help you today?"
Infinite patience for zero-effort input.
Voice AI can use deterrence:
User: "asdfjkl"
Agent: "Invalid input. Repeated invalid inputs will result in timeout."
User: "asdfjkl" (repeats)
Agent: "Too many invalid inputs. Please try again in 60 seconds."
The user learns: provide valid input or lose access temporarily.
This isn't punishment (you can come back). It's deterrence (learn to use the system correctly).
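That deterrence loop is simple to implement. A minimal sketch, assuming a hypothetical `InputGate` with illustrative thresholds (two strikes, a 60-second lockout) and an injectable clock so the behavior is testable:

```python
import time

class InputGate:
    """Deterrence, not punishment: repeated invalid inputs earn a short
    timeout, after which the user can try again. The thresholds here are
    illustrative assumptions, not a product spec."""

    def __init__(self, max_invalid=2, lockout_seconds=60, clock=time.monotonic):
        self.max_invalid = max_invalid
        self.lockout_seconds = lockout_seconds
        self.clock = clock
        self.invalid_count = {}   # user_id -> consecutive invalid inputs
        self.locked_until = {}    # user_id -> timestamp when lockout ends

    def check(self, user_id, is_valid):
        now = self.clock()
        if self.locked_until.get(user_id, 0) > now:
            return "locked: try again later"
        if is_valid:
            self.invalid_count[user_id] = 0  # valid input resets the count
            return "ok"
        count = self.invalid_count.get(user_id, 0) + 1
        self.invalid_count[user_id] = count
        if count >= self.max_invalid:
            # Temporary lockout, then a clean slate: the user can come back.
            self.locked_until[user_id] = now + self.lockout_seconds
            self.invalid_count[user_id] = 0
            return "locked: too many invalid inputs"
        return "invalid: repeated invalid inputs will result in timeout"
```

The lockout expires on its own, which is what makes this deterrence rather than punishment: the only lasting consequence is the lesson.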
## The Professional Myth: Why Politeness Signals Weakness
HN commenters argue curl's policy is "unprofessional." But what does "professional" mean?
**In corporate environments:**
- Be polite to everyone
- Never threaten consequences
- Always say "thank you for your feedback"
- Maintain pleasant tone regardless of other party's behavior
**Why this works in corporations:**
- Legal liability (hostility creates lawsuits)
- Employment leverage (employees can be fired)
- Revenue constraints (customers can leave)
**Why this doesn't work in open source:**
- No legal liability (free software has no warranty)
- No employment leverage (contributors are volunteers)
- No revenue constraints (users don't pay)
**The professional myth:** Politeness is free and always beneficial
**The reality:** Politeness is expensive emotional labor with trade-offs
**What "professional" actually signals:**
In corporate context:
- We have resources to be polite
- We value your business
- We're bound by legal/HR constraints
In open source context:
- We're donating our time
- We have no obligation to you
- We're constrained by our own capacity
**curl's "unprofessional" policy actually signals:**
- We know our constraints
- We protect our resources
- We won't pretend we can handle infinite noise
This is more honest than corporate politeness.
**The demo parallel:**
Chatbots use corporate politeness signals:
"I'm so sorry you're experiencing difficulties! I'm here to help and committed to ensuring you have the best possible experience with our product. Your satisfaction is our top priority!"
This signals:
- We'll waste tokens being polite
- We'll pretend we can solve everything
- We won't set boundaries
Voice AI can use honest communication:
"I can help you navigate this page. Ask specific questions about visible elements for best results."
This signals:
- We know our capabilities
- We set clear expectations
- We deliver on promises
Honesty beats performative politeness.
## The Verdict: Boundaries Scale, Politeness Doesn't
curl's security policy works because **boundaries scale sublinearly with volume, while politeness scales linearly.**
**Politeness model:**
- 100 bad reports = 100 polite rejections = 100 hours of work
- 1,000 bad reports = 1,000 polite rejections = 1,000 hours of work
- 10,000 bad reports = 10,000 polite rejections = 10,000 hours of work
**Boundary model:**
- Write policy once = 1 hour
- Enforce policy on 100 reports = 2 hours
- Enforce policy on 1,000 reports = 3 hours (deterrence reduces volume)
- Enforce policy on 10,000 potential reports = 3 hours (99% deterred by policy)
**At scale, boundaries are 1000x more efficient than politeness.**
**Why people misunderstand this:**
At small scale (10 reports/year), politeness works fine:
- 10 polite rejections = 10 hours = Manageable
- Writing boundary policy = 1 hour + enforcement overhead = Not worth it
At large scale (1,000 reports/year), politeness breaks:
- 1,000 polite rejections = 1,000 hours = Impossible
- Boundary policy = 1 hour + 3 hours enforcement = Essential
curl operates at large scale. Boundary policy is mandatory, not optional.
**The demo parallel:**
Small demos (100 users/day) can use polite chatbots:
- 100 queries × 30 seconds average = 50 minutes total compute
- Being polite to everyone works
Large demos (100,000 users/day) need boundaries:
- 100,000 queries × 30 seconds = 833 hours of compute
- Can't afford politeness to noise
Voice AI provides boundaries:
- Filter automated traffic
- Rate limit per user
- Require specificity in queries
- Reject invalid input efficiently
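Those boundaries can be approximated with a cheap admission check that runs before any expensive model call. The heuristics below (a set of intent words plus a vocabulary built from visible elements) are illustrative assumptions, not a production filter:

```python
import re

# Hypothetical intent vocabulary; a real system would tune this list.
INTENT_WORDS = {"where", "how", "what", "click", "find", "show"}

def admit_query(query: str, visible_elements: list[str]) -> bool:
    """Admit a query only if it contains an intent word or references a
    token from an element actually visible on the page; reject gibberish
    cheaply, before spending any model compute."""
    tokens = set(re.findall(r"[a-z]+", query.lower()))
    element_tokens = {t for name in visible_elements
                      for t in re.findall(r"[a-z]+", name.lower())}
    return bool(tokens & (INTENT_WORDS | element_tokens))
```

Under this check, "Where's the submit button?" is admitted while "asdfjkl" is rejected in microseconds, which is the whole economics of the boundary: the filter costs almost nothing and the thing it protects costs a lot.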
**The principle:** As scale increases, boundaries become mandatory.
---
**The bottom line:** curl's "ban you and ridicule you" policy isn't unprofessional hostility. It's resource protection through honest communication. The policy works because it creates clear boundaries that deter noise while preserving signal.
Voice AI for demos follows the same principle. Read the DOM (signal), reject hallucinations (noise). Set boundaries (specific queries about visible elements), enforce them (reject vague input). Protect resources (limited compute) for legitimate users (people trying to accomplish tasks).
Politeness is a luxury affordable at small scale. At large scale, boundaries are mandatory. curl proves that brutal honesty beats performative niceness when defending limited resources against unlimited noise.
That's not just a security policy. It's an architectural principle for any system facing signal-to-noise problems—including Voice AI guidance that needs to distinguish "Where's the submit button?" from "asdfjkl."