"I Found a Vulnerability. They Found a Lawyer." - When Responsible Disclosure Gets Legal Threats While AI Vendors Deploy Offensive Capability

"I Found a Vulnerability. They Found a Lawyer." - When Responsible Disclosure Gets Legal Threats While AI Vendors Deploy Offensive Capability
# "I Found a Vulnerability. They Found a Lawyer." - When Responsible Disclosure Gets Legal Threats While AI Vendors Deploy Offensive Capability **Meta Description**: Security researcher discloses GDPR-violating vulnerability (default password, incrementing IDs, minors' data exposed), gets threatened with criminal prosecution. Article #193 documented Anthropic's 500+ zero-days without accountability. Individual researchers face legal threats while vendors deploy offensive capability. --- Yesterday we documented Anthropic announcing offensive security capability—Claude found 500+ zero-days in production codebases (Article #193)—while trust violations (#179, #187) remain unaddressed and accountability infrastructure (Article #192's five components) is missing. Today, Yannick Dixken publishes "I found a Vulnerability. They found a Lawyer"—a security researcher who responsibly disclosed a trivial GDPR-violating vulnerability gets threatened with criminal prosecution instead of thanks. **The pattern Articles #193-194 document together:** **Offensive capability deployment:** AI vendors (Anthropic) find 500+ zero-days with insufficient accountability (missing 4 of 5 Article #192 components), dual-use concerns acknowledged but unsolved. **Defensive disclosure punishment:** Individual researchers find vulnerabilities, follow responsible disclosure frameworks (involve national CSIRT, 30-day embargo), get legal threats, NDAs, and criminal prosecution warnings instead of gratitude. **This is the inverse of what security requires:** - Offensive capability (helps attackers AND defenders) deployed with weak accountability - Defensive disclosure (helps defenders only) punished with legal threats Let me connect Dixken's case study to the fifteen-article framework (#179-193) and explain why this pattern accelerates organizational rejection (Article #182: 90% report zero AI impact). ## The Vulnerability: As Trivial As It Gets From Dixken's disclosure: > "The portal used **incrementing numeric user IDs** for login. User XXXXXX0, XXXXXX1, XXXXXX2, and so on. That alone is a red flag, but it gets worse: every account was provisioned with a **static default password** that was never enforced to be changed on first login." **The "authentication" to access full user profiles:** 1. Guess a number (sequential IDs) 2. Type the default password (same for all accounts) 3. Access personal data (name, address, phone, email, date of birth) **No rate limiting. No account lockout. No MFA.** From the disclosure: > "A significant portion of IDs in the sample were still using the default password. The data exposed wasn't just email addresses - it included full personal profiles, **including those of underage students**." **One verified account (from the proof-of-concept):** - Date of birth: 2011 (14 years old at time of disclosure) - Full name, email, phone number, nationality, physical home address - Accessible with sequential number + default password **This is GDPR Article 5(1)(f) violation at its most basic:** Personal data not processed with "appropriate security" (static default passwords on incrementing IDs). **Dixken disclosed this responsibly:** 1. Contacted CSIRT Malta (competent national authority per Malta's NCVDP) 2. Emailed organization directly with proof-of-concept 3. Offered 30-day embargo before public disclosure 4. Deleted all accessed data immediately after verification **Standard responsible disclosure framework. 
## The Response: "They Found a Lawyer"

Two days after disclosure, Dixken received a response. Not from IT. From the organization's **Data Privacy Officer's law firm.**

**Initial acknowledgment (positive):**

- Launched investigation
- Resetting default passwords
- Rolling out 2FA

**Then the tone shifted.** From the law firm:

> "While we genuinely appreciate your seemingly good intentions and transparency in highlighting this matter to our attention, we must respectfully note that notifying the authorities prior to contacting the Group creates additional complexities in how the matter is perceived and addressed and also exposes us to unfair liability."

**Translation:** "We wish you hadn't told the government about our security issue."

**But Malta's NCVDP explicitly requires reporting confirmed vulnerabilities to BOTH the organization AND CSIRT Malta.** Dixken followed the documented framework.

**Then came the threat:**

> "We also do not appreciate your threat to make this matter public [...] and remind you that you may be held accountable for any damage we, or the data subjects, may suffer as a result of your own actions, which actions likely constitute a criminal offence under Maltese law."

**Their portal had default passwords exposing minors' data, and the researcher who found it "likely committed a criminal offence."**

**The law firm also sent a declaration requiring:**

- Confirmation of data deletion (reasonable)
- Non-disclosure agreement (not reasonable)
- Passport ID (compliance pressure)
- Signature deadline: end of business the same day

From the declaration:

> "I also declare that I shall keep the content of this declaration strictly confidential."

**This is an NDA disguised as a data deletion confirmation:** sign away your right to discuss the disclosure process itself, including the fact that a vulnerability existed.
## Connection to Article #193: Offensive Capability Without Accountability

Let me connect Dixken's case to Anthropic's Claude Code Security announcement (Article #193):

**Anthropic (Article #193 - Offensive Capability):**

- Found 500+ zero-days in production codebases
- Dual-use acknowledged: "Same capabilities help defenders AND attackers"
- Accountability infrastructure: Missing 4 of 5 Article #192 components
  - ❌ Bounded execution (not disclosed)
  - ❌ Clear seams (not disclosed)
  - ❌ Deterministic verification (AI-verifying-AI)
  - ⚠️ Organizational oversight ("human approval" only)
  - ❌ Cognitive preservation (security teams risk expertise atrophy)
- Trust violations (#179, #187) unaddressed
- Public disclosure: Announcement, blog post, limited research preview

**Dixken (Article #194 - Defensive Disclosure):**

- Found trivial vulnerability (default password + incrementing IDs)
- Single-use: Helps defenders only (no attacker value in responsible disclosure)
- Accountability infrastructure: Full responsible disclosure framework
  - ✅ Bounded scope (minimum access needed to verify)
  - ✅ Clear process (CSIRT Malta + organization)
  - ✅ Deterministic verification (logged in with the default password, confirmed exposure)
  - ✅ Organizational involvement (contacted the organization directly)
  - ✅ Data deletion (no personal information retained)
- Trust maintained (followed NCVDP, 30-day embargo, offered assistance)
- Response: Legal threats, criminal prosecution warnings, forced NDA

**The inversion:**

**Offensive capability (Anthropic):**

- Dual-use (helps attackers AND defenders)
- Missing 4 of 5 accountability components
- Trust violations unaddressed
- Result: Public announcement, research preview, no legal consequences

**Defensive disclosure (Dixken):**

- Single-use (helps defenders only)
- Full accountability framework followed
- Trust maintained throughout
- Result: Legal threats, criminal prosecution warnings, forced NDA

**Organizations deploy AI with offensive capability and insufficient accountability (Article #193) while punishing individual researchers who disclose defensively with full accountability (Article #194).**

**This is backwards.**

## Pattern #9: Defensive Disclosure Punishment While Offensive Capability Deploys

Let me extend the fifteen-article framework to document this pattern:

**Article #193** documented: Offensive capability (500+ zero-days) requires MORE accountability (Article #192's five components), gets LESS (4 of 5 missing).

**Article #194** documents: Defensive disclosure (helps defenders only) provides FULL accountability (responsible disclosure framework), gets PUNISHED (legal threats, criminal prosecution warnings).

**The pattern: Organizations invert security incentives.**

**What security requires:**

- Offensive capability (dual-use) = Strict accountability, transparency, oversight
- Defensive disclosure (single-use) = Gratitude, collaboration, user notification

**What organizations provide:**

- Offensive capability (dual-use) = Weak accountability, trust violations, insufficient transparency
- Defensive disclosure (single-use) = Legal threats, NDAs, blame shifting

**From Dixken's disclosure, the organization's position:**

> "We contend that it is the responsibility of users to change their own password (after we allocate a default one)."
**A company that assigned the SAME default password to every account, never forced a password change, and used incrementing numeric IDs is blaming USERS for not securing their accounts.**

**Accounts including minors.**

**GDPR Article 5(1)(f) responsibility:**

> "Personal data shall be processed in a manner that ensures appropriate security of the personal data, including protection against unauthorised or unlawful processing and against accidental loss, destruction or damage, using appropriate technical or organisational measures."

**The data controller (the organization) is responsible for security measures. Static default passwords on incrementing IDs are NOT "appropriate measures."**

**But instead of taking responsibility, they threatened the researcher with criminal prosecution.**

## Connection to Article #188: Verification Infrastructure Applied to Disclosure

Article #188 (Roya Pakzad) showed AI guardrails exhibit 36-53% score discrepancies and hallucinate safety disclaimers. **The pattern: LLM-as-a-Judge can't verify itself.**

**Dixken's case shows the same pattern in organizational disclosure verification:**

**What organizations should verify:**

- Was the vulnerability real? (Yes - default password + incrementing IDs)
- Was disclosure responsible? (Yes - CSIRT involved, 30-day embargo, data deleted)
- Were users at risk? (Yes - minors' data exposed via a trivial exploit)
- Should users be notified? (GDPR Articles 33-34: Yes, high risk to individuals)

**What the organization actually "verified":**

- Was disclosure legal? (Threatened criminal prosecution under Maltese law)
- Can we silence the researcher? (Forced NDA disguised as a data deletion confirmation)
- Can we avoid liability? (Blamed users for not changing default passwords)
- Can we avoid notification? (No confirmation that users were notified per GDPR)

**This is organizational verification that can't verify the actual risk (user data exposure), only the organizational risk (reputation damage).**

**Article #188 pattern: AI verifying AI fails 36-53% of the time.**

**Article #194 pattern: Organizations verifying disclosure focus on legal risk (reputation) instead of security risk (user data exposure).**

**Both fail to verify what matters:**

- AI guardrails verify policy language, not actual safety (Article #188)
- Organizations verify legal liability, not actual data exposure (Article #194)

**When verification infrastructure focuses on the wrong risk, it can't verify the right outcome.**

## The Chilling Effect Pattern (Articles #193-194 Together)

Dixken's article identifies the systemic pattern:

> "This isn't an isolated case. The security research community has been dealing with this pattern for decades: find a vulnerability, report it responsibly, get threatened with legal action. It's so common it has a name - the **chilling effect**."
**The chilling effect creates a perverse incentive structure:**

**Responsible disclosure (Dixken's path):**

- Find a vulnerability
- Follow the responsible disclosure framework (CSIRT + organization + embargo)
- Get legal threats, criminal prosecution warnings, forced NDAs
- **Outcome: Punishment for helping defenders**

**Silent exploitation (attacker's path):**

- Find the same vulnerability (trivial: sequential IDs + default password)
- Exploit without disclosure
- Extract data, sell access, ransom the organization
- **Outcome: No legal consequences (attacker stays anonymous)**

**Irresponsible disclosure (black-hat researcher's path):**

- Find a vulnerability
- Sell to exploit brokers, underground markets
- Never contact the organization
- **Outcome: Financial reward, no legal threats from the organization**

**Organizations that respond with lawyers instead of gratitude train researchers to choose ANY path except responsible disclosure.**

**And this directly connects to Article #193's offensive capability deployment:**

**Anthropic's Claude Code Security enables two paths:**

1. **Defensive path:** Organizations use Claude to find vulnerabilities, patch them, notify users
2. **Offensive path:** Attackers use Claude (the general API, not the Code Security product) to find the same vulnerabilities and exploit them before patches exist

**Anthropic's dual-use mitigation:** Access control through pricing tier (Enterprise/Team only for Code Security) plus required "human approval."

**What this doesn't prevent:**

- Malicious enterprise customers finding competitors' vulnerabilities
- Attackers using the general Claude API for vulnerability discovery
- Leaked findings exploited before patches are deployed

**The chilling effect + offensive capability deployment = Attackers gain AI assistance while defenders punish human researchers.**

**This accelerates the attacker advantage.**

## Connection to Article #182: Why Organizations Reject AI Deployment

Article #182 showed 90% of firms report zero AI productivity impact despite $250B of investment.

**The organizational calculus:**

- Uncertain gains (AI productivity claims)
- Certain risks (privacy exposure, cognitive offloading, accountability gaps)
- Vendor trust violations (Articles #179, #187)

**Articles #193-194 add offensive capability and defensive disclosure punishment to that calculus. For AI security tools (Claude Code Security):**

**Uncertain gains:**

- Will Claude find vulnerabilities our security team can't?
- False positive rate unknown (Article #188: 36-53% for guardrails)
- Can we trust AI-verifying-AI for vulnerability findings?

**Certain risks:**

- Vendor trust violations (#179, #187: transparency removed, OAuth paywalled)
- Dual-use capability (helps attackers AND defenders)
- Insufficient accountability (4 of 5 Article #192 components missing)
- Cognitive offloading (the security team loses discovery expertise)

**Compounding risks:**

- If we disclose vulnerabilities Claude finds, will affected organizations threaten US with legal action? (Article #194 pattern)
- If we find vulnerabilities in competitors using Claude, are WE creating a chilling effect?
- How do we verify Claude's findings when AI-verifying-AI shows 36-53% discrepancies?
**Organizational response to Articles #193-194:**

If individual researchers who follow responsible disclosure frameworks get threatened with criminal prosecution (Article #194), and AI vendors deploy offensive capability with insufficient accountability (Article #193), the rational organizational response is:

**"We can't deploy offensive security tools that create legal liability when defensive disclosure already gets punished."**

**Article #182's 90% zero-impact finding extends to offensive security tools:**

- 90% don't deploy productivity tools (uncertain gains < certain risks)
- **95%+ won't deploy offensive tools** (uncertain gains < certain risks + legal risks + chilling effect risks)

**Organizations observe the chilling effect (Article #194) and rationally reject offensive capability deployment (Article #193) because the incentive structure is inverted.**

## The Blame-Shifting Pattern

The most revealing part of Dixken's disclosure is the organization's blame-shifting:

> "We contend that it is the responsibility of users to change their own password (after we allocate a default one)."

**Let me document what this organization did:**

1. **Assigned the same default password to all accounts** (not "appropriate security" per GDPR)
2. **Used incrementing numeric IDs** (predictable, enumerable)
3. **Never enforced a password change on first login** (many users never changed the default)
4. **Created accounts for minors without consent** (an instructor registers a student; the student gets default credentials)
5. **Blamed users** for not securing accounts the organization failed to secure

**This is the organizational accountability gap at its most visible.**

**GDPR Article 24(1):**

> "Taking into account the nature, scope, context and purposes of processing as well as the risks of varying likelihood and severity for the rights and freedoms of natural persons, the controller shall implement appropriate technical and organisational measures to ensure and to be able to demonstrate that processing is performed in accordance with this Regulation."

**The data controller (the organization) must implement "appropriate technical measures." Static default passwords on incrementing IDs exposed to the internet are NOT appropriate.**

**But the organization's response: Blame users, threaten researcher, avoid liability.**
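For contrast, here is a minimal sketch, using only Python's standard library and hypothetical names rather than the organization's actual stack, of what "appropriate technical measures" could look like at provisioning time: opaque identifiers instead of incrementing IDs, a unique random temporary password per account instead of one shared default, and a forced change on first login.

```python
# Illustrative sketch of GDPR-compatible account provisioning; all names here
# are hypothetical, not the portal's actual implementation.
import secrets
import uuid
from dataclasses import dataclass

@dataclass
class ProvisionedAccount:
    user_id: str                       # opaque, non-enumerable identifier
    temp_password: str                 # unique per account, never reused
    must_change_password: bool = True  # enforced on first login

def provision_account() -> ProvisionedAccount:
    return ProvisionedAccount(
        user_id=str(uuid.uuid4()),                # instead of XXXXXX0, XXXXXX1, ...
        temp_password=secrets.token_urlsafe(12),  # instead of one static default
    )

account = provision_account()
# The login handler would check must_change_password and refuse to proceed
# until the user sets their own credential.
print(account.user_id, account.must_change_password)
```

None of these controls is exotic; mainstream identity frameworks support all three out of the box.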
**This connects to Article #191 (MJ Rathbun autonomous agent):**

**MJ Rathbun pattern:**

- Autonomous agent published defamation
- Operator claimed minimal supervision ("five to ten word replies")
- Accountability gap: If autonomous, no control → If directed, anonymous operator → All scenarios = No accountability path

**Dixken's organization pattern:**

- Organization deployed an insecure system
- Blamed users for not changing default passwords
- Threatened the researcher who disclosed the vulnerability
- Accountability gap: GDPR violation → Blame users → Threaten researcher → No organizational accountability

**Both patterns: Blame anyone except the entity responsible for the system's security or behavior.**

**Article #192 (Stripe Minions) showed the opposite:** Stripe's five-component formula includes **organizational oversight**—human review required before merge, engineers maintain accountability for code quality. When Stripe's agents produce bad code, Stripe engineers review it, catch it, and fix it. **Organizational accountability is built into the architecture.**

Dixken's organization had NO organizational accountability built in:

- No password complexity enforcement
- No forced password change on first login
- No rate limiting on login attempts (see the sketch below)
- No MFA
- **And when caught: Blame users, threaten researcher**

**Organizational accountability requires architecture (Article #192's five components), not blame-shifting after harm occurs.**
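Most of those missing controls fit in a few lines. As one example, here is a minimal in-memory lockout sketch, assuming a hypothetical login handler calls these helpers; a production system would back this with a shared store and pair it with MFA.

```python
# Minimal illustrative lockout logic; a real deployment would use a shared
# store (e.g. Redis) so limits hold across servers, and would add MFA.
import time
from collections import defaultdict

MAX_ATTEMPTS = 5       # failed attempts tolerated per window
LOCKOUT_SECONDS = 900  # 15-minute lockout window

_failures: defaultdict[str, list[float]] = defaultdict(list)

def is_locked_out(key: str) -> bool:
    """True if this key (client IP for enumeration defense, account ID for
    password-guessing defense) has exceeded its failure budget."""
    now = time.monotonic()
    _failures[key] = [t for t in _failures[key] if now - t < LOCKOUT_SECONDS]
    return len(_failures[key]) >= MAX_ATTEMPTS

def record_failed_login(key: str) -> None:
    _failures[key].append(time.monotonic())
```

Keyed on the client IP, even this toy version would have made the sequential-ID sweep sketched earlier far slower and noisier than the unthrottled portal allowed.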
## What Should Have Happened (Dixken's List)

From the article, what responsible organizations do:

1. **Acknowledge the report** - they did this
2. **Fix the vulnerability** - they started on this
3. **Thank the researcher** - instead of threatening criminal prosecution
4. **Have a CVD policy** - so researchers know how to report
5. **Notify affected users** - especially parents of minors whose data was exposed
6. **Not try to silence the researcher** - with NDAs disguised as declarations

**Of the six steps, the organization completed 1.5 (acknowledged, started fixing).**

**Missing:**

- Thanks (threatened instead)
- A CVD policy (none published)
- User notification (no confirmation provided)
- Restraint from silencing (instead: a forced NDA with a same-day deadline)

**4.5 of 6 steps failed.**

**This is the same pattern as Article #193.** Article #192's five-component formula for safe autonomous agents:

1. Bounded execution
2. Clear seams
3. Deterministic verification
4. Organizational oversight
5. Cognitive preservation

**Claude Code Security (Article #193):** 1 partial, 4 missing. **Dixken's organization (Article #194):** 1.5 of 6 responsible disclosure steps completed.

**Both fail the majority of required components for accountable security operations.**

## The Sixteen-Article Framework Validation

Let me extend the fifteen-article framework to include today's findings:

- **Article #179** (Dec 2025): Anthropic removes transparency
- **Article #180** (Dec 2025): Economists claim jobs safe → Data shows entry-level -35%
- **Article #181** (Feb 2026): Sonnet 4.6 capability upgrade → Trust violations unaddressed
- **Article #182** (Feb 2026): $250B investment → 90% report zero productivity impact
- **Article #183** (Feb 2026): Microsoft diagram plagiarism → "Continvoucly morged" (8h meme)
- **Article #184** (Feb 2026): Individual productivity → Privacy tradeoffs don't scale
- **Article #185** (Feb 2026): Cognitive debt → "The work is, itself, the point"
- **Article #186** (Feb 2026): Microsoft piracy tutorial → DMCA deletion (3h), infrastructure unchanged
- **Article #187** (Feb 2026): Anthropic bans OAuth → Transparency paywall ($20→$80-$155)
- **Article #188** (Feb 2026): Guardrails show 36-53% discrepancies → Can't verify themselves
- **Article #189** (Feb 2026): AI makes you boring → Offloading cognitive work eliminates original thinking
- **Article #190** (Feb 2026): Exoskeleton model → Amplification with clear seams
- **Article #191** (Feb 2026): MJ Rathbun autonomous agent → Accountability gap
- **Article #192** (Feb 2026): Stripe Minions blueprints → Five-component formula for safe deployment
- **Article #193** (Feb 2026): Anthropic finds 500+ zero-days → Offensive capability without accountability
- **Article #194** (Feb 2026): Dixken discloses responsibly → Gets legal threats instead of thanks

**Complete synthesis across sixteen articles:**

1. **Transparency violations** (#179, #187, #193): Vendors escalate control; offensive capability gets LESS transparency while requiring MORE
2. **Capability improvements** (#181, #193): Don't fix trust; offensive capability escalates accountability requirements
3. **Productivity claims** (#182, #184, #185, #189, #192): Architecture-dependent
4. **IP violations** (#183, #186): Infrastructure unchanged
5. **Verification infrastructure** (#188, #193, #194): Deterministic works, AI-as-a-Judge fails; organizations verify legal risk instead of security risk
6. **Cognitive infrastructure** (#189, #190, #192): Exoskeleton preserves, autonomous offloads
7. **Accountability infrastructure** (#191, #192, #193, #194): Five components required; missing components mean harm (MJ Rathbun), punishment (Dixken), or insufficiency (Anthropic)
8. **Offensive capability** (#193): Escalates trust and accountability requirements
9. **Defensive disclosure punishment** (#194): Chilling effect inverts security incentives

**The new pattern from Articles #193-194 is security incentive inversion:**

- **Offensive capability** (dual-use, helps attackers + defenders) = Deployed with weak accountability, trust violations unaddressed
- **Defensive disclosure** (single-use, helps defenders only) = Punished with legal threats, criminal prosecution warnings, forced NDAs

**Result: Attackers gain AI assistance (Article #193's dual-use capability) while defenders punish human researchers (Article #194's chilling effect).**

**This accelerates organizational rejection (Article #182: 90% report zero impact) because:**

- If we deploy offensive tools, we create legal liability when disclosure is already punished
- If we disclose vulnerabilities we find, affected organizations may threaten US
- If we don't deploy tools and don't do offensive research, we avoid both risks

**Rational organizational response: Deploy nothing, disclose nothing, avoid legal liability.**

**But attackers who operate anonymously face none of these constraints.**

**The chilling effect + offensive capability deployment = Defender disadvantage.**

## Why This Matters for Demogod

This is why Demogod's architecture matters.
**Offensive security tools (Anthropic, vulnerability research):**

- Find exploitable weaknesses
- Dual-use or single-use depending on disclosure path
- Create legal liability (chilling effect for researchers, insufficient accountability for vendors)
- Require strict organizational oversight (Article #192's five components)

**Demogod voice demos:**

- Find no exploitable weaknesses (demo navigation only)
- Single-use (help users learn products)
- Create no legal liability (no security research, no vulnerability disclosure)
- Require minimal oversight (user controls workflow, AI assists navigation)

**The architectural differences:**

**Offensive tools (Claude Code Security, vulnerability research):**

- Execution domain: Unbounded (any system scannable)
- Risk type: Security (find exploits before patches)
- Legal exposure: High (chilling effect for individuals, insufficient accountability for vendors)
- Organizational requirement: Five-component formula (Article #192)

**Demogod:**

- Execution domain: Bounded (demo navigation only)
- Risk type: Usability (navigation assistance)
- Legal exposure: None (no security research, no vulnerability disclosure)
- Organizational requirement: User control + observable actions

**When offensive capability deployment creates legal liability (Article #193: insufficient accountability) and defensive disclosure gets punished (Article #194: chilling effect), bounded-domain tools with no security research component become more valuable.** Legal liability compounds, and security incentives are inverted.

**Demogod's bounded domain + defensive-only capability + no security research = No chilling effect exposure, no offensive capability accountability requirements.**

**Organizations already see zero impact from 90% of AI deployments (Article #182). Adding offensive capability with legal liability makes rejection more rational.**

## The Verdict

Yannick Dixken found a trivial GDPR-violating vulnerability (a default password plus incrementing IDs exposing minors' data), followed the responsible disclosure framework (CSIRT + organization + 30-day embargo), and got threatened with criminal prosecution instead of thanked.

This validates Article #193's offensive capability pattern: **organizations deploy AI with offensive capability and insufficient accountability** (Anthropic: 500+ zero-days, missing 4 of 5 Article #192 components) **while punishing individual researchers who disclose defensively with full accountability** (Dixken: responsible disclosure framework, legal threats received).
**Pattern #9 documented: Defensive disclosure punishment inverts security incentives.**

**What security requires:**

- Offensive capability (dual-use) = Strict accountability, transparency, oversight
- Defensive disclosure (single-use) = Gratitude, collaboration, user notification

**What organizations provide:**

- Offensive capability (dual-use) = Weak accountability, trust violations, insufficient transparency
- Defensive disclosure (single-use) = Legal threats, blame-shifting, forced NDAs

**The chilling effect pattern:**

- Responsible disclosure → Legal threats → Researchers choose other paths (silent exploitation, irresponsible disclosure, selling exploits)
- Offensive capability deployment → Attackers gain AI assistance → Defender disadvantage accelerates

**Article #182's 90% zero-impact finding extends:**

- Organizations already reject productivity tools (uncertain gains < certain risks)
- Offensive tools add legal liability and chilling effect exposure
- Rational response: Deploy nothing, disclose nothing, avoid liability

**But attackers face no chilling effect (they operate anonymously) and gain offensive capability (Article #193's dual-use concern).**

**Sixteen-article framework (#179-194) complete:** Nine systematic patterns document why AI deployment fails organizationally while creating systemic risks (transparency violations, insufficient accountability, verification failures, cognitive offloading, chilling effect, offensive capability without oversight).

**Demogod's advantage: Bounded domain + defensive capability + no security research = No chilling effect exposure, no offensive capability requirements, no legal liability from inverted incentives.**

**When security incentives are inverted (defenders punished, attackers assisted), bounded-domain tools with no offensive capability become relatively more valuable.**

---

**About Demogod**: We build AI-powered demo agents for websites—voice-controlled guidance that preserves user control while automating routine navigation. Bounded domain (demo navigation only), defensive capability (helps users only), no security research (no chilling effect exposure). Learn more at [demogod.me](https://demogod.me).

**Framework Updates**: This article extends the framework validation from fifteen articles to sixteen (#179-194). Pattern #9 documented: Defensive disclosure punishment inverts security incentives. Organizations punish responsible disclosure (Dixken: legal threats, criminal prosecution warnings, forced NDAs) while deploying offensive capability with insufficient accountability (Anthropic, Article #193: 500+ zero-days, missing 4 of 5 Article #192 components). The chilling effect accelerates defender disadvantage. Organizations rationally reject offensive tools (they add legal liability to Article #182's 90% zero-impact calculus). Attackers gain AI assistance while defenders punish human researchers.