
# Claude Code Users Run It in VMs Because AI Can Delete Your Files—Voice AI for Demos Proves Why Read-Only Beats Write-Access

*Hacker News #1 (96 points, 71 comments, 2hr): Developer sandboxes Claude Code in a VM after seeing Reddit posts about deleted databases, entire repos, and home directories. Users choose "let it do anything BUT keep it isolated" over "ask permission for everything." This is the AI sandboxing pattern—and it applies to demo guidance too.*

---

## The Dangerous Flag That Makes AI Useful

Claude Code has a flag: `--dangerously-skip-permissions`

Without it, Claude stops every 30 seconds to ask permission:

- "May I install this package?"
- "Should I modify this config?"
- "Can I delete these files?"

The developer writes: "I would constantly check on it to see if it was asking for yet another permission, which felt like it was missing the point of having an agent do stuff."

With the flag enabled, Claude just... does it. No interruptions. No babysitting. You tell it what you want, and it executes autonomously.

**The problem:** It also does this autonomously:

- [Deleted database](https://www.reddit.com/r/ClaudeAI/comments/1oceaqz/claude_deleted_my_database/)
- [Deleted entire repository](https://www.reddit.com/r/ClaudeAI/comments/1m21go1/claude_deleted_my_whole_repository/)
- [Deleted entire home directory](https://www.reddit.com/r/ClaudeAI/comments/1pgxckk/claude_cli_deleted_my_entire_home_directory_wiped/)

Users want the autonomy. They just don't want their files deleted.

## The Three Sandboxing Approaches (And Why Most Don't Work)

### Approach #1: Docker Container

**Instinct:** "Throw it in a Docker container. Containers are for isolation, right?"

**Problem:** If Claude needs to build Docker images or run containers, you need Docker-in-Docker, which requires `--privileged` mode. From the article: "That means trading 'Claude might mess up my filesystem' for 'Claude has root-level access to my container runtime.' Not great."
Plus: nested networking weirdness, volume-mounting permission nightmares, and the general feeling that you're fighting the tool instead of using it.

### Approach #2: Manual Permission ACLs

**Instinct:** Fine-grained access control. Let Claude do some things but block others.

**Problem:** The developer tried [sandbox-runtime](https://github.com/anthropic-experimental/sandbox-runtime) and rejected it: "More of an ACL approach. I want Claude to be able to do anything, because it doesn't have access to anything except the code."

ACLs create friction. You're back to permission popups, just at a different layer. The goal isn't to restrict what Claude can do—it's to isolate WHERE Claude can do it.

### Approach #3: Virtual Machine Isolation

**The solution:** Run Claude Code in a Vagrant VM. You get:

- Full VM isolation (no shared kernel with the host)
- Easy to nuke and rebuild (`vagrant destroy && vagrant up`)
- Shared folders that make it feel local
- No Docker-in-Docker nonsense

The developer's setup:

```bash
cd ~/my-project
vagrant up
vagrant ssh
claude-code --dangerously-skip-permissions
# Claude can do ANYTHING... inside the VM
```

**The key insight:** Give the AI write access to everything in an isolated environment where "everything" is safely contained.

## What "Supercharged Claude" Can Do

With VM isolation + sudo access + the dangerous flag enabled, Claude autonomously:

- Manually started the webapp API and inspected it with curl requests
- Installed a browser, manually inspected the app, built end-to-end tests
- Set up a Postgres database, ran test SQL, tested migrations
- Built and ran Docker images

From the article: "All things I'd be nervous about on my host machine, especially with the 'just do it' flag enabled."

**The pattern:** Maximum autonomy within isolated boundaries.
## The Threat Model: Preventing Accidents, Not Attacks

The developer is explicit about what VM isolation protects against:

**Protected:**

- Accidental filesystem damage
- Aggressive package installations
- Configuration changes you didn't catch
- General "oops, I didn't mean for Claude to do that"

**NOT protected:**

- Deleting the actual project (file sync is two-way)
- VM escape vulnerabilities (rare, require deliberate exploitation)
- Network-level attacks from the VM
- Data exfiltration (the VM has internet access)

The developer's framing: "I don't trust myself to always catch what the agent is doing when I'm in the zone and just want stuff to work. This setup is about **preventing accidents, not sophisticated attacks**."

This is the right threat model for AI tools. The risk isn't malicious AI. It's autonomous AI with good intentions executing poorly conceived plans.

## Why Permission Prompts Don't Scale

The article reveals why permission-based AI doesn't work:

**Before sandboxing:**

- Claude asks for permission constantly
- Developer must babysit every action
- Flow state interrupted every 30 seconds
- "Missing the point of having an agent do stuff"

**After sandboxing:**

- Claude has full autonomy
- Developer stays in flow
- No interruptions
- Claude makes progress without supervision

**The lesson:** If your safety model requires constant human approval, your AI isn't autonomous—it's just an awkward chatbot.
## The Three Patterns for AI Sandboxing

### Pattern #1: Write Access Inside an Isolated Environment

**Claude Code approach:**

- Give the AI full write access to the filesystem
- Run it inside a VM where filesystem damage is contained
- Result: Autonomous AI + safe host system

**When this works:** The AI needs to modify state (install packages, write files, run commands)

### Pattern #2: Read-Only Access to the Real Environment

**Voice AI approach:**

- Give the AI read access to the actual page DOM
- No write access to page content (read-only)
- Result: Accurate guidance + impossible to damage the page

**When this works:** The AI needs information but shouldn't modify anything

### Pattern #3: Hybrid Access With Rollback

**Version control approach:**

- Give the AI write access
- Track all changes in git
- Roll back on errors
- Result: Autonomous AI + undo button

**When this works:** The AI modifies files but changes are reversible

## Why Voice AI Must Be Read-Only

The Claude Code VM sandboxing story proves why demo guidance must read the DOM directly without modification:

### Claude Code Pattern = Chatbot Pattern

**Claude Code without sandboxing:**

- Has write access to your filesystem
- Can delete databases, repos, home directories
- Requires permission prompts to be safe
- Permission prompts break flow

**Chatbot demos without sandboxing:**

- Have write access to conversation context
- Can hallucinate features, prices, workflows
- Require human verification to be accurate
- Verification breaks user flow

### VM Isolation Pattern = DOM Read-Only Pattern

**Claude Code with VM:**

- Full autonomy inside an isolated environment
- Can't damage the host system
- Developer stays in flow
- VM is disposable (nuke and rebuild)

**Voice AI with DOM reading:**

- Full access to page structure
- Can't modify page content
- User stays in flow
- DOM is the source of truth (no regeneration)

## The Parallel: Write Access Requires Isolation

Both systems face the same constraint: **If your AI has write access, it must run in isolation.**
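The read-only pattern can be made concrete with a short sketch. Here Python's stdlib `html.parser` stands in for the browser's live DOM, and the `PageFacts` and `describe` names are illustrative assumptions (not Demogod's actual API): the walker can report what the page says, but it exposes no operation that mutates the markup.

```python
from html.parser import HTMLParser


class PageFacts(HTMLParser):
    """Read-only walk of page markup: collects heading text as it streams by.

    Note there is no write path here at all -- the parser consumes a string
    and accumulates facts; the source document is never touched.
    """

    HEADING_TAGS = ("h1", "h2", "h3")

    def __init__(self) -> None:
        super().__init__()
        self.headings: list[str] = []
        self._in_heading = False

    def handle_starttag(self, tag, attrs):
        if tag in self.HEADING_TAGS:
            self._in_heading = True

    def handle_endtag(self, tag):
        if tag in self.HEADING_TAGS:
            self._in_heading = False

    def handle_data(self, data):
        if self._in_heading and data.strip():
            self.headings.append(data.strip())


def describe(html: str) -> list[str]:
    """Return the page's heading structure -- facts an agent can narrate."""
    parser = PageFacts()
    parser.feed(html)
    return parser.headings
```

Because the guidance layer only ever calls read paths like this, "sandboxing" needs no VM: there is simply no API surface through which the agent could damage the page.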
**Claude Code approach:**

- Write access → Must run in VM → Can't damage host
- Trade host access for filesystem freedom

**Voice AI approach:**

- Read access → Can run on real page → Can't hallucinate content
- Trade content generation for DOM accuracy

## Why "Just Ask Permission" Doesn't Work

The article reveals the fundamental UX problem with permission-based AI:

**Developer quote:** "At some point I realized that rather than do something else until it finishes, I would constantly check on it to see if it was asking for yet another permission, which felt like it was missing the point of having an agent do stuff."

This applies to demos:

**Users don't want:**

- Chatbot asks "What feature do you want to learn about?"
- User answers
- Chatbot asks "Should I show you the pricing page?"
- User confirms
- Chatbot asks "Want to see how setup works?"
- User confirms again

**Users want:**

- Voice AI: "Let me show you around"
- Voice AI: [reads page structure, provides contextual guidance]
- User: [explores at their own pace]
- Voice AI: [responds to what the user is actually looking at]

The permission model breaks flow. The read-only model preserves it.

## The Three Lessons for Demo Guidance

### Lesson #1: Autonomy Requires Safety Boundaries

Claude Code users don't choose between "full autonomy" and "safe operations." They choose: "Full autonomy inside safe boundaries (VM isolation)."

Demo guidance doesn't choose between "autonomous AI generation" and "accurate information." It chooses: "Full autonomy within safe boundaries (read-only DOM access)."

### Lesson #2: Write Access Is Optional

The developer's realization: "I want Claude to be able to do anything, because it doesn't have access to anything except the code."

Claude doesn't need write access to the host filesystem. It needs:

- Read access to understand context
- Write access to an isolated environment
- Ability to execute without approval

Voice AI doesn't need write access to page content.
It needs:

- Read access to DOM structure
- Ability to describe what exists
- No permission prompts for reading

### Lesson #3: Sandboxing Enables Autonomy

**Before VM:** Claude asks permission → Developer babysits → Flow broken

**After VM:** Claude executes autonomously → Developer trusts isolation → Flow maintained

**Before DOM reading:** Chatbot hallucinates → User verifies → Trust broken

**After DOM reading:** Voice AI reads reality → User trusts accuracy → Flow maintained

The sandboxing enables the autonomy. Without safe boundaries, you can't have autonomous operation.

## Why Read-Only Is The Ultimate Sandbox

VM isolation requires:

- Virtual machine setup
- Resource overhead
- Network configuration
- File sync complexity

DOM read-only requires:

- Parsing page structure (browsers do this natively)
- Zero resource overhead
- No hallucination risk (reading actual content)
- No modification risk (can't write to the DOM)

**Read-only is architectural sandboxing.** You don't need to prevent the AI from doing damage. The AI literally cannot do damage because it has no write access.

## The Verdict: Users Choose Autonomy + Isolation Over Approval Prompts

The HN article proves users prefer:

- **Full autonomy** inside safe boundaries
- Over **constant permission prompts** for every action

The developer's conclusion: "If you're using Claude Code with the dangerous flag, I'd recommend something like this. Even if you're careful about what you approve, it only takes one moment to mess things up."

The lesson for demo guidance:

- Give users autonomous exploration (no "What do you want to see?" prompts)
- Within safe boundaries (read-only DOM access)
- Not endless verification ("Is this feature accurate?" for every AI response)

## The Alternative: Permission-Based AI That Breaks Flow

Imagine if the VM approach didn't exist and Claude Code users had two options:

**Option A: Safe but useless**

- Claude asks permission for every command
- Developer must review every action
- Flow broken every 30 seconds
- "Missing the point of having an agent"

**Option B: Useful but dangerous**

- Claude has full filesystem access
- No permission prompts
- Risk: Deleted databases, repos, home directories
- "I like my filesystem intact"

Neither option is good. The VM provides Option C: safe AND useful.

Demo guidance has the same choice:

**Option A: Safe but useless (permission chatbots)**

- Chatbot asks what the user wants to see
- User must specify every request
- Flow broken by constant questions
- "Missing the point of having guidance"

**Option B: Useful but dangerous (autonomous LLM generation)**

- Chatbot generates answers without asking
- No verification prompts
- Risk: Hallucinated features, wrong prices, incorrect workflows
- "I like my information accurate"

Voice AI provides Option C: safe AND useful (DOM read-only).

## The Pattern: Isolation Enables Autonomy

**Claude Code lesson:** You can give AI dangerous capabilities IF you isolate the blast radius.

**Voice AI lesson:** You can give AI full page access IF you make it read-only.

**The principle:** Autonomy requires boundaries. The tighter the boundaries, the more freedom you can grant within them.

## The Three Reasons Read-Only Beats Write-Access for Demos

### Reason #1: Write Access Requires Verification

If the AI can modify content, users must verify every modification is correct. If the AI can only read content, users trust it's reporting reality.
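Pattern #3 from earlier (write access with rollback) softens this verification burden without removing it. A minimal sketch, assuming a hypothetical `apply_with_rollback` helper rather than any real tool's API: snapshot a file before applying an AI-proposed edit, and restore the snapshot if the edit blows up.

```python
from pathlib import Path
from typing import Callable


def apply_with_rollback(path: Path, transform: Callable[[str], str]) -> bool:
    """Apply an AI-proposed edit to a file; restore the snapshot on any error.

    Returns True if the edit was applied, False if it was rolled back.
    (In practice the snapshot would be a git commit; a string copy keeps
    the sketch self-contained.)
    """
    snapshot = path.read_text()            # the "undo button"
    try:
        path.write_text(transform(snapshot))
        return True
    except Exception:
        path.write_text(snapshot)          # rollback: file is exactly as before
        return False
```

Note what rollback does not buy you: a human still has to notice that a *successful* edit was wrong. Read-only access avoids that class of review entirely, because there is no edit to check.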
### Reason #2: Isolation Adds Complexity

VM isolation requires:

- Virtual machine infrastructure
- Shared folder configuration
- Resource allocation
- Nuke-and-rebuild workflows

DOM read-only requires:

- The browser's native DOM parser
- Zero additional infrastructure
- No state to corrupt
- No cleanup needed

### Reason #3: Write Access Limits Where AI Can Run

Claude Code with write access: must run in a VM (can't touch the host filesystem).

Voice AI with read-only: can run on production pages (can't modify content).

**The constraint:** Write access forces isolation. Read-only enables production deployment.

---

*Demogod's voice AI reads your site's DOM directly—no write access, no modifications, no hallucinations. Like running Claude Code in a VM, but the "sandbox" is architectural read-only access. One line of code. Zero filesystem risk. [Try it on your site](https://demogod.me).*