AI Should Not Access Personal Data Freely: Local-First and Sandboxing Needed for Safety

Vitalik Buterin Lays Out Vision for Local, Private, Secure AI

Ethereum co-founder Vitalik Buterin just shared his vision for a more private and secure way to use AI locally. He argued that today's AI landscape (including local open-source AI) is far too lax on privacy and security. Examples? OpenClaw agents can modify critical settings without human approval. Malicious inputs can easily take over a user's instance. And some skills ship with hidden malicious instructions.

Vitalik's take? All LLM inference and files should be local-first, and everything should be sandboxed. He's been testing hardware like NVIDIA RTX 5090 laptops and AMD Ryzen AI Max Pro machines, running the Qwen3.5:35B model via llama-server on NixOS. He uses "pi" as an agent framework and bubblewrap sandboxing to lock down the LLM's access. He also built a message-passing daemon that strictly limits the LLM to reading messages and sending them to itself; sending to anyone else requires human approval.
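The daemon's core rule, as described, is simple: the LLM may read messages and message itself freely, but any outbound message to another party is held until a human signs off. A minimal sketch of that routing policy might look like this (all names and structure here are illustrative assumptions, not Buterin's actual implementation):

```python
# Sketch of a message-routing policy in the spirit of the daemon described
# above: self-sends pass automatically; sends to anyone else are held
# until a human explicitly releases them. Names are hypothetical.
from dataclasses import dataclass, field

SELF_ID = "llm-agent"  # identity the local LLM runs as (assumed)

@dataclass
class Daemon:
    pending: list = field(default_factory=list)  # sends awaiting approval
    outbox: list = field(default_factory=list)   # sends actually delivered

    def request_send(self, sender: str, recipient: str, body: str) -> str:
        """Route a send request from the LLM; return the decision."""
        if sender != SELF_ID:
            return "rejected"            # daemon only serves the local agent
        if recipient == SELF_ID:
            self.outbox.append((recipient, body))
            return "delivered"           # self-messages pass freely
        self.pending.append((recipient, body))
        return "pending-approval"        # anyone else waits for a human

    def human_approve(self, index: int) -> None:
        """A human explicitly releases one held message."""
        self.outbox.append(self.pending.pop(index))

d = Daemon()
print(d.request_send("llm-agent", "llm-agent", "note to self"))   # delivered
print(d.request_send("llm-agent", "alice@example.com", "hello"))  # pending-approval
d.human_approve(0)
print(len(d.outbox))  # 2
```

The point of the design is that the LLM never holds the capability to reach the outside world directly; it can only file requests that a human gate decides on.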


His thinking: humans and LLMs fail in different ways, so a double-check mechanism combining both is safer than relying on either alone. He called for layered defenses: zero-knowledge API calls, mixnets, TEE (trusted execution environment) inference, and input sanitization. He even suggested making every paid API a ZK-API. If done right, he said, AI could build a more powerful future for privacy and security.
