On April 2, Vitalik Buterin published an entry on his personal blog detailing his “local and sovereign” artificial intelligence (AI) configuration. In the text, the Ethereum co-founder points out security flaws in the widely used AI agent, focusing on OpenClaw, currently the fastest growing GitHub repository in history.
Buterin claims that much of the AI ecosystem (even the open source part) is “totally ignored” when it comes to privacy and security. Beware of these agents Ability to modify own system prompts without user approvala malicious web page could take control of the agent and command its execution. script external. It also shows that there is plugin Silently sends user data to third-party servers, approximately 15% plugin What he analyzed contained malicious instructions.
Against this backdrop, Buterin is concerned that at a time when privacy was advancing with end-to-end encryption and local software, it is becoming the norm. Feeding data about people’s private lives to AI in the cloud. Their answer is a configuration that runs the language model entirely locally, without the use of a remote server. However, he makes it clear that his proposal is a starting point, not a complete solution.
Anxiety from before
This is not the first time Buterin has spoken out about the risks of AI. As reported by CriptoNoticias, in September 2025, developers warned that AI-based governance was opening the door to manipulation. If the system allocates funds automatically, users may try to jailbreak and trick the system to obtain an unfair advantage.
In March 2026, he said that using AI to speed up programming does not guarantee more secure code. vibe coding I was able to build a version of road map Ethereum in a few weeks 2030However, there are significant errors and incomplete components.
The April 2 publication extends the scope of its analysis to the everyday use of AI agents. The problems Buterin identified are already known to traditional security researchers, and while they remain unresolved, they show that the flaws are not new to the field. This takes smart contract failure into account. Things programmed by AI are already starting to wreak havoc.Such as the Moonwell scandal, where a flawed contract programmed by AI and approved by humans led to a hack worth over $1.7 million.
(TagTranslate)Inteligencia Artificial (AI)

