Vitalik Buterin, co-founder of Ethereum, argues that using artificial intelligence (AI) for governance is a “bad idea.” In a Saturday X post, Buterin wrote:
“When you use AI to allocate funds to donations, people will put as many places as possible for jailbreak and “every money for all money.” ”
Why AI governance is flawed
Buterin’s post was an answer to Eito Miyamura, co-founder and CEO of EdisonWatch, an AI data governance plattorm that revealed a fatal flaw in ChatGpt. In a post on Friday, Miyamura wrote that he added full support for the MCP (Model Context Protocol) tool to CHATGPT, making AI agents more susceptible to exploitation.
With the update, which came into effect on Wednesday, ChatGpt can connect and read data from several apps such as Gmail, Calendar, and Notion.
Miyamura said that the update allows you to “remove all personal information” with just your email address. Miyamura explained that in three simple steps, Discreants could potentially access the data.
First, the attacker sends a malicious calendar invitation with a prison escape prompt to the victim of interest. A jailbreak prompt refers to code that allows an attacker to remove restrictions and gain administrative access.
Miyamura pointed out that the victims do not need to accept the attacker’s malicious invitation.
The second step is to wait for the intended victim to prepare for the day by asking for the help of Chatgup. Finally, if ChatGpt reads a broken calendar invitation in prison, it will be breached. Attackers can completely hijack AI tools, search for victims’ private emails, and send data to attackers’ emails.
Butaline alternatives
Buterin proposes using an information finance approach to AI governance. The information finance approach consists of an open market where a variety of developers can contribute to the model. There is a spot checking mechanism for such models in the market, which could be triggered by anyone and evaluated by human ju umpires, Buterin writes.
In another post, Buterin explained that individual human ju apprentices are supported by large-scale language models (LLM).
According to Buterin, this type of “engine design” approach is “inherently robust.” This is because it provides real-time model diversity and creates incentives for both model developers and external speculators to police and fix the issue.
While many are excited about the prospect of having an AI as governor, Buterin warned:
“I think doing this is dangerous for both traditional AI safety reasons and short-term “this creates a big, less valuable splat.” ”
It is mentioned in this article
(TagstoTranslate)Ethereum(T)AI(T)Crime(T)Feature(T)Governance(T)Hacking(T)People