Major AI companies are not endorsing crypto trading bots, and no frontier lab is training models for the purpose. Yet a growing number of traders are building automated Polymarket bots on top of Anthropic’s Claude and claiming millions of dollars in profits. A viral thread suggests anyone can do it.
But the most vocal winners use strategies that quantitative funds can replicate overnight.
Three assumptions, zero guarantees
This story rests on three assumptions: that big tech companies will eventually build purpose-built trading models; that individual traders can maintain an edge over institutional investors; and that autonomous AI agents can reliably earn money in public markets.
Haseeb Qureshi, managing partner at Dragonfly Capital, disagrees on all three points. In an interview with Bankless, he pointed to liability risks, market structure, and the commoditized nature of AI. Together, these forces make this gold rush much less promising than it seems.
The liability trap
Qureshi says building AI for blockchain tasks is technically easy. An EVM simulator makes it straightforward to test looped lending and token swaps. The models are competent; they simply haven’t been pointed at crypto.
The reason is institutional, not technical. First, crypto carries reputational baggage that AI labs don’t want to touch. “Cryptocurrency is a little disgusting,” Qureshi said.
But the real barrier is liability. Imagine Claude making a bad leveraged trade and losing $2 million, or accidentally sending $10,000 to a burn address. No disclaimer is loud enough to prevent the backlash.
“It’s 100% going to happen,” Qureshi said. “Anyone who has had a bad experience, it’s going to be very widespread.”
He likened managing users’ crypto wallets to injecting unregulated Chinese peptides: the downside dwarfs the upside. When coding advice is wrong, it’s embarrassing. When a wallet is drained, it’s a lawsuit.
Anthropic has already published research at the intersection of AI and blockchains. Its SCONE bench study tested how well frontier models can exploit vulnerabilities in smart contracts. But that is cybersecurity research, not a product roadmap.
The inflection point will come from competition. Training will begin the moment one lab decides its crypto usage volume is too strategic to cede to a rival. Until then, silence.
The Jane Street problem
Even setting big tech aside, the trading narrative faces a structural barrier. Strategies built on publicly available models are, by definition, available to everyone, including institutional quant firms.
Qureshi’s point is simple. If a basic Claude bot can find profitable trades on Polymarket, Jane Street can run 5,000 of them at once. The firm has faster infrastructure and deeper capital, so any profitable edge gets arbitraged to zero before a retail trader even logs in. “If it’s a raw model, Jane Street is doing it right now,” he said.
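To see why the edge evaporates, it helps to look at the decision logic such a bot would need. The sketch below is hypothetical (the function names, fee, and threshold are illustrative assumptions, not Polymarket’s actual API or fee schedule): the entire strategy reduces to whether the model’s probability estimate beats the market-implied price by more than costs, and every bot using the same base model produces the same estimate.

```python
# Hypothetical sketch of a prediction-market bot's core decision rule.
# All names and numbers are illustrative assumptions, not a real API.

def edge(model_prob: float, market_price: float, fee: float = 0.02) -> float:
    """Expected profit per $1 YES share: the model's probability minus the
    market-implied probability, less an assumed round-trip fee."""
    return model_prob - market_price - fee

def should_trade(model_prob: float, market_price: float,
                 min_edge: float = 0.05) -> bool:
    """Trade only when the estimated edge clears a safety threshold."""
    return edge(model_prob, market_price) >= min_edge

# Example: the model estimates 62% while the market prices YES at 50 cents.
print(should_trade(0.62, 0.50))  # True: 0.62 - 0.50 - 0.02 = 0.10 >= 0.05
print(should_trade(0.53, 0.50))  # False: an edge of 0.01 is below threshold
```

Everything hinges on `model_prob` being better calibrated than the market, which is exactly the signal a raw base model, identical for every user, does not have.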
The only way a retail bot wins is with a signal the base model doesn’t already contain. A Claude instance pointed at an API has none.
Why “go make money” doesn’t work
Qureshi expanded the discussion beyond trading to the broader fantasy of autonomous AI agents earning income for themselves.
The first option is employment: an AI agent selling its labor. It doesn’t work economically. There are millions of identical clone instances; none has unique skills or positional advantages. Hiring an AI agent is just buying Anthropic compute with an extra step, and no rational buyer would pay more than Anthropic’s own API price for the same output.
The second option is starting a business. This seems more promising, but Qureshi argued it fails for a subtler reason. All AI agents draw ideas from the same pool of training data, so they all converge on the same generic plans. Ask 10 Claude instances for a startup idea and you’ll get 10 variations of the same pitch.
Qureshi said true entrepreneurship requires what Peter Thiel called “earned secrets”: insights born of specific experiences at specific times and places. Bankless built its brand because its founders had a unique combination of crypto expertise, storytelling, and community instinct, and they acted at exactly the right moment. A freshly spun-up Claude has no life experience to draw from, and no earned secrets.
This leads to an unpleasant conclusion. AI agents can’t win at trading, can’t be meaningfully employed, and can’t generate original business ideas. So what is their real advantage over humans? Qureshi’s answer was deliberately provocative: crime. It’s not a future he welcomes, but once you strip away every institutional guardrail, that’s where the logic goes.
What this means
The traders building Polymarket bots are real, and some of the gains may be real too. But institutional quant firms will arbitrage away any base-model alpha. Big tech won’t train on crypto until competition forces it to. And autonomous agent economies may find their first viable business models beyond the reach of law enforcement.
For the average trader reading headlines about AI bots minting millions, the implicit lesson is an old one: the house always wins. In AI trading, the house runs 5,000 bots with sub-millisecond latency.
The post Why experts don’t recommend AI trading bots first appeared on BeInCrypto.