The current state of quantum computing and what it will take to threaten Bitcoin
Although quantum computing has made significant advances in the past 18 months, the field is still transitioning from noisy hardware to early fault tolerance.
The key shift is from raw physical qubit counts to logical qubits, gate fidelity, runtime, and error correction. This matters for Bitcoin because attack-risk estimates are driven by logical qubits and fault-tolerant operation counts rather than by aggregate physical hardware.
What is the real state of progress in quantum computing?
Progress is being made on three fronts: below-threshold error correction, small-scale demonstrations of logical qubits, and deeper circuits at lower noise.
In late 2024, Google’s Willow chip demonstrated below-threshold error correction, with error rates decreasing as the encoded system scaled up. IBM says its current systems can run certain circuits with more than 5,000 two-qubit gates, and has announced a roadmap to a 200-logical-qubit fault-tolerant system by 2029.
Quantinuum reports 48 error-corrected and 64 error-detected logical qubits from 98 physical qubits, plus 50 error-detected logical qubits on Helios with above-breakeven performance. Microsoft and Atom Computing have reported computations using 24 entangled logical qubits and 28 logical qubits on neutral-atom hardware.
The field still lacks large-scale fault-tolerant machines, which is one reason DARPA’s Quantum Benchmarking Initiative exists.
The goal is a quantum computer whose computational value exceeds cost by 2033, and the agency is still validating competing architectures rather than certifying that any team has already reached that point.
What can quantum computers do today?
Today’s systems can reliably do four things: run benchmark problems beyond classical brute-force simulation, such as Google’s recent work on random circuit sampling and Quantum Echoes; perform limited, specialized simulations in physics and chemistry, often in hybrid workflows with classical high-performance computing; demonstrate logical qubits and fault-tolerant subroutines at small scale; and serve as testbeds for error correction, decoding, and control systems.
What they cannot do today is the part that matters for Bitcoin.
No public system comes close to the logical qubit counts, fault-tolerant gate budgets, or sustained execution times required for cryptographically relevant attacks against secp256k1. Google’s Willow contains 105 physical qubits.
Major public demonstrations of logical qubits remain in the dozens, not the thousands. Recent estimates by Google researchers and co-authors put a Bitcoin-relevant attack at roughly 1,200 to 1,450 logical qubits and tens of millions of Toffoli gates. The gap between current machines and a cryptographically relevant system is enormous.
What does it take from here to create a quantum computer that can crack Bitcoin on some level?
The important threshold is a cryptographically relevant quantum computer: one that can run Shor’s algorithm for the elliptic curve discrete logarithm problem on secp256k1.
According to a March 2026 Google paper, it is possible in principle to solve ECDLP-256 with fewer than 1,200 logical qubits and 90 million Toffoli gates, or with fewer than 1,450 logical qubits and 70 million Toffoli gates.
Assuming superconducting hardware with physical error rates around 10⁻³ and planar connectivity, the authors estimate that such an attack could run in minutes on fewer than 500,000 physical qubits.
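To see roughly where a number like 500,000 comes from, here is a toy back-of-envelope under an illustrative assumption (not taken from the paper): a rotated surface code uses about 2d² physical qubits per logical qubit at code distance d.

```python
# Toy surface-code overhead estimate (illustrative only; a real
# resource estimate also budgets magic-state factories and routing).
# Assumption: ~2*d^2 physical qubits per logical qubit at distance d.

def physical_qubits(logical_qubits: int, distance: int) -> int:
    """Rough physical-qubit count, ignoring distillation overhead."""
    return logical_qubits * 2 * distance**2

for d in (9, 11, 13):
    print(f"d={d}: ~{physical_qubits(1450, d):,} physical qubits")
```

At distance 13 this gives about 490,000 physical qubits for 1,450 logical qubits, in the same range as the sub-500,000 figure, which shows how the hundreds-to-one overhead of error correction dominates the hardware requirement.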
That creates engineering problems. The path forward is not just a linear climb from about 100 physical qubits to 500,000 qubits. The more difficult challenge is to build large numbers of stable logical qubits, sustain tens of millions of fault-tolerant operations, achieve fast cycle times, and integrate it all with real-time decoding, cryogenic or photonic interconnections, classical control, and manufacturable modules.
The paper argues that fast-clock systems such as superconducting and photonic platforms are more relevant to on-spend attacks than slower-clock systems such as ion traps and neutral atoms, because their execution time can fit deterministically within the mempool window.
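The clock-speed point can be made with simple arithmetic. The cycle times and the cycles-per-Toffoli figure below are illustrative assumptions, not measured values; only the 70 million Toffoli count comes from the text.

```python
# Illustrative only: why clock speed determines on-spend feasibility.
# Runtime ~ Toffoli count x logical cycles per Toffoli x cycle time.

TOFFOLI_GATES = 70_000_000     # the paper's lower gate-count variant
CYCLES_PER_TOFFOLI = 10        # assumed logical-cycle cost per Toffoli

platform_cycle_time = {
    "superconducting": 1e-6,   # assumed ~1 microsecond logical cycle
    "trapped ion": 1e-3,       # assumed ~1 millisecond logical cycle
}

for name, cycle_s in platform_cycle_time.items():
    seconds = TOFFOLI_GATES * CYCLES_PER_TOFFOLI * cycle_s
    print(f"{name}: ~{seconds / 60:.0f} minutes")
```

Under these assumed numbers the fast-clock machine finishes in minutes, inside a plausible mempool window, while the slow-clock machine needs more than a week, by which time the transaction has long since confirmed.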
In the case of Bitcoin, “crack at some level” does not mean destroying the network all at once. The initial risks are recovering a private key from an exposed public key, or attacking a transaction while its public key is visible in the mempool.
In its research disclosure on crypto vulnerabilities, Google states that blockchains relying on ECDLP-256 need a post-quantum migration path, and mentions short-term mitigations such as avoiding exposing or reusing vulnerable wallet addresses.
Are Google’s recent predictions for 2029 really realistic?
This question requires a distinction. In Google’s own words, 2029 is a post-quantum migration target, not a delivery date for a Bitcoin-breaking machine.
On March 25, 2026, Google announced a timeline for post-quantum cryptography transition to 2029, citing advances in hardware, error correction, and resource estimation.
The company said in a March 31, 2026 research post that future quantum computers could break elliptic curve cryptography used in cryptocurrencies with fewer qubits and gates than previously estimated. These are related claims, but they are not the same.
2029 looks like an aggressive but defensible transition deadline, and protection is possible in that window. Public evidence remains too thin to support confident predictions that Bitcoin will actually be crackable by then.
Google has significantly reduced its attack estimates, and IBM has published a 2029 roadmap to 200 logical qubits and 100 million gates. Even so, IBM’s 2029 target falls well short of Google’s latest logical-qubit estimate for attacks on secp256k1.
DARPA’s utility-scale benchmark horizon extends to 2033, a more conservative reference point. Current evidence suggests 2029 is better read as a preparation date than as a firm Q-Day.
How much will it cost to get there?
No one has released a final public budget for a quantum computer to crack Bitcoin. The most powerful social signals come from funding, government policy, and facility construction. PsiQuantum has raised $1 billion for a utility-scale fault-tolerant system in 2025 and secured a separate A$940 million public package in Australia for construction in Brisbane.
Quantinuum raised approximately $300 million in early 2024 and later announced further funding rounds in 2025. Illinois has also reportedly finalized a $500 million quantum park plan and $200 million in tax incentives centered around the Chicago site associated with PsiQuantum.
A reasonable inference is that a first-generation cryptographically relevant system will cost in the low billions of dollars, potentially much more once you include the full campus, specialized manufacturing, packaging, cryogenics, classical computing, networking, control electronics, and multi-year labor.
Public and private capital are already converging on that scale. This is now an infrastructure-scale build.
What milestones should we focus on from here?
The first milestone is the transition from tens to hundreds of high-fidelity logical qubits that stay stable long enough to run meaningful programs.
The next threshold after that is whether those logical qubits can support millions to tens of millions of fault-tolerant gates with real-time decoding and manufacturable scaling. IBM’s public roadmap has Starling in 2029 with 200 logical qubits and 100 million gates, followed by Blue Jay in 2033 with 2,000 logical qubits and 1 billion gates.
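Those public milestones can be set directly against the attack estimate quoted earlier. Treating the roadmap’s total gate budget as comparable to a Toffoli count is a simplification (Toffoli gates are far costlier than generic gates), so this is a rough sanity check, not a forecast:

```python
# Rough comparison of IBM's published milestones with the attack
# estimate in the text (smallest variant: 1,200 logical qubits,
# 70 million Toffoli gates). Gate budgets treated as comparable.

ATTACK_REQ = {"logical_qubits": 1200, "gates": 70_000_000}

roadmap = {
    "IBM Starling (2029)": {"logical_qubits": 200, "gates": 100_000_000},
    "IBM Blue Jay (2033)": {"logical_qubits": 2000, "gates": 1_000_000_000},
}

for name, spec in roadmap.items():
    meets = all(spec[k] >= ATTACK_REQ[k] for k in ATTACK_REQ)
    print(f"{name}: meets attack-scale requirement = {meets}")
```

On these numbers the 2029 Starling milestone is short on logical qubits by a factor of six, while the 2033 Blue Jay target clears both thresholds, which is why the interval between those two dates is the window to watch.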
The second milestone is architecture validation. Google’s attack-resource paper points to fast-clock architectures as the systems most relevant to on-spend crypto attacks, which puts extra weight on superconducting and photonic progress when assessing Bitcoin’s near-term risk.
The third milestone is independent validation. DARPA’s QBI and US2QC programs matter because they force companies to convert roadmaps into auditable engineering plans. Microsoft and PsiQuantum have already moved into the final validation and co-design phase of US2QC, while IBM, Quantinuum, Atom Computing, IonQ, QuEra, Xanadu, and others remain in Stage B of QBI.
If one of these programs concludes that a design can be built as intended, that will carry more weight than a standard corporate roadmap.
The fourth milestone is the cryptographic response. NIST finalized its first three post-quantum cryptography standards in August 2024 and says organizations should start transitioning now, with vulnerable algorithms expected to be deprecated and phased out by 2035. A trusted migration path would significantly change the risk profile for Bitcoin and the broader crypto stack.
Who is most likely to create a quantum computer first?
The answer depends on your definition of “first.” If the benchmark is the first public fault-tolerant system with meaningful logical qubit scale, then IBM and Quantinuum currently have the strongest public claims.
IBM has the clearest long-term public roadmap toward hundreds and eventually thousands of logical qubits. Quantinuum has some of the strongest publicly available data on trapped-ion logical qubits and break-even performance.
If this benchmark is the first independently validated route to business scale, Microsoft and PsiQuantum stand out because they have already been moved by DARPA into the final validation and co-design phase of US2QC. While this does not settle the race, it does indicate that in the government’s serious review process, these paths are considered mature enough for deeper scrutiny at a systems level.
If the benchmark is the first system plausibly relevant to Bitcoin, then fast-clock platforms deserve the most attention. Current published evidence suggests superconducting and photonic stacks are better suited than trapped-ion or neutral-atom systems for an initial on-spend attack capability.
That keeps the paths pursued by Google, IBM, and PsiQuantum, plus potentially Microsoft’s topological approach, in the most visible group, while leaving room for surprises from other DARPA-backed architectures.
What would it take for malicious parties to use such a machine once top research institutions have proven its capabilities?
Barriers will remain extremely high. A malicious attacker would need access to a facility-scale system, specialized supply chains, advanced control electronics, packaging, cryogenics or large-scale photonic infrastructure, error-correction software, compilers, and teams spanning quantum hardware, error correction, systems engineering, and cryptography.
The cost profile will probably still be in the billion dollar range, and the engineering footprint will be hard to hide. This means that the first credible threats are directed at exploiting the capabilities of states, state-sponsored programs, or existing top-tier laboratories rather than independent criminal organizations.
There is also a second tier of difficulty. Even after top labs demonstrate theoretical capability, turning it into reliable exploitation requires stable execution times, sufficient machine availability, targeted intelligence, and a way to operationalize the results before defenders can complete the transition.
In its responsible disclosure, Google withheld details of the attack and used zero-knowledge techniques to verify its claims without disclosing its operational strategy. This increases the barrier to reckless replication.
The clearest historical analogue for the gap between research-level codebreaking and attacker capability is DES.
In 1977, Whitfield Diffie and Martin Hellman argued that a machine capable of brute-forcing DES in about a day would cost roughly $20 million, putting the capability within reach of states.
By 1998, the Electronic Frontier Foundation had built Deep Crack for under $250,000; it recovered a DES key in 56 hours.
By 2006, the FPGA-based COPACOBANA machine had pushed its cost down to less than $10,000, marking the transition of capabilities once discussed at national laboratory scale into the realm of commercially available specialized hardware.
The pattern matters more than the specific cipher. Codebreaking capability often appears first as an elite-budget possibility, then as a public proof, and only later as something that can be assembled from accessible components at far lower cost.
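The three DES cost points quoted above already trace that curve. A quick calculation of the implied decline rate (using only the figures in the text):

```python
# Implied cost-decline rate from the DES data points in the text.
import math

cost_by_year = {1977: 20_000_000, 1998: 250_000, 2006: 10_000}

years = sorted(cost_by_year)
for start, end in zip(years, years[1:]):
    factor = cost_by_year[start] / cost_by_year[end]
    halvings_per_year = math.log2(factor) / (end - start)
    print(f"{start}->{end}: cost fell {factor:.0f}x "
          f"(~{halvings_per_year:.2f} halvings/year)")
```

On these numbers the cost of breaking DES halved roughly every two to three years over three decades. That steady halving, rather than any single headline machine, is the shape of curve worth watching for quantum hardware.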
In the case of Bitcoin, the key question is not only when top research institutions can demonstrate cryptographically relevant quantum attacks, but also how long it will take for that capability to move down the cost curve to something smaller-scale attackers can realistically access and operate.
So even if Google develops a quantum machine capable of cracking Bitcoin in 2029, the DES timeline suggests that capability may not be accessible to malicious parties for another 30 years or more.
Conclusion
Bitcoin is not subject to quantum attack today, but the threat has moved from the science-fiction category to the planning category.
Google’s new estimates reduce the required resources enough to sharpen the central question: whether Bitcoin and the broader crypto stack can migrate before fast-clock, fault-tolerant systems cross the threshold of cryptographically relevant attacks.
Even if top research institutions reach that threshold sooner than expected, access is likely to be the limiting factor for malicious actors, because the first cryptographically relevant systems will still be facility-scale machines with multibillion-dollar economics rather than tools that can be secretly purchased, rented, or assembled at criminal scale.
Yes, you need a Bitcoin migration plan. Yes, it’s worth starting sooner rather than later. But no, your wallet won’t be cracked anytime soon and your BTC won’t be stolen by a quantum computer. To be honest, it probably won’t happen in our lifetime.
Once frontier labs have a quantum computer capable of breaking Bitcoin, prices will likely react sharply on sentiment if the migration is not complete, but it will still be decades before on-chain funds are truly at risk.