ECC challenges for quantum computers

Hey folks! Pauli Group here!

We released a set of ECC challenges to benchmark quantum computers on the path to breaking Bitcoin: [2508.14011] Brace for impact: ECDLP challenges for quantum cryptanalysis

There are quite a few things in there.
First, there are challenges from 6 bits to 256 bits, all compatible with Shor’s algorithm.
The 256-bit challenge is literally the (unspendable) public key of Bitcoin’s genesis block, which carries very high symbolic value.
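To get a feel for what the smallest rungs of the ladder look like, here is a minimal sketch of solving a tiny ECDLP instance by brute force. The curve, base point, and secret below are textbook toy parameters, not the actual challenge parameters from the paper:

```python
def ec_add(P, Q, a, p):
    """Add two points on y^2 = x^3 + ax + b over F_p (None = point at infinity)."""
    if P is None:
        return Q
    if Q is None:
        return P
    x1, y1 = P
    x2, y2 = Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None  # P + (-P) = infinity
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def ec_mul(k, P, a, p):
    """Compute k*P by repeated addition (fine for a toy group)."""
    R = None
    for _ in range(k):
        R = ec_add(R, P, a, p)
    return R

def brute_force_dlog(P, Q, a, p, order):
    """Recover k with Q = k*P by exhaustive search -- only feasible for tiny groups."""
    R = None
    for k in range(order):
        if R == Q:
            return k
        R = ec_add(R, P, a, p)
    return None

# Textbook toy curve: y^2 = x^3 + x + 6 over F_11, base point P = (2, 7) of order 13.
a, p = 1, 11
P = (2, 7)
Q = ec_mul(5, P, a, p)                   # the "public key"
print(brute_force_dlog(P, Q, a, p, 13))  # -> 5
```

Exhaustive search doubles in cost with every added bit, which is exactly why the ladder from 6 to 256 bits spans the gap from trivial to cryptographically relevant.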


We give resource estimates, in terms of both logical and physical qubits, for all the challenges:


Notice that the logical resources interpolate from the current NISQ regime all the way to CRQCs! So we bridge the application gap (I should actually make that figure; for now you still have to map it onto those charts).


By mapping the progress of both quantum algorithms and quantum hardware, along with their roadmaps, we can get a pretty good estimate of the timeline to CRQCs: 2027-2033 seems to be the sweet spot. That means a very sharp transition over 24-36 months! I think 6 bits should be doable this year or early next year.


Also, there will be very little warning between the point where quantum computers can beat the best classical record and a full break: classical is already at 129 bits!

The goal with this paper is really to review the state of the field, set the stage for experimentalists to start demonstrating the building blocks for cryptanalysis, and provide a signal to the cybersecurity world to accelerate the PQC upgrade.

Also, as a bonus, we find that LDPC cat codes can break Bitcoin with only 38,581 physical (cat) qubits! That is the lowest physical-resource figure so far!


First of all, thanks for putting this together. I think it’s an excellent resource and ties together a lot of different estimates for Q-Day timelines.

Secondly, I love the challenges and how explicit they’ve been made. One thing that I think might be interesting is to add bounties to each step of the ladder. That could be potentially done using Ethereum smart contracts, and it would help create upside as well as be open for all to see.


Indeed, bounties are missing.

We had set up a kind of bounty system on https://proof-of-quantum.com/ (an older set of challenges with a reduced range; it works with Pollard-rho but not with Shor).

But there is likely a way to fully financialize the PQC upgrade in the form of a prediction market with zero-coupon bonds + graded challenges: [2102.00659] Quantum crypto-economics: Blockchain prediction markets for the evolution of quantum technology

On Ethereum we can even make the underlying smart contract quantum-secure with Lamport or Winternitz signatures.
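For readers unfamiliar with hash-based schemes, here is a minimal Lamport one-time signature in Python, using SHA-256. This is an illustrative sketch of the general scheme, not production code and not any particular contract's implementation; a real deployment would also need careful key-state handling, since each key pair must be used at most once:

```python
import hashlib
import secrets

def H(data):
    """SHA-256, the one-way function underpinning the scheme."""
    return hashlib.sha256(data).digest()

def keygen():
    """256 pairs of random preimages; the public key is their hashes."""
    sk = [(secrets.token_bytes(32), secrets.token_bytes(32)) for _ in range(256)]
    pk = [(H(x0), H(x1)) for x0, x1 in sk]
    return sk, pk

def sign(msg, sk):
    """Reveal one preimage per bit of the message digest."""
    digest = H(msg)
    bits = [(digest[i // 8] >> (i % 8)) & 1 for i in range(256)]
    return [sk[i][b] for i, b in enumerate(bits)]

def verify(msg, sig, pk):
    """Check each revealed preimage hashes to the committed value for that bit."""
    digest = H(msg)
    bits = [(digest[i // 8] >> (i % 8)) & 1 for i in range(256)]
    return all(H(sig[i]) == pk[i][b] for i, b in enumerate(bits))
```

Security rests only on the preimage resistance of the hash, which Shor's algorithm does not break; Grover gives at most a quadratic speedup, which a larger hash output absorbs.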


Thanks @pldd , this is great!

I’ve been looking into ways to deploy smart contracts where certain (admin) functions are quantum-secure; it’s actually fairly straightforward. The contract could host a series of challenges, each one requiring Shor’s algorithm to break progressively larger key sizes.

What’s interesting is the baseline on the classical side, thank you for providing it. There’s an inflection point where quantum “overtakes” classical in terms of bits broken. Technically it’s apples to oranges, but socially that crossover point will be treated as a big milestone, and that perception matters IMO.


On EVM chains, the verification of hash-based signatures is quite straightforward; it’s the key-state management on the user side that is the bulk of the work.
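To illustrate why the key-state management is the hard part: one-time schemes like Lamport or Winternitz leak secret material if an index is ever reused, so the signer must persist which indices are spent. A hypothetical minimal tracker (names and structure are my own, not from any existing wallet):

```python
class OTSKeyState:
    """Track which one-time-signature key indices have been spent.

    Reusing an index reveals additional preimages and can let an attacker
    forge signatures, so this state must be persisted atomically by the
    signer before a signature is released.
    """

    def __init__(self, num_keys):
        self.num_keys = num_keys
        self.used = set()

    def next_index(self):
        """Return the lowest unspent index, or raise when the pool is empty."""
        for i in range(self.num_keys):
            if i not in self.used:
                return i
        raise RuntimeError("key pool exhausted -- rotate to a fresh key tree")

    def mark_used(self, i):
        """Record an index as spent; refusing reuse is the whole point."""
        if i in self.used:
            raise RuntimeError(f"index {i} already spent: reuse would leak the key")
        self.used.add(i)
```

Schemes like XMSS manage exactly this kind of index bookkeeping over a Merkle tree of one-time keys; the contract-side verification stays simple, but losing or forking this state on the user side is catastrophic.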

Yeah, it would be a series of time-bound challenges. Those would act as the oracle for the payout. The rest are typical prediction-market functions: adding/removing liquidity, buying/selling positions, resolving payouts, etc.
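The payout mechanics can be sketched in a few lines. Below is a toy binary market where each winning share redeems for one unit of collateral, like a zero-coupon bond, and the graded challenge acts as the resolution oracle. All names and mechanics here are illustrative assumptions, not the paper's design or any deployed contract:

```python
class BinaryMarket:
    """Toy market on 'challenge N is solved by deadline T' (illustrative only)."""

    def __init__(self):
        self.yes = {}        # address -> YES shares
        self.no = {}         # address -> NO shares
        self.resolved = None

    def mint_pair(self, addr, amount):
        """Deposit `amount` collateral; receive `amount` YES and `amount` NO shares."""
        self.yes[addr] = self.yes.get(addr, 0) + amount
        self.no[addr] = self.no.get(addr, 0) + amount

    def transfer(self, side, frm, to, amount):
        """Move shares between addresses (trading happens off this toy ledger)."""
        book = self.yes if side == "yes" else self.no
        assert book.get(frm, 0) >= amount, "insufficient shares"
        book[frm] -= amount
        book[to] = book.get(to, 0) + amount

    def resolve(self, challenge_solved):
        """Oracle call: the graded ECDLP challenge decides the outcome."""
        self.resolved = "yes" if challenge_solved else "no"

    def redeem(self, addr):
        """Each winning share pays out one unit of collateral."""
        book = self.yes if self.resolved == "yes" else self.no
        payout = book.get(addr, 0)
        book[addr] = 0
        return payout
```

The YES share price before resolution then acts as a market-implied probability of the challenge being solved by the deadline, which is the signal the PQC-upgrade world would actually consume.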


Thanks for sharing this paper @pldd, it’s an interesting read. As noted in the paper, solving ECDLP instances above ~120 bits can reasonably be treated as a quantum break. My question is about the smaller rungs: what would be the best way to audit quantumness there? For example, if PsiQuantum claim to have solved the 80-bit challenge, how could such a claim be verified publicly and transparently to demonstrate that it was genuinely a quantum computation, rather than a sneaky classical attack? I imagine succinct proofs of quantum computation are probably a while off…


@nuggimane your question to me gets to the heart of the challenge in the current era of quantum computing: how do we distinguish a genuine, scalable breakthrough from a highly engineered, one-off solution?

From a purely theoretical standpoint, we know an algorithm like Shor’s scales polynomially. The real issue, as you’re hinting, is the vast difference between that asymptotic promise and the practical reality of implementing it in a truly general and scalable way.

Craig Gidney’s critiques of early quantum factoring demonstrations in his recent article are a perfect example: in proofs-of-concept like the factoring of 21, the quantum circuit was so heavily optimized for that specific answer that it couldn’t have factored 35.

This is the core of your question: did someone build a general-purpose key-picker, or did they just build a key that only fits one specific lock?

This brings us to the verification problem for these ECC challenges. If a team claims to have solved the 80-bit challenge, how do we audit the “quantumness” and scalability of their claim?

  1. Rule out a “Sneaky” Classical Attack: First, we have to acknowledge that solving the 80-bit ECDLP is a monumental classical computation, but it’s likely within the grasp of a state-level actor with massive resources. A claim to solve it isn’t, by itself, definitive proof of a quantum speedup. However, if they solved the 128-bit challenge, that would be a different story, as that’s considered classically infeasible. There’s a credibility threshold that a purely classical attack cannot cross.
  2. Demand Generality: The best way to audit the claim is to challenge the claimant with a different 80-bit problem on the spot. A non-scalable, over-optimized circuit designed for the first challenge would almost certainly fail this test. True scalability implies generality.

So, the real question isn’t whether breaking a 128-bit instance helps with breaking the 256-bit problem, because of course it does, as a milestone. The real question is how it was broken. Was it done using a general implementation of Shor’s on a fault-tolerant machine, which would be a terrifying sign of progress? Or was it a “solution” that exploited a specific shortcut or relied on a non-scalable, hand-crafted circuit?

The former would truly prove a scalable capability. The latter, while not placing 256-bit ECC in imminent danger, might illustrate a different hybrid approach to solving larger problems and ultimately prove to be just as dangerous in the long run (see the development of the Number Field Sieve).
