"I broke a 6 bit ECC key with a quantum computer. Ask Me Anything." - Steve Tippeconnic

Welcome to the first-ever Superpositions AMA with Steve Tippeconnic @SteveTipp! Steve is at the forefront of the practical applications of quantum computing, and he recently used IBM’s 133-qubit ibm_torino with Qiskit Runtime 2.0 to break a 6-bit elliptic curve key. Learn more from Steve’s post here: https://x.com/stevetipp/status/1962935033414746420

Feel free to ask Steve anything as a reply to this topic. Superpositions members will like replies to express interest in certain questions; then Steve will review the questions, prepare answers, and begin answering on Monday, September 22nd. Get your questions in early, and turn on notifications for this thread!

7 Likes

Hey everyone,

Excited to read and respond to any questions. If you need a more detailed summary of the method, you can read this thread: Breaking 6-bit ECC

5 Likes

Hey @SteveTipp , such a cool experiment and result!

It would be great to dive into the scaling problem a bit more. After reading some of the back and forth between you and Craig Gidney on X, it would be great to understand the main bottlenecks you see in scaling this to, say, ~20 bits. AFAIK, ~7 bits is probably the maximum for this machine. To go past 7 bits, are there optimisations that need to be made to the algorithm or the error correction, or is it just a case of needing a bigger machine?

Also, from my naive understanding, the runtimes seem to scale dramatically between bit values (5 bits took ~50 seconds, 6 bits took ~5 minutes, and you estimate 7 bits will take ~30 minutes). How does your approach address this increase in runtime?

4 Likes

@SteveTipp , speaking of QEC implementation, have you considered QCVV methods? What are your thoughts on the effectiveness of QEC, QCVV, and/or both as you evolve from small ECC keys to larger, more complex ECC keys? Would these implementations be justified and how far do you think you could extend your model to reliably break larger ECC key sizes, either with today’s resources or in the near future?

4 Likes

The one thing I keep coming back to - OK… 6/256. Is there a way we can model how quickly 6 turns into 256? Are technological breakthroughs needed to go from 6 to 256? Is the progress from 6→256 linear?

3 Likes

Hey @SteveTipp, love the paper. A naive question: how hard is it to use Qiskit, and what would the starting point for a novice be?

2 Likes

Hello Steve, I have been following your work with a lot of interest.

I’m curious about your perception of the speed of progress on hardware, how soon you think BTC may be broken, and whether this paper is as groundbreaking as it seems to a non-expert:

Thanks for your time!

4 Likes

What do you think is the timeline for quantum circuits to be able to break ECDSA?

I created a timeline tool here:

2 Likes

Thanks! The main barrier to scaling the group phase method is preserving phase coherence as the oracle and QFT depth grow; raw qubit count also matters via mapping/crosstalk, but at these sizes depth is the first-order limiter. Each extra bit doubles each axis of the a, b interference grid (so the area quadruples) and doubles the number of ridge sites, so the ideal peak probability per ridge site halves while two-qubit error and routing overhead rise. The dominant costs are the logic that implements aP+bQ, the two-register QFTs, and the shots required to keep the ridge signal above a roughly fixed noise floor. To move beyond ~7 bits we need better error detection, lower native error rates, higher gate fidelity, and higher phase fidelity (plus the cost issues of long runs). The runtime curve (~50 s for 5 bits, ~5 min for 6 bits, ~30 min projected for 7 bits) comes from both the increased shot counts needed to resolve a weaker ridge and the longer transpiled depth from extra routing. That makes 7 bits practical now and ~10-12 bits plausible on near-future hardware, while ~20 bits likely needs ~10x improvement in two-qubit fidelity and/or modest fault tolerance to maintain the global phase pattern.
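To make the counting argument concrete, here is a toy Python model of the grid scaling (my own back-of-envelope sketch, not Steve's actual analysis; `hits_per_site` is an arbitrary illustrative parameter, and device noise, transpiled depth, and routing are all ignored):

```python
# Toy model of the (a, b) interference grid vs. key size n:
# each added bit doubles each grid axis (area x4), doubles the ridge sites,
# and halves the ideal per-site peak probability, so the noiseless shot
# budget to resolve the ridge grows accordingly.
import math

def grid_stats(n_bits, hits_per_site=10):
    axis = 2 ** n_bits             # points per axis of the (a, b) grid
    area = axis * axis             # total grid sites: quadruples per bit
    ridge_sites = 2 ** n_bits      # ridge sites: doubles per bit
    peak_prob = 1.0 / ridge_sites  # ideal probability mass per ridge site
    # Shots to land ~hits_per_site counts on each ridge site (noiseless case).
    shots = math.ceil(hits_per_site / peak_prob)
    return {"area": area, "ridge_sites": ridge_sites,
            "peak_prob": peak_prob, "shots": shots}

for n in (5, 6, 7):
    s = grid_stats(n)
    print(f"{n}-bit: area {s['area']}, ridge sites {s['ridge_sites']}, "
          f"ideal peak prob {s['peak_prob']:.4f}, ~{s['shots']} shots")
```

In practice the real shot counts grow faster than this ideal curve, because deeper circuits add noise on top of the shrinking per-site signal.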

1 Like

QCVV could be used before full QEC. Randomized benchmarking, cycle benchmarking, XEB (where applicable), and noise spectroscopy could inform selecting a low-error subgraph, choosing an AQFT cutoff that’s just enough, and retuning pulse-level calibrations/native decompositions so the ridge SNR clears the noise floor with fewer shots. For error suppression, you might pair a QCVV-guided layout with light phase-rail checks (small repetition/parity checks on the a, b registers), Pauli twirling, targeted dynamical decoupling, and cautiously applied PEC/ZNE on the most phase-sensitive oracle blocks, recognizing that PEC/ZNE raise sampling overhead. These steps are justified when QCVV indicates two-qubit error or crosstalk is pushing ridge peaks toward the floor; such calibrated/mitigated adjustments may raise effective phase fidelity without the heavy codes that would wash out the advantage. With today’s resources, that maybe makes 7 bits practical and ~10-12 bits plausible; beyond ~15-20 bits will need ~10x lower native two-qubit error/crosstalk and/or genuinely fault-tolerant protection focused on the phase rails, so the global 2^n interference pattern survives long enough to read off the ridge.

1 Like

Scaling from a 6-bit demo to a 256-bit break is not linear, because every added bit doubles each axis of the two-register interference grid and multiplies circuit depth for the elliptic curve oracle and QFT. Each extra bit roughly doubles the number of ridge sites, and the shots needed to keep their peaks above a fixed noise floor, while hardware error rates stay constant, so the SNR falls exponentially. To bridge 6 → 256 we need leaps in multiple areas: two-qubit gate fidelity and coherence times several orders of magnitude better, increased phase fidelity, large-scale low-error connectivity, algorithmic advancements, and more. Even with aggressive circuit optimizations and light error mitigation, pushing into the hundreds of bits requires fault-tolerant architectures. So the path needs major breakthroughs in both hardware and error correction.

1 Like

Thanks Steve, I think (or maybe, hope) I’m following. Just so I’m clear, the ridge signal weakens as the grid scales, which drives up shot counts and runtime. Do you see that weakening as a fundamental scaling wall for this method (where the ridge essentially disappears beyond a certain size), or is it more that today’s error rates and routing overhead are just compounding the issue?

I’m trying to understand whether the main barrier is physical (the ridge inherently gets too faint as bits grow), or engineering (with better fidelity/coherence, the ridge remains usable).

1 Like

Thanks, and great question! I would first learn the structure of a Qiskit circuit, including imports, register creation, gate logic, and measurement logic. Then run those circuits on a local simulator to test superposition and entanglement structures. From there, move to a real IBM backend with just a few qubits to test basic primitives, render the output distributions or Bloch spheres, and compare hardware results to the simulator to understand noise and coherence effects. Try rendering your backend results with multiple types of analysis (charts, graphs, 2D/3D renderings, etc.). Once that workflow feels natural, you can scale the same patterns to more qubits and more complex algorithms. I’m also putting together a few beginner circuits and will post them soon.

Thanks for following and for the link. Hardware progress is steady but not yet on a path that threatens Bitcoin’s 256-bit elliptic curve any time soon. My 6-bit group-phase attack runs on the current 133-qubit device because the oracle depth and shot count are still manageable; extrapolating to 256 bits would need two-qubit gate errors orders of magnitude lower, higher phase fidelity, and long-lived logical qubits under full fault tolerance. That’s far beyond today’s machines and maybe a decade out. The paper you cite is interesting: it restructures Shor’s phase estimation into shallow, independent, windowed blocks to reduce the size of the counting register and make the algorithm more NISQ-friendly, but it doesn’t change the exponential scaling or remove the need for fault-tolerant machines. It’s a cool tweak, but not a reason to think Bitcoin will be broken any sooner. Hardware gains and error correction are the real challenges right now.

Thanks for sending it; it’s a good timeline, and I like that it’s customizable to different situations. The base prediction of 2035 is near when I believe we’ll see signs of meaningful advantage. There could be scenarios where timelines are shortened, such as finding a room-temperature superconducting material, or lengthened, if we truly struggle with noise and error at large qubit counts. But when I started working on IBM’s systems, just two years ago, all they had was a 7-qubit machine that was incredibly noisy. Two years later, I’m working on a 133-qubit machine (IBM Torino) with a best two-qubit gate error of 1.43 x 10^-3, while IBM Pittsburgh just went online with 156 qubits and a best two-qubit error of 7.47 x 10^-4. So I’m impressed with the growth of the last few years, and I have no reason to believe it will slow; if anything, it may accelerate. It’s really a big unknown, and my best answer is that I don’t know. I don’t believe anyone can predict how quickly or slowly advantage will happen, or the breakthroughs needed, but I do believe it’s coming, and that it’s important to pay attention. I like that you added a community difficulty tab, because that may be one of the hardest challenges to cross once we know a correct fix.

Right, the ridge intensity does fall as the grid grows, but that’s not a fundamental ‘it disappears no matter what’ wall. The drop is a signal-to-noise scaling effect: each added bit spreads the same global phase over 4x more sites, so the per-site amplitude halves, while current two-qubit error and routing add noise faster than we can sample it away. In theory, with lower gate errors, longer coherence, and tighter layouts, the ridge remains perfectly usable at larger n. Today it looks like a physical limit only because hardware noise catches up before the math does.

1 Like

Got it, makes sense! Thanks Steve :saluting_face:

1 Like

Politely -

1.) How many attempts failed before you broke it?

2.) Can you recount the steps, and mimic the 1 or 2 steps just before breaking it? Can you stress-test that break multiple times, so both you and it get stronger and better?

3.) Was it given a new identity or a place to heal?

4.) I can only imagine the excitement and the time invested - what’s a simple lesson you learned about yourself?