What is the current state of the art on resources required to break ECDSA versus RSA? Even though prior literature has described ECDSA as being somewhat (an order of magnitude) easier to break, has anyone actually implemented a circuit to do it?
I consider the recent work by Mosca et al. to be the most up-to-date research estimate: https://www.sciencedirect.com/science/article/pii/S0167739X24004308
The estimate they provide is approximately an order of magnitude less work to break ECDSA (P-256) than RSA-2048. Ironically, RSA's longer key lengths appear to contribute to its post-quantum resilience, even though the original motivation for moving away from RSA-1024 was to protect against the number field sieve (NFS) and other classical attacks on shorter RSA moduli.
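To see where the order-of-magnitude gap comes from, here is a back-of-the-envelope sketch. Both the factoring and discrete-log variants of Shor's algorithm scale roughly cubically in the bit-length of the problem instance, but ECC uses much shorter keys for the same classical security level. The leading constants below are illustrative assumptions of mine, not figures from the Mosca et al. paper:

```python
# Crude Toffoli-count model: work ~ constant * n^3, where n is the
# bit-length of the instance Shor's algorithm must process.
def toffoli_estimate(n_bits, constant):
    return constant * n_bits ** 3

# Assumed constants (illustrative): elliptic-curve group arithmetic
# (point addition over a prime field) costs more per bit than modular
# multiplication, so give it a substantially larger leading constant.
rsa = toffoli_estimate(2048, constant=1)
ecc = toffoli_estimate(256, constant=50)

print(f"RSA-2048 / P-256 work ratio: {rsa / ecc:.1f}x")
# The n^3 gap alone is (2048/256)^3 = 512x; even a ~50x per-bit penalty
# for curve arithmetic leaves P-256 roughly an order of magnitude cheaper.
```

The point is qualitative: the cubic scaling in key length dominates, which is why P-256 comes out cheaper to attack despite its more complicated group arithmetic.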
Note that the resource estimation in the paper doesn't account for Gidney's recent speedup, which is a roughly 20x reduction in the number of physical qubits required. It's unclear whether that same improvement factor could be applied here; as the Gidney paper showed, the earliest CRQCs will probably be hardwired for specific circuits for performance reasons. E.g. Gidney's circuit layout works for RSA-2048 and nothing else. But some of the ideas he presents around error correction (e.g. the yoked surface code) might apply more broadly; it's hard to say.
Also note that many of his assumptions are based on a superconducting architecture, which generally offers faster runtimes but lower qubit stability (so scaling is harder).
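The stability point matters because physical error rate directly drives qubit overhead. A rough sketch using the standard surface-code heuristic p_L ≈ 0.1 · (p/p_th)^((d+1)/2); the threshold, target logical error rate, and the two physical error rates are assumptions I've picked for illustration:

```python
# How physical error rate p translates into surface-code distance d and
# physical-qubit overhead. Heuristic: p_L ~ 0.1 * (p / p_th)^((d+1)/2),
# with an assumed threshold p_th = 1e-2 and target p_L = 1e-12.
def min_distance(p_phys, p_target, p_th=1e-2):
    """Smallest odd code distance d whose heuristic logical error rate
    meets the target."""
    d = 3
    while 0.1 * (p_phys / p_th) ** ((d + 1) / 2) > p_target:
        d += 2
    return d

def physical_qubits_per_logical(d):
    """A distance-d surface-code patch uses roughly 2*d^2 physical qubits
    (data plus measurement qubits)."""
    return 2 * d * d

# Illustrative error rates: a noisier fast platform vs a 10x more stable one.
for p in (2e-3, 2e-4):
    d = min_distance(p, p_target=1e-12)
    print(f"p={p:.0e}: distance {d}, "
          f"~{physical_qubits_per_logical(d)} physical qubits per logical qubit")
```

Under these assumed numbers, a 10x improvement in physical error rate cuts the per-logical-qubit footprint by several fold, which is the core of the stability-vs-speed tradeoff between platforms.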
Other architectures, like this one from the neutral-atom community (https://arxiv.org/pdf/2506.20660), have slower runtimes but greater stability. But even if you scale such a machine, it would probably only work as a CRQC for targeted, long-running attacks against specific public keys.
There are lots of variables to consider when estimating the timeline for a CRQC, but the proactive approach is probably the right one, because (to quote Gidney's conclusion) we should "prefer security to not be contingent on progress being slow."