Hey guys,
Here are links to the different bit sizes I’ve done with this group phase method:
3-Bit version:
3-Bit version w/ Bloch Clock:
3-Bit version w/ Bloch Clock Chain:
4-Bit version:
Experiment 72 - Qwork
5-Bit version:
Experiment 73 - Qwork
5-Bit arXiv link: [2507.10592] Breaking a 5-Bit Elliptic Curve Key using a 133-Qubit Quantum Computer
6-Bit version:
A normal Shor’s factorization oracle is built around modular exponentiation, a^x mod N, which produces a 1-D periodic interference pattern. The phase-grouping method I’ve been exploring instead encodes a phase relationship of the form (a + bk), creating a 2-D interference ridge rather than a single periodic stripe, and the base circuit size scales as 3n qubits (as shown in the 3-bit through 6-bit runs above). In theory, with 16k shots on the 6-bit version, a perfect machine would distribute all of those results across the 64 correct k ridge points. To break the key, the goal is to see a few statistically significant peaks of that ridge above the noise floor. A hacker might collect the whole ridge, while a computer scientist might test only the top states.

An easy way to think of this is that you take the k variable in the equation Q = kP and embed it into a new relation (a + bk), with a and b superposed from 0 to 63, using phase entanglement. This creates an entire interference ridge of that relation, which you then push above the noise floor.
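To make the ridge idea concrete, here’s a small numpy sketch of the classical side of the readout. It assumes (my assumption, not something stated above) that the two measured registers (u, v) from the QFTs concentrate on the ridge u + k·v ≡ 0 (mod 64), the standard ECDLP readout relation, and it uses a made-up key value k = 19; the actual circuits may encode the relation differently.

```python
import numpy as np

# Sketch of the 2-D ridge readout for a 6-bit toy key (n = 6, M = 2**6 = 64).
# Assumption: measured register pairs (u, v) concentrate on the ridge
# u + k*v ≡ 0 (mod M). The "true" key below is a made-up example value.
M = 64
k_true = 19  # hypothetical secret scalar from Q = kP

rng = np.random.default_rng(0)

# Ideal machine: every one of ~16k shots lands on one of the M ridge points.
v = rng.integers(0, M, size=16_384)   # second register, uniform over 0..63
u = (-k_true * v) % M                 # ridge condition pins down the first

# Score every candidate key by how many shots satisfy its ridge equation.
scores = [np.count_nonzero((u + cand * v) % M == 0) for cand in range(M)]
k_recovered = int(np.argmax(scores))
print(k_recovered)  # prints 19: on an ideal machine the true key wins outright
```

On a perfect machine every shot satisfies the true key’s ridge equation, so scoring candidates by ridge hits recovers k immediately; real hardware piles a noise floor on top of this, which is what the peak-finding step has to fight through.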
With this method, like Shor’s, the gate depth still blows up at larger key sizes, so time and error become limiting factors. But because this method uses the geometry of an entire interference ridge, my sense is that as hardware fidelity improves, the ridge will sharpen and become easier to see through noise. At large key sizes, the classical compute per oracle (per key/instance) seems manageable (order 10^6 - 10^7 cycles per oracle), well within GPU farm range; the real limiting factor is the wall-clock time each oracle run takes once gate depth blows up.
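The “peaks above the noise floor” step can be sketched the same way. Here I model noise crudely as a fraction of shots whose u register is scrambled to a random value, and call a (u, v) bin statistically significant if it exceeds the mean bin count by four standard deviations; the noise model, the threshold, and the key value are all my own illustrative choices, not the post’s.

```python
import numpy as np

# Sketch of pulling the ridge out of a noise floor (6-bit toy, M = 64).
# Assumptions: ridge is u + k*v ≡ 0 (mod M); 70% of shots are pure noise
# (u randomized); "significant" = bin count > mean + 4 std of all bins.
M = 64
k_true = 19          # hypothetical key, as in the ideal-machine sketch
shots = 16_384
p_noise = 0.7

rng = np.random.default_rng(1)
v = rng.integers(0, M, size=shots)
u = (-k_true * v) % M
noisy = rng.random(shots) < p_noise
u[noisy] = rng.integers(0, M, size=int(noisy.sum()))  # scramble noisy shots

hist = np.zeros((M, M), dtype=int)
np.add.at(hist, (u, v), 1)            # 2-D histogram of (u, v) outcomes

# Keep only the bins that rise well above the noise floor.
thresh = hist.mean() + 4 * hist.std()
peaks = np.argwhere(hist > thresh)

# Each significant peak with invertible v votes for k = -u * v^{-1} mod M.
votes = {}
for pu, pv in peaks:
    if np.gcd(int(pv), M) == 1:       # v must be odd to invert mod 2^6
        cand = (-int(pu) * pow(int(pv), -1, M)) % M
        votes[cand] = votes.get(cand, 0) + 1
k_recovered = max(votes, key=votes.get)
print(k_recovered)
```

Even with 70% of shots scrambled here, the surviving ridge bins sit far enough above the uniform background that a simple threshold finds them, which is the intuition behind only needing “a few statistically significant peaks” rather than the whole ridge.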
I’m going to walk through two scenarios at opposite extremes: one in the conventional realm of needing thousands to millions of qubits for deep error correction, and one focused on hardware improvements alone. Both help in understanding how this variant scales from the two sides.
First, applying the most widely accepted projections for future error correction (in the thousands-to-millions-of-qubits range) to this method works well, much as it does for the original Shor’s variant, since it relies on the same quantum mechanical principles: phase entanglement, QFTs, and an interference ridge. In that case, as we get better error correction with more qubits, the interference ridge will get clearer and more precise.
Very speculatively (sci-fi now), if you extrapolate this method directly (keeping the 3n scaling from the runs above), a future 256-bit version would use 3*256 = 768 qubits (without additional error correction) to get a few peaks of the (a + bk) interference ridge through the noise, but *only if* hardware alone reached extremely low error rates (~10^-10) and long coherence times (~10 - 100 ms). We are at ~10^-4 and ~300 microseconds right now (IBM Pittsburgh). That’s obviously a sci-fi “if” today, and hardware progress would need many breakthroughs, but it helps in understanding the base-level variant.
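For what it’s worth, the extrapolation arithmetic itself is just the 3n rule applied blindly, assuming the scaling from the runs above holds unchanged and ignoring all error-correction overhead:

```python
# Base qubit count under the (assumed unchanged) 3n scaling, with no
# error-correction overhead included. 256-bit is the speculative case.
for n in (3, 4, 5, 6, 256):
    print(f"{n}-bit key -> {3 * n} qubits")
# last line: 256-bit key -> 768 qubits, matching the 3*256 figure above
```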
Thus, this method improves along both axes, added error-correcting qubits and raw hardware advances, and my guess is that real progress will come from some combination of the two.
I’m currently working on a paper to lay out the requirements for this variant more carefully and mathematically. In the meantime, I’d love feedback on whether this perspective makes sense, and whether others see potential (or fundamental blockers) in this interference ridge-based approach.