
Re: JSH: Upside down situation
Posted:
Mar 18, 2008 8:52 PM


On Mar 18, 7:26 pm, JSH <jst...@gmail.com> wrote:
> On Mar 18, 6:23 am, Randy Poe <poespamt...@yahoo.com> wrote:
> > On Mar 18, 5:04 am, rossum <rossu...@coldmail.com> wrote:
> > > On Mon, 17 Mar 2008 17:13:54 -0500, "Jane Jian" <dd...@yahoo.com>
> > > wrote:
> > > > "rossum" <rossu...@coldmail.com> wrote in message
> > > > news:mnkqt35mjkd5m2a5dq21j7c2pkbrj4s85t@4ax.com...
> > > > > On Fri, 14 Mar 2008 17:06:50 -0700 (PDT), JSH <jst...@gmail.com>
> > > > > wrote:
> > > > > > I am currently working on my own practical implementation of
> > > > > > what I call surrogate factoring that follows from the latest
> > > > > > theory, and it is pleasingly fast, while I still have to work
> > > > > > at getting it to factor really big numbers to my satisfaction.
> > > > > > Oh, my stated goal is the ability to factor an RSA encryption
> > > > > > sized number in under 10 minutes on a standard desktop.
> > > > >
> > > > > Have you actually calculated how fast your current method would
> > > > > be on a 500 bit RSA number James?
> > > > >
> > > > > I posted some timings:
> > > > >
> > > > > Average timings over 200 numbers:
> > > > > 20 bits: 2.265 mSec average per number. 0 misfactors.
> > > > > 22 bits: 10.310 mSec average per number. 0 misfactors.
> > > > > 24 bits: 12.345 mSec average per number. 0 misfactors.
> > > > > 26 bits: 50.155 mSec average per number. 0 misfactors.
> > > > > 28 bits: 294.765 mSec average per number. 0 misfactors.
> > > >
> > > > If you graph this and model it with b * a^((number of bits - J)/2)
> > > > and minimize differences, the best values are:
> > > >
> > > > b = 0.55
> > > > a = 8
> > > > J = 22
> > > >
> > > > For the smallest RSA, 100 bits, the model gives 9.14E+34 mSec.
> > > > For RSA 500 it gives 3.6E+215 mSec.
> > > >
> > > > With a few more points, 29 and 30 bit, we can make a more accurate
> > > > prediction.
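Jane's exponential fit above is easy to check, by the way. Here's a quick
Python sketch of mine (not her actual fitting code; the constants b = 0.55,
a = 8, J = 22 are taken straight from her post) that evaluates
b * a^((bits - J)/2) against the posted timings and then extrapolates:

```python
import math

# Timings quoted above (bits -> average mSec per number).
timings = {20: 2.265, 22: 10.310, 24: 12.345, 26: 50.155, 28: 294.765}

# Reported best-fit constants for the model b * a**((bits - J) / 2).
b, a, J = 0.55, 8, 22

def model(bits):
    """Predicted average time in mSec for a given input size in bits."""
    return b * a ** ((bits - J) / 2)

for bits, ms in sorted(timings.items()):
    print(f"{bits} bits: measured {ms} mSec, model {model(bits):.3f} mSec")

# Extrapolations; the post quotes 9.14E+34 and 3.6E+215 mSec, and small
# differences here come from rounding in the fit constants.
print(f"100 bits: {model(100):.3g} mSec")
print(f"500 bits: {model(500):.3g} mSec")

# The "how many years" exercise for the 500 bit case.
years = model(500) / (1000 * 60 * 60 * 24 * 365.25)
print(f"500 bits: roughly {years:.1g} years")
```

The 500-bit figure comes out around 1e205 years, which is the point: the
model is exponential in the bit count, so no constant-factor speedup helps.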
> > > > But it looks like
> > > >
> > > > 1/2 * 8^((#bits - 22)/2)
> > > >
> > > > or
> > > >
> > > > 1/2 * 8^39 mSec for 100 bit RSA
> > > >
> > > > and
> > > >
> > > > 1/2 * 8^239 mSec for the 500 bit RSA
> > > >
> > > > I will let the reader figure out how many years this is as an
> > > > exercise.
> > >
> > > Thanks for that. I was hoping that someone would do a proper
> > > curve-fitting for the timings rather than the simple linear
> > > extrapolation from the two extreme points that I did. In effect the
> > > result is the same: James' method is an improvement on his normal
> > > efforts but still has a long way to go if he wants to meet his goal
> > > of factoring an RSA number in ten minutes.
> > >
> > > rossum
> >
> > Unfortunately, all James heard from this was "faster than...
> > Fermat's". So now he is convinced that you have proven he's "solved
> > the factoring problem".
> >
> > He has totally missed the fact that:
> >
> > (a) factoring is difficult, not practically solvable for RSA numbers,
> > because it scales exponentially, and
> >
> > (b) you found that the JSH "fast" method scales
> > ... wait for it...
> >
> > EXPONENTIALLY.
> >
> > - Randy
>
> And he found wrong. I've talked about this before but unfortunately,
> again, the sci.math newsgroup gets an idea in its head and just runs
> wild with it despite repeated attempts at correcting the
> misinformation.
>
> rossum does not do the most important thing with this idea, which is
> WHY I say it's a solution to the factoring problem, which is to search
> for a key variable I call k, modulo p. What he said was he tried that
> and he got "misfactors", so he quit and instead incremented k by 1.
>
> So he didn't do the theory, and his change would do, what? GASP.
> It makes it appear to behave exponentially!!!
>
> And also I should remind that my code is faster than rossum's, as his
> goes off a cliff doing the 1-increment thing before 40 bits, I think,
> while I can actually do 100 bits or more with small factors but don't
> see the point.
> His approach goes off a cliff early because there are roughly k
> searches incrementing by 1, where k is approximately sqrt(T/2),
> though usually it can be a bit smaller.
>
> So for a 40 bit number he has about a 20 bit search, while my code can
> factor a 40 bit number with 20 searches, though it doesn't always,
> which is about me solving some problems. So compare 20 bits of
> searching to 20 searches, and understand the difference searching
> modulo p versus not.
>
> There are maybe a couple of practical problems to solve, and surrogate
> factoring handles arbitrarily large composites nearly flatly, as p
> increases roughly as T increases, so they cancel each other out.
>
> Done rightly, the theory says it's a solution to the factoring
> problem.
>
> Done stupidly, as rossum does it, you're going to have to count
> roughly sqrt(T/2), so you're done before 40 bits, like poor rossum
> with his coding attempt, unlike my code, which can go wild in ranges
> his can't touch.
>
> But it doesn't do what I want so I'm still working at it. And I do
> use a quirky approach, as the code calls itself to factor surrogates
> and does multiple checks with surrogates where, um, you could do just
> one, but I like it! It's mine, and the approach helps me solve some
> other problems...
>
> Oh, I'm around 40 bits now consistently. So don't get too excited
> that the code can do 100 bits with small factors, as I mean primes
> under 1000 or so, which you may say, hey, a brute force can do that!
> Yeah, but how do you know when brute force can do that?
>
> I'm in a position now to evaluate claims of using surrogate factoring
> with, oh wow!, my own code, and I'm the best in the world that's
> talking about it loud and proud like this. Hmmm... I'm getting used
> to being the best in the world.
>
> If not, someone needs to challenge me with surrogate factoring code.
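Your sqrt(T/2) arithmetic above is at least easy to verify. A quick Python
check of mine (just the bit-length arithmetic, not anyone's actual factoring
code):

```python
import math

def increment_search_bits(t_bits):
    """Bit length of sqrt(T/2), i.e. the size of a search that steps k by
    1, for the largest T of the given bit size."""
    T = (1 << t_bits) - 1          # largest t_bits-bit number
    return math.isqrt(T // 2).bit_length()

print(increment_search_bits(40))    # 20: about a 20-bit search for 40-bit T
print(increment_search_bits(100))   # 50: hopeless by single increments
```

So yes, stepping k by 1 is a ~20-bit search at 40 bits and a ~50-bit search
at 100 bits, which is exactly the exponential behavior being discussed.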
When are you going to send me your code?
I need to translate it to Python in order to challenge you.
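And since this thread keeps comparing against Fermat's method, here is the
baseline I would start from in Python: a textbook Fermat factorization
sketch of mine (not rossum's code, and not surrogate factoring) that any
challenger code would have to beat:

```python
import math

def fermat_factor(n):
    """Textbook Fermat factorization for an odd composite n: search
    a >= ceil(sqrt(n)) until a*a - n is a perfect square b*b, then
    n = (a - b) * (a + b)."""
    a = math.isqrt(n)
    if a * a < n:
        a += 1
    while True:
        b2 = a * a - n
        b = math.isqrt(b2)
        if b * b == b2:
            return a - b, a + b
        a += 1

# Example: 59 * 101 = 5959 takes only a few steps because the factors
# are close together; Fermat's method degrades as they spread apart.
print(fermat_factor(5959))   # (59, 101)
```

Fermat's method is worst-case exponential too, which is the whole point of
the timing discussion above: beating it on small inputs proves nothing about
RSA-sized numbers.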
> You will not beat me, or if you do, I still celebrate and say, hah.
>
>
> James Harris

