I have already asked this on math.stackexchange, but I hope someone here will understand what I am trying to ask (I know I have worded it badly).
Suppose n is known to have exactly two prime factors, and that the computer has a database of all the primes ≤ √n. Then, unless n is a square, one factor is < √n, so the factor must appear in the database. If an algorithm could halve the number of candidate primes ≤ √n with each step, the worst-case running time would be minimised. By the prime number theorem, as n → ∞ the number of primes to check, π(√n), approaches √n / ln(√n) = 2√n / ln(n). Hence, if each step eliminated half the possibilities, the number of steps needed to factorise n would tend towards log_2(2√n / ln(n)) = ln(2√n / ln(n)) / ln(2).
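To make the numbers concrete, here is a quick sketch of the count involved. The semiprime below is my own illustrative choice (the 10,000th prime times the 100,000th prime); it just compares the exact candidate count π(√n) with the 2√n/ln(n) estimate and the hypothetical log_2 step bound:

```python
from math import isqrt, log, log2

def primes_up_to(m):
    """Sieve of Eratosthenes: all primes <= m."""
    sieve = [True] * (m + 1)
    sieve[0:2] = [False, False]
    for p in range(2, isqrt(m) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return [p for p in range(2, m + 1) if sieve[p]]

# Example semiprime: 104729 (10,000th prime) * 1299709 (100,000th prime)
n = 104729 * 1299709

candidates = primes_up_to(isqrt(n))   # primes <= sqrt(n); the smaller factor is here
actual = len(candidates)              # exact pi(sqrt(n))
pnt_estimate = 2 * isqrt(n) / log(n)  # 2*sqrt(n)/ln(n) from the prime number theorem
halving_steps = log2(actual)          # steps if each step halved the candidates

print(f"candidates: {actual}, PNT estimate: {pnt_estimate:.0f}")
print(f"hypothetical halving steps: {halving_steps:.1f}")
```

For this n the candidate list has tens of thousands of primes, yet the hypothetical halving algorithm would need only about fifteen steps, which is the gap the question is about.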
In reality there is probably no algorithm that can eliminate half the possible factors at each step, so how much can this lower bound be improved, still assuming an unlimited database of primes?