On Jul 26, 3:55 pm, Ben Bacarisse <ben.use...@bsb.me.uk> wrote:
> You did not answer my key question (maybe I forgot to ask it!). Do you
> agree that the cost function (in terms of compares) of your uniform
> splitting algorithm is C(n) = n? I can say that this is not in
> O(log(n)) (but it is in O(n)) without taking any limits, and without
> any reference to probability.
Sure, I agree. You have understood my point perfectly, though.
Consider the formulation with a random splitting point n/k (not n-k! Again: divide, not subtract).
IF the only cases where I get cost n are the "degenerate" ones, where by mere chance (insignificant for large n) the binary search happens to become a linear search, I would still object that in a probabilistic setting, using sure convergence has little meaning.
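To make the setup concrete, here is a minimal sketch of the randomized search I have in mind. It assumes (my reading of the discussion) that the split point is chosen uniformly at random in the current interval instead of at the midpoint; the function name and the probe counter are my own additions for illustration.

```python
import random

def random_split_search(a, target):
    """Search the sorted list `a` for `target`, splitting the current
    interval at a uniformly random position rather than the midpoint.
    Returns (index or None, number of probes performed)."""
    lo, hi = 0, len(a) - 1
    probes = 0
    while lo <= hi:
        mid = random.randint(lo, hi)  # random split point, inclusive ends
        probes += 1
        if a[mid] == target:
            return mid, probes
        if a[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return None, probes
```

The degenerate case above is exactly the one where `mid` keeps landing on the "wrong" endpoint, so the interval shrinks by one element per probe and the search behaves like a linear scan, costing n probes.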
So, considering that O() is defined using limits (as far as I can see, everywhere), *in a probabilistic setting* (which is what we are considering here) I would consider *probabilistic forms* of convergence.
You say that this is not the way it is done in CS, and that they always consider "sure" convergence. I can and must accept that, for sure. But I also have to say that it makes no sense to me in a probabilistic setting, nor intuitively.
I have in mind large problems, like n bigger than 5000 digits. In that case I only care about asymptotics, and to me, if the cost n reduces to O(log n) asymptotically because the events causing cost n are practically impossible, I think it is right to see it as O(log n), no matter what happens for small n.
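A rough bound to back up "practically impossible", under the assumption of uniform splits: the search costs n probes only if every single split is degenerate, i.e. the random pivot lands on an endpoint of the current interval. An interval of size m has at most 2 of its m positions at an endpoint, so

```latex
\Pr[C(n) = n] \;\le\; \prod_{m=2}^{n} \frac{2}{m}
            \;=\; \frac{2^{\,n-1}}{n!} \;\longrightarrow\; 0
```

which vanishes faster than exponentially. For n in the thousands this probability is astronomically small, which is exactly why I want a probabilistic notion of convergence here.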
Clearly, IF there are other situations which cause cost n (apart from the linear searches), and they have a probability which does not tend to 0, then OK, I obviously could not object to anything.
> O(log(n)) (but it is in O(n)) without taking any limits
This part is not completely clear to me yet. I always understood O() as shorthand for a statement involving a limit: a description of asymptotic behavior. But maybe you are referring to the definition used for complexity classes, and that one doesn't use limits (?).
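For reference, the definition standard in CS texts needs no limit at all, only a witness constant; the limit form is an equivalent characterization (for eventually positive g):

```latex
% Constant-based definition (no limits involved):
f \in O(g) \iff \exists\, c > 0,\ \exists\, n_0 \;:\; f(n) \le c\,g(n) \ \text{for all } n \ge n_0

% Equivalent limsup characterization:
f \in O(g) \iff \limsup_{n\to\infty} \frac{f(n)}{g(n)} < \infty
```

So one can verify C(n) = n is in O(n) but not in O(log n) purely by exhibiting (or refuting) a constant c, with no limiting process involved.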