> Another quick remark is about the
>
> > " but in case it has not yet been said enough, in complexity
> > theory, you don't have to take any limits."
>
> i so far understood that the O() notation describes instead an
> *asymptotic* behavior.
>
> So i thought the limit not only matters, but is actually at the core of
> the notion.
Two things. First, if I can get a formula for the cost, I'm done. I've got something much better than a big O class. Knuth does this all the time -- he gives exact formulae for some algorithms, not just the big O class their cost functions belong to.
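To illustrate the Knuth-style point with an example of my own (selection sort is a stand-in here, not an algorithm from this thread): on any n-item input, selection sort performs exactly n(n-1)/2 element comparisons -- an exact cost formula, which says more than the fact that the cost lies in O(n^2). A small instrumented sketch:

```python
def selection_sort(a):
    """Selection sort that also counts element comparisons."""
    a = list(a)
    compares = 0
    for i in range(len(a)):
        m = i
        for j in range(i + 1, len(a)):
            compares += 1          # one comparison of two elements
            if a[j] < a[m]:
                m = j
        a[i], a[m] = a[m], a[i]
    return a, compares
```

For an input of length n, the comparison count is always exactly n*(n-1)//2, regardless of the input's contents -- no limits needed to state or verify that.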
Second, people obviously want to compare algorithms, so complexity classes are vital, but you can determine the class without taking any limits. Asymptotic behaviour is very similar to limiting behaviour, but it's not the same. This looks like splitting hairs (especially since the big O class can be defined in terms of limits), and I would not have mentioned it except that you said that when probabilities are concerned, limits mean something else.
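To make the "no limits needed" point concrete: f is in O(g) exactly when there exist witness constants c and n0 with f(n) <= c * g(n) for all n >= n0 -- a statement about the functions themselves, with no limit taken anywhere. A rough spot-check of a candidate witness pair (the function names are my own, not from this thread):

```python
def witnesses_hold(f, g, c, n0, sample_to=100_000):
    """Spot-check the big-O witness condition: f(n) <= c * g(n) for all
    sampled n with n0 <= n < sample_to.  A finite check like this can
    refute a witness pair but cannot prove it for every n; the point is
    only that the condition itself mentions no limit."""
    return all(f(n) <= c * g(n) for n in range(n0, sample_to))
```

For example, c = 10 and n0 = 1 witness that 3n + 7 is in O(n), while no finite c can make n fit under c * 1, so n is not in O(1).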
> We don't care what is happening at n=10, 1000, 1000000000, or 10! ^ > 10! > > this is describing an asymptotic behavior. That is why i though that > events with > limiting probability 0 should not affect it ... > > It this idea wrong ? If yes i am interested to correct my > understanding.
Yes, that's wrong (about limiting probabilities). You can determine the cost function, and the big O class it belongs to, with no limits involved. You can use limits to get the complexity class if you like, but it's an argument purely about classes of functions.
You did not answer my key question (maybe I forgot to ask it!). Do you agree that the cost function (in terms of compares) of your uniform splitting algorithm is C(n) = n? I can say that this is not in O(log(n)) (but it is in O(n)) without taking any limits, and without any reference to probability.
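For concreteness, here is a numerical illustration of my own (a search, not a proof) of why C(n) = n cannot be in O(log n): for any proposed constant c, n eventually exceeds c * log2(n), so no witness pair (c, n0) can work.

```python
import math

def first_counterexample(c, n0, limit=10**7):
    """Search for the first n >= n0 (below `limit`) with n > c * log2(n).
    Finding one refutes the proposed witness pair (c, n0) for the claim
    that C(n) = n is in O(log n); since such an n exists for every c,
    no pair of witnesses exists at all."""
    for n in range(max(n0, 2), limit):
        if n > c * math.log2(n):
            return n
    return None
```

Even a generous witness like c = 100 fails before n reaches a thousand, and raising c only pushes the failure point further out; it never removes it.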