> On Jul 27, 1:29 am, Ben Bacarisse <ben.use...@bsb.me.uk> wrote:
<snip>
>> I think these examples make it clear why you need a new notation.  You
>> can't re-define O(f) with a probabilistic meaning without there being
>> way too much scope for confusion.
>
> Alas, it looks like it has already invented since long time.  So i
> arrived late :-(
Except that the paper you cite does not seem to use the same definition that you were proposing. Of course, if you like this definition, go ahead and use their notation -- no need to proliferate symbols -- but you'll need a way to write it in Usenet posts, as your quote below shows so clearly.
> Making a quick check, it looks like that (while someone here was
> accusing me of "gibberish" :-)
It's possible that what you wrote was gibberish (I'm sorry but I don't recall). What's clear is that the paper you cite does not use the same reasoning you do. It uses what looks like a significantly different definition. In particular it does not rely on "almost sure" convergence. It's possible, of course, that their definition is the same as yours. It would make an interesting exercise to see if that is the case.
> using the big O in the probabilistic sense is the norm even in
> official CS literature (SIAM) when dealing with probabilistic
> formulations.
That's not a clear way to report it. The paper you cite uses big O as everyone else does. They define another relation (with its own notation) that relies on probabilities. I'd need a few more references to know if either this new notation or the specific definition they use is anything like "the norm".
What is still not in doubt is that big O is used, even in the context of probabilistic algorithms, to mean what it has always meant. The textbook I cited earlier does so, and so does the paper you've referenced here. (I'm sorry to keep banging on about this, but you've worded the above remark in such a way that I felt it needed to be said again.)
Well, not "just the same"; "similar" is the best I can do right now. They don't use almost sure convergence and it's not clear how their definition relates to your algorithm. Have you been able to show that your uniform splitting search is OPr(log(n))? (I'm using my suggested OPr because the notation they use does not obviously translate to ASCII.)
> "Just like big-O function serves to represent the complexity bounds of
> deterministic algorithms, we employ O to represent complexity bounds
> of randomized algorithms. We say a randomized algorithm has resource
> (like time, space, etc.) bound O(g(n)) if there is a constant c such
> that the amount of resource used by the algorithm (on any input of
> size n) is no more than c g(n) with probability 1 1=n for any
> 1 > ... "
That quote got mangled in transit. For everyone else (is anyone else reading this?), the paper uses O with a tilde over it for this probabilistic bound, and the definition is that
We say a randomized algorithm has resource [...] bound O(g(n)) if there is a constant c such that the amount of resource used by the algorithm (on any input of size n) is no more than c a g(n) with probability >= 1 - 1/(n^a) for any a > 1.
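To make that definition concrete, here is a quick sketch (my own illustration, not from the paper) that checks it empirically for a simple randomized algorithm: a search over a sorted list that recurses at a uniformly random pivot rather than the midpoint. The constant c = 4 is just a guess for the demonstration, and `random_split_search` is my own name, not anything from the paper or your algorithm.

```python
import math
import random

def random_split_search(xs, target):
    # Search a sorted list by probing a uniformly random index in the
    # current range; returns the number of comparisons (the "resource").
    lo, hi, steps = 0, len(xs) - 1, 0
    while lo <= hi:
        m = random.randint(lo, hi)
        steps += 1
        if xs[m] == target:
            return steps
        if xs[m] < target:
            lo = m + 1
        else:
            hi = m - 1
    return steps

# Empirical check of the high-probability bound: the fraction of runs
# whose cost stays within c * a * log2(n) should be at least 1 - 1/n^a.
n, trials, a, c = 1024, 10000, 2.0, 4.0
xs = list(range(n))
bound = c * a * math.log2(n)
within = sum(random_split_search(xs, random.randrange(n)) <= bound
             for _ in range(trials))
print(within / trials, ">=", 1 - 1 / n**a)
```

Note this is a statement about the bound holding with high probability on every run, not about almost sure convergence -- which is exactly where their definition and yours may part company.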
> (i found similar stuff in several other papers)  Hmm, it's time i
> begin to read some CS article too ... :-)
Do they all use this definition? Any agreement between them on the notation to be used?