On Jul 27, 9:50 am, Martin Brown <|||newspam...@nezumi.demon.co.uk> wrote:
> For the purpose of the analysis we pick a given k and decide how the
> algorithm would behave; you can then average over all values of k later.
>
> If you want to force this pathetic approach to have something vaguely
> like probabilistic O(logN) behaviour choose P(k) = 1/k and give or take
> some convergence issues that I can't be bothered to work out it will be.
I said that the random variable is n/k. Is that really so hard to understand? That makes a world of difference, in case you have not got it yet.
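The thread never pins the algorithm down completely, so purely as an illustrative stand-in, here is a minimal simulation sketch of the "average over the random choice" idea: a search that probes a uniformly random index of the remaining range at each step, with the comparison count averaged over many trials. The routine name, the array `xs`, and the trial count are all hypothetical, not something from the thread:

```python
import random

def random_pivot_search(xs, target, rng):
    """Search sorted xs for target, probing a uniformly random index
    of the remaining range at each step.
    Returns (index, number_of_comparisons); index is -1 if absent."""
    lo, hi = 0, len(xs) - 1
    comps = 0
    while lo <= hi:
        i = rng.randint(lo, hi)   # the random choice we average over
        comps += 1
        if xs[i] == target:
            return i, comps
        elif xs[i] < target:
            lo = i + 1
        else:
            hi = i - 1
    return -1, comps

# Fix the data array x (the "conditional" view) and average only
# over the algorithm's internal randomness and the target.
n = 1 << 16
xs = list(range(n))
rng = random.Random(1)
trials = 200
total = 0
for _ in range(trials):
    t = rng.randrange(n)
    idx, c = random_pivot_search(xs, t, rng)
    assert idx == t
    total += c
avg = total / trials
# The expected comparison count of a uniformly random pivot is
# O(log n), so avg stays far below n.
print(avg)
```

This is only a sketch of the averaging step, not the algorithm under dispute: for each fixed draw of the random choice one gets a deterministic comparison count, and the expectation is taken afterwards, exactly as in the quoted analysis.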
As to the probability space, I see two main approaches:

- "conditional" (with respect to the data array)
- unconditional

In the first case you would study the problem taking n/k alone as the random variable; in the second case you would assume the array x also to have some joint distribution:

- P( s | x )
- P( s, x )

They are both meaningful and interesting.
(Other interesting formalizations may consider fuzzy data instead of crisp, or even probabilistic fuzzy; clearly one would then introduce membership functions.)