Topic: binary search with arbitrary or random split

Ben Bacarisse

Re: binary search with arbitrary or random split
Posted: Aug 7, 2012 8:22 PM


[Followup-To set since I don't think sci.math.num-analysis is topical
anymore.]

Ralph Hartley <hartley@aic.nrl.navy.mil> writes:

> On 07/24/2012 06:02 PM, Ben Bacarisse wrote:
>> What's going on here? If there were any down side to using n/2 I could
>> see why one might consider alternatives -- even random ones (some people
>> do for quicksort partitioning, for example) -- but since n/2 is simple,
>> easy to calculate and optimal, you can't be proposing an alternative.
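(For concreteness, the n/2 split in question -- a minimal Python sketch,
my own illustration, nothing more:)

def binary_search(xs, target):
    """Search a sorted list, always splitting at the midpoint."""
    lo, hi = 0, len(xs)
    while lo < hi:
        mid = (lo + hi) // 2      # the n/2 split under discussion
        if xs[mid] < target:
            lo = mid + 1
        elif xs[mid] > target:
            hi = mid
        else:
            return mid
    return -1                     # not found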

>
> For searching a sorted list, I agree it doesn't make much sense, but
> consider sorting.
>
> Quicksort is a simple algorithm, and is close to O(n log(n)) for most
> inputs, but you can't use it when the input is chosen by an adversary
> because in the worst case it is O(n^2).
>
> It is reasonable to use quicksort with each pivot point chosen
> randomly from a uniform distribution over the list. Unlike search,
> there is no consistently good choice, and any deterministic rule could
> be exploited to give O(n^2) cost *every* time.


That's not true. You can pick the median in O(n) time and that means
(if you're careful about a few other details) that you can have a
deterministic quicksort that is O(n log(n)) in the worst case. It's not
really practical, but that's not the point.
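Here's a rough Python sketch of what I mean -- my own illustration of
median-of-medians pivot selection, ignoring those "few other details"
(in-place partitioning among them):

def select(xs, k):
    """Return the k-th smallest element of xs (0-based) in O(n)
    worst-case time via median-of-medians. Illustrative, not tuned."""
    if len(xs) <= 5:
        return sorted(xs)[k]
    # Median of each group of 5, then recurse to get the pivot.
    groups = [sorted(xs[i:i + 5]) for i in range(0, len(xs), 5)]
    medians = [g[len(g) // 2] for g in groups]
    pivot = select(medians, len(medians) // 2)
    lo = [x for x in xs if x < pivot]
    eq = [x for x in xs if x == pivot]
    if k < len(lo):
        return select(lo, k)
    if k < len(lo) + len(eq):
        return pivot
    hi = [x for x in xs if x > pivot]
    return select(hi, k - len(lo) - len(eq))

def det_quicksort(xs):
    """Quicksort with the true median as pivot: O(n log(n)) in the
    worst case, but the constant factor makes it uncompetitive."""
    if len(xs) <= 1:
        return list(xs)
    pivot = select(list(xs), len(xs) // 2)
    return (det_quicksort([x for x in xs if x < pivot])
            + [x for x in xs if x == pivot]
            + det_quicksort([x for x in xs if x > pivot]))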

> Worst case is *still* O(n^2), but that's ok. What you really care
> about is that the average over your random choices is O(n log(n)),
> even for the worst possible input.


Depends on who "you" refers to! People interested in complexity theory
(this is comp.theory after all) are more likely to be interested in
worst-case analysis. This is not to deny the large body of theory on
probabilistic analysis -- it's just not the majority concern for
complexity theorists.

> I think you can actually say something slightly stronger than that,
> and show that the probability of very bad cases is small.


Yes, I think so too, but I don't know if there is an agreed measure for
"bad cases are sufficiently rare". The one paper cited in this thread
proposed one, but I had trouble getting my head around it.
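Absent an agreed measure, one can at least poke at it empirically. A
throwaway Python sketch (mine -- the 3x threshold is arbitrary) that
estimates how often uniformly random pivots cost much more than an
n log(n) yardstick:

import random
from math import log2

def rq_comparisons(xs):
    """Comparisons used by quicksort with a uniformly random pivot
    (cost model: n-1 element-vs-pivot comparisons per partition)."""
    if len(xs) <= 1:
        return 0
    pivot = random.choice(xs)
    lo = [x for x in xs if x < pivot]
    hi = [x for x in xs if x > pivot]
    return len(xs) - 1 + rq_comparisons(lo) + rq_comparisons(hi)

n, trials = 1024, 1000
yardstick = n * log2(n)
costs = [rq_comparisons(list(range(n))) for _ in range(trials)]
bad = sum(c > 3 * yardstick for c in costs) / trials
print(f"mean {sum(costs) / trials:.0f}, yardstick {yardstick:.0f}, "
      f"P(cost > 3x yardstick) ~= {bad}")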

> It's OK to assume that your random choices follow a known
> distribution -- you can make them so -- but it can be a disaster to
> assume that the input array is anything but worst case.
>
> There are sorting algorithms with better worst case, but they are
> often not worth the trouble.
>
> None of this is *remotely* new.


No, but probabilistic measures of "almost always" O(f) are newer (if not
actually new). I'd be interested to hear from anyone who knows what the
recent thinking is on such things.

--
Ben.


