


Topic: My sorting algorithm that counts in values in arrays, JTsort or Countsort
Replies: 18   Last Post: Jan 9, 2013 7:57 PM

 JT Posts: 1,448 Registered: 4/7/12
Re: My sorting algorithm that counts in values in arrays, JTsort or Countsort
Posted: Jan 9, 2013 12:00 AM

On 9 Jan, 05:38, forbisga...@gmail.com wrote:
> On Tuesday, January 8, 2013 5:39:43 PM UTC-8, JT wrote:
> > Did I say the pointer approach was implemented in '97 to solve *A HARD
> > PROBLEM*?

>
> You passed off your algorithm as something new and generic.
>
> Associative memory is very expensive.
> If the hidden function dval converts an arbitrary string to
> an ordered weight more quickly than a comparison of the keys,
> and the range of keys is dense within the set to be sorted,
> then it could beat a quicksort.  This is like saying a bubble
> sort will beat a quicksort on an ordered set.  Consider
> this:
>
> In the mid '70s I worked for a thrift clearing house.  We
> handled about 40 savings and loans and thrift institutions'
> banking needs in the northwest USA.  We had a hashing algorithm
> for each account number that indicated which sector of disk
> held an account's records.  The slow disk spun at 20 milliseconds
> per rotation and we had a head per track, so average latency was
> 10 milliseconds.  The fast disk held 4 times as much data and
> spun 4 times as fast, so average latency was 2.5 milliseconds.
> We didn't need to do an index search to get the account record.
> When the clerk poked the button on the machine it sent a signal
> to our computer, we calculated the address, got the record, added
> to or subtracted from the balance, put the record back, and sent
> the update to the machine so it would update the passbook.  Since
> the accounts were dense, that is assigned sequentially with no gaps,
> we could easily read them sequentially.  If we were asked to report
> in account balance order we would have had trouble since we didn't
> have computer memory or disk memory sufficient for such a task so
> would have to do a tape sort.  Yes, we were keeping two sets of
> transaction logs on tape and maintaining institution totals on disk
> but that doesn't really matter.  This is the kind of problem your
> algorithm addresses, except you are just counting occurrences and
> you only gave the calculated dval rather than the keys converted
> to them.  You chose a set with exactly 256 (255?) keys, most
> occurring more than once.  Your specific claim was that it might
> be slower for fewer than 2000 items.  You didn't claim the dataset
> needed to be dense, nor did you explain your weighting function
> and that it needed to be tuned so the keys were dense.  Instead
> you claimed no comparison was needed to do the sort.  Now I ask
> you: how did you build your weights without comparing the keys?
>
> So, suppose I want to count instances of the words in the set
> {cat, dog, mouse, monkey} and report them back out as a list
> in alphanumeric order as you did and I want to do so as fast as
> I can.  OK, I create an array I can index by the small int values
> 't', 'g', 'u', and 'n'.  I don't care about any entries other than
> these four, so I don't initialize the array other than for these four
> entries, and assign the index=value pairs 't'=1, 'g'=2, 'n'=3, 'u'=4.
> Sure, this table is sparse, but who cares; it's only going to be used
> to assign the weight.  I don't need to look at the rest of the key
> when assigning the weight because I know it will be one of these
> four and I can distinguish them by the 3rd character.  So sure, I
> can beat quicksort because I don't have to compare "mo" to get to
> 'n' and 'u', and I'm not exchanging records.  I wouldn't really
> call what I'm doing "sorting".  As it turns out, I would be faster
> just to initialize the 4 values and count in the sparse table, then
> report back out using the ordered set of indexes {'t','g','n','u'}.
> As I said you can hash the key to build the index for the count.
> As long as you know the range of keys prior to the "sort" you can
> make the hash good enough to prevent multiple keys hashing to the
> same index.  All of this would need to be finely tuned to the datasets
> encountered.  It isn't generic.

If the array approach has been posted to sci.math since at least 2003, it
cannot be new. What is new is that I am telling you I already implemented a
dynamic recursive algorithm using pointers in '97. I never posted it here;
I would never have gotten any credit for it anyway, nor for breaking RSA,
only an orange suit and a trip abroad.
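The '97 pointer implementation was never posted, so any reconstruction is a guess. One plausible reading of "a dynamic recursive algorithm using pointers" for comparison-free counting is a digit trie: counts are stored at the leaves and the structure is walked recursively in digit order, so keys come out sorted without ever comparing one key to another. The sketch below is purely hypothetical; every name in it is an assumption, not JT's code.

```python
# Hypothetical sketch: a trie keyed on decimal digits, one pointer per
# digit, counting occurrences of integer keys. Walking children 0..9
# recursively emits keys in sorted order with no key-to-key comparison.

class Node:
    def __init__(self):
        self.children = [None] * 10  # one pointer per decimal digit
        self.count = 0               # occurrences of the key ending here

def insert(root, key):
    # Fixed-width digit strings make lexicographic order match numeric order.
    for d in f"{key:010d}":
        i = int(d)
        if root.children[i] is None:
            root.children[i] = Node()
        root = root.children[i]
    root.count += 1

def walk(node, path, out):
    # Recursive in-order traversal of the trie.
    if node.count:
        out.extend([int(path)] * node.count)
    for i in range(10):
        if node.children[i] is not None:
            walk(node.children[i], path + str(i), out)

root = Node()
for k in [42, 7, 42, 1000, 7, 7]:
    insert(root, k)
result = []
walk(root, "", result)
print(result)  # -> [7, 7, 7, 42, 42, 1000]
```

Note that forbisgaryg's objection still applies to this sketch: the digit positions play the role of the weighting function, so the "no comparisons" claim trades key comparisons for per-digit indexing work.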
