Is there a general rule of thumb for how close the N's have to be in order to validly apply the t-test and correction algorithms you've specified?
For example, suppose I have twelve sets of observations over our standard length intervals for two different dicodon groups, e.g., S63 and S60.
Suppose further that for S60, the twelve N's resulting from a given type of run (say ln(c/u) on ln(c/L) for u > 1) range from 60 to 120 with an average of 90, whereas for S63, the twelve N's range from 180 to 360 with an average of about 270.
Can these two sets be compared by your t-test and correction algorithms as-is, or does some further adjustment/refinement have to be made because of the disparate sizes of the N's?
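For concreteness, here is a minimal sketch of the kind of comparison I have in mind. I'm assuming (perhaps wrongly) that the "correction" you mean is a Welch-style unequal-variance t-test, which, as I understand it, tolerates unequal N's by using the Welch–Satterthwaite degrees of freedom; the group names and numbers below are just placeholders mimicking the S60/S63 example above.

```python
import math
import random
from statistics import mean, variance

def welch_t(a, b):
    """Welch's unequal-variance t statistic and Welch-Satterthwaite df.

    This is my guess at the 'correction' under discussion -- an assumption,
    not a transcription of the algorithm you specified.
    """
    n1, n2 = len(a), len(b)
    v1, v2 = variance(a), variance(b)          # sample variances (n-1 denominator)
    se2 = v1 / n1 + v2 / n2                    # squared standard error of the difference
    t = (mean(a) - mean(b)) / math.sqrt(se2)
    df = se2 ** 2 / ((v1 / n1) ** 2 / (n1 - 1) + (v2 / n2) ** 2 / (n2 - 1))
    return t, df

# Hypothetical data mimicking the disparate N's: ~90 values for S60, ~270 for S63.
random.seed(0)
s60 = [random.gauss(0.50, 0.10) for _ in range(90)]
s63 = [random.gauss(0.52, 0.10) for _ in range(270)]
t_stat, df = welch_t(s60, s63)
```

If Welch's test is indeed what's intended, the unequal N's would show up only through the per-group variance terms and the resulting df, rather than requiring the N's to be close; but please correct me if the algorithms you specified make a stronger equal-N assumption.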
Thanks as always for any guidance/advice you can provide here. (As you can see, I'm trying to think of all the "gotcha" questions that may arise during your upcoming (and hopefully temporary) absence from this dialog.)