Given two n-bit computer words whose bits are not perfectly random (i.e. the entropy of each is less than n bits), I would like to find, experimentally, ways of combining them into a single n-bit word that is better in the sense of having higher entropy. What would be a reasonable, practically easy-to-compute measure of the goodness of a procedure that mixes such pairs of words?
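For concreteness, here are two hypothetical candidate combiners one might test (the multiplier constant is just Knuth's multiplicative-hash constant, chosen as an illustration, not a recommendation):

```python
def mix_xor(x, y):
    # Simplest combiner: bitwise XOR. Linear in the bits, so each
    # input bit affects exactly one output bit (poor diffusion).
    return x ^ y

def mix_mul(x, y, n=32):
    # Multiply-and-add modulo 2**n, using Knuth's constant 0x9E3779B1.
    # Nonlinear in the bits, so a flipped input bit can propagate
    # through the carries into many output bits.
    return (x * 0x9E3779B1 + y) & ((1 << n) - 1)
```

Any measure of "goodness" should be able to distinguish the first from the second.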
Perhaps I should say what I have thought up about the issue so far: let z = f(x,y) mod 2**n be the result of mixing x and y. Flip one bit of x to obtain xr, compute zr = f(xr,y), and let c = z xor zr. Then bc = bitcount(c) gives the "avalanche" for that bit, i.e. the number of bits of z that change as a consequence of flipping that bit of x. A good f() should therefore yield a mean, taken over many test pairs (x,y), of the per-pair average (over all n single-bit flips of x) of bc that is close to n/2, together with a small standard deviation. Could this reasoning of mine be correct?
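The measurement described above can be sketched as follows; `mix` is a hypothetical stand-in for f(), and the multiplier constant is only an illustrative choice:

```python
import statistics

def avalanche_stats(f, x, y, n):
    """Flip each of the n bits of x in turn and record how many bits
    of z = f(x, y) change, i.e. bc = bitcount(z xor zr) per flip.
    Returns (mean, population standard deviation) of those counts."""
    mask = (1 << n) - 1
    z = f(x, y) & mask
    counts = []
    for i in range(n):
        xr = x ^ (1 << i)            # flip bit i of x
        zr = f(xr, y) & mask
        counts.append(bin(z ^ zr).count("1"))  # bitcount of the xor difference
    return statistics.mean(counts), statistics.pstdev(counts)

# Hypothetical mixing function: multiply-and-add mod 2**n.
def mix(x, y):
    return x * 0x9E3779B1 + y        # reduced mod 2**n by the mask above

mean_bc, sd_bc = avalanche_stats(mix, 0x12345678, 0x9ABCDEF0, 32)
```

In a real experiment one would average these statistics over many random (x,y) pairs, and repeat the procedure with y's bits flipped as well, since a good combiner should diffuse changes from either input.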