On Tue, 01 Aug 2000 22:18:42 +0200, Mok-Kong Shen <firstname.lastname@example.org> wrote, in part:
>Do I understand correctly that you want to transform a random bit
>sequence (almost uniformly distributed) into a sequence of Morse
>code symbols and study the optimal transformation?
I'm assuming I have a bit sequence that is precisely uniformly distributed, and my goal is to find the optimal transformation of that sequence into Morse code, within limits on the complexity of the transformation used.
>I admit that
>I haven't fully understood your article, hence the question: Do
>you keep the set of the original Morse code symbols or do you
>consider possible extensions, i.e. introducing more symbols?
I do keep the original Morse code symbols. I note that if I introduced more symbols, even symbols already defined in Morse code, such as those for the digits, this would result in changes to the transformation.
Normally, in constructing a Huffman code for the English alphabet, for example, one starts with probabilities for the symbols, and then uses those probabilities to construct codewords. Suppose instead one already has a fixed set of symbols of different lengths: can one go in reverse, and determine how to use those symbols optimally?
That is the question I address, and I find that I have to determine the potential efficiency of a code using those symbols before I can compute the probability each symbol should have to make the code optimal. This requires solving a polynomial equation: if the symbols have lengths l_1, ..., l_n, the optimal probability for symbol i is p_i = x^(-l_i), where x is the real root greater than 1 of

   x^(-l_1) + x^(-l_2) + ... + x^(-l_n) = 1

and log2(x) is the best achievable rate in bits per unit of length. This also means I have to take per-symbol overhead into account, by adding it to each symbol's length (as opposed to per-bandwidth-unit or per-bit overhead, which, being constant, can be ignored). Then I can construct a Huffman code based on those probabilities in the normal way, although I use it in reverse, to express an input in bits as an output in symbols of differing probability.
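To make that concrete, here is a rough Python sketch of the whole computation. It is only an illustration: the standard Morse timings (dot = 1 unit, dash = 3 units, a 1-unit gap between the elements of a letter), the choice of the 3-unit letter gap as the per-symbol overhead, and all the names in it are my own assumptions here.

import heapq
import itertools
import math

MORSE = {
    'A': '.-',   'B': '-...', 'C': '-.-.', 'D': '-..',  'E': '.',
    'F': '..-.', 'G': '--.',  'H': '....', 'I': '..',   'J': '.---',
    'K': '-.-',  'L': '.-..', 'M': '--',   'N': '-.',   'O': '---',
    'P': '.--.', 'Q': '--.-', 'R': '.-.',  'S': '...',  'T': '-',
    'U': '..-',  'V': '...-', 'W': '.--',  'X': '-..-', 'Y': '-.--',
    'Z': '--..',
}

def cost(code, letter_gap=3):
    """Duration of one letter in time units, overhead included."""
    elements = sum(1 if e == '.' else 3 for e in code)
    gaps = len(code) - 1              # 1-unit gaps inside the letter
    return elements + gaps + letter_gap

def solve_root(costs, iters=200):
    """Find x > 1 with sum(x**-c) == 1, by bisection."""
    f = lambda x: sum(x ** -c for c in costs) - 1.0
    lo, hi = 1.0 + 1e-9, 2.0
    while f(hi) > 0:                  # f decreases in x; widen bracket
        hi *= 2
    for _ in range(iters):
        mid = (lo + hi) / 2
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

costs = {sym: cost(code) for sym, code in MORSE.items()}
x = solve_root(costs.values())
probs = {sym: x ** -c for sym, c in costs.items()}   # optimal p_i

def huffman(probs):
    """Huffman code over the target probabilities."""
    tick = itertools.count()          # tie-breaker for equal weights
    heap = [(p, next(tick), sym) for sym, p in probs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        p1, _, a = heapq.heappop(heap)
        p2, _, b = heapq.heappop(heap)
        heapq.heappush(heap, (p1 + p2, next(tick), (a, b)))
    codes = {}
    def walk(node, prefix):
        if isinstance(node, tuple):   # internal node: recurse
            walk(node[0], prefix + '0')
            walk(node[1], prefix + '1')
        else:                         # leaf: record the codeword
            codes[node] = prefix
    walk(heap[0][2], '')
    return codes

codes = huffman(probs)

def bits_to_morse(bits, codes):
    """Use the Huffman code in reverse: consume uniform input bits,
    emitting a letter whenever a complete codeword has been read.
    Any leftover bits in buf are an incomplete final codeword."""
    inverse = {w: sym for sym, w in codes.items()}
    out, buf = [], ''
    for b in bits:
        buf += b
        if buf in inverse:
            out.append(inverse[buf])
            buf = ''
    return ''.join(out)

print(round(math.log2(x), 4), "bits per time unit")
print(bits_to_morse('01100101110100', codes))

Because a Huffman code is prefix-free, reading it in reverse this way is unambiguous: the input bits are consumed greedily, and each letter comes out with probability 2^(-length of its codeword), which approximates the optimal x^(-l_i).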
I just don't remember seeing this kind of thing done before, so it seems a useful illustration of the mathematics involved in finding optimal codes of an unusual kind.