
Topic: Interpreting normalized data as a bitstream
Replies: 1   Last Post: May 7, 2013 9:41 AM

 Matthew Posts: 46 Registered: 4/6/12
Interpreting normalized data as a bitstream
Posted: May 2, 2013 5:31 PM

When a binary bitstream (a sequence of 0s and 1s) is passed through a channel, it is often distorted. Upon receiving the transmitted signal, how could one re-interpret it as a binary vector? I suppose this is also known as demodulation or bitstream synthesis. I am aware of other sources of error, such as jitter, but to keep things simple I am not accounting for them.

For example, after transmission, the 15-bit vector v = [0 1 1 1 0 0 1 0 1 1 0 1 0 0 0] might be received as a vector of 15,000 data points (depending upon the sampling rate) that lie between y = 0 and y = 1 (or between -1 and 1). The original pattern of binary bits is discernible by plotting the received (distorted) vector: one can visually detect a peak that must be three 1s, followed by a valley that must be two 0s, and so on. Note that one cannot simply use the "round" function, since peaks or valleys may no longer lie close to 1 or 0, respectively (for instance, what should be a '1' might be distorted to y = 0.3 or y = -0.7). I have looked into using the first and second derivatives (regions of increase/decrease and concavity), but this is not foolproof. It seems like such a simple problem! Any suggestions? Thanks!
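One standard approach (not from the post itself, just a sketch of the usual technique) is "integrate-and-dump": if the number of samples per bit is known, average the samples within each bit period and compare each average to a data-driven threshold, such as the midpoint between the lowest and highest per-bit averages. Averaging suppresses noise, and deriving the threshold from the data itself tolerates the amplitude distortion described above (a '1' arriving at 0.3, say), which is exactly why a fixed `round` fails. The function name and the assumption of a known, constant samples-per-bit rate are my own for illustration:

```python
import numpy as np

def demodulate(received, samples_per_bit):
    """Recover a binary vector from a noisy, amplitude-distorted signal
    by integrate-and-dump with an adaptive threshold."""
    received = np.asarray(received, dtype=float)
    n_bits = len(received) // samples_per_bit
    # One row per bit period; averaging each row suppresses noise.
    periods = received[:n_bits * samples_per_bit].reshape(n_bits, samples_per_bit)
    levels = periods.mean(axis=1)
    # Adaptive threshold: midpoint of the observed levels, so it still
    # works when a '1' peaks at 0.3 instead of 1.0, or the signal is
    # shifted into the [-1, 1] range.
    threshold = (levels.min() + levels.max()) / 2.0
    return (levels > threshold).astype(int)
```

For the 15-bit example above sampled at 1000 points per bit, `demodulate(signal, 1000)` returns a length-15 binary vector. This assumes every bit occupies the same number of samples; with clock jitter (which the post explicitly sets aside), one would need symbol-timing recovery first.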
