Classical ("discrete") error correction is like: assume every bit you send has an independent probability p of being flipped. Figure out ways of encoding/decoding a message of n bits using n+k bits that minimizes the probability of an error. Where can I read about the ℝ-analog?

I'm thinking something like: A message is a tuple of real numbers. Each number gets an independent noise term of variance σ² added to it. Figure out encoding/decoding that minimizes the expected squared distance between the intended message and the decoded message.
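Spelled out, here's one formalization of that setup (the Gaussian form of the noise is my assumption; the post only fixes its variance):

```latex
% Encode m in R^n as a longer real vector, add i.i.d. noise to each
% coordinate, then decode; judge the scheme by mean squared error.
% The Gaussian noise distribution is an assumption, not given above.
x = E(m) \in \mathbb{R}^{n+k}, \qquad
y_i = x_i + z_i, \qquad z_i \sim \mathcal{N}(0, \sigma^2)\ \text{i.i.d.}
```

and the goal is to choose the encoder E and decoder D to minimize the expected squared error between D(y) and m.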

For example, you can send each number twice and average the two received copies - this halves the expected squared distance. But you can probably do better with a cleverer encoding scheme.
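Quick sanity check of that factor of two (a throwaway simulation; the Gaussian noise and all variable names are my own choices):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 1.0           # noise standard deviation (variance sigma**2)
trials = 100_000

# Random scalar messages; any fixed message behaves the same way.
message = rng.normal(size=trials)

# Send once: the receiver sees message + noise.
one_copy = message + rng.normal(scale=sigma, size=trials)

# Send twice and decode by averaging the two noisy copies.
two_copies = message + rng.normal(scale=sigma, size=(2, trials))
decoded = two_copies.mean(axis=0)

print("MSE, sent once :", np.mean((one_copy - message) ** 2))  # ~ sigma**2
print("MSE, duplicated:", np.mean((decoded - message) ** 2))   # ~ sigma**2 / 2
```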

Shannon basically solved the discrete problem: at any rate below the channel capacity, the probability of error can be driven arbitrarily close to zero with long enough codes, so the overhead k/n doesn't have to grow - for discrete signals, error correction is in that sense a solved problem!

I wonder if this is true for analog signals too?

@ayegill there is some stuff about this in Shannon's paper (A Mathematical Theory of Communication) - see Part IV.
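For anyone following along: Part IV of that paper treats the continuous channel, and, if I recall it correctly, Theorem 17 there gives the capacity of a band-limited channel under white Gaussian noise with an average power constraint:

```latex
% Shannon (1948), Theorem 17 (from memory): capacity in bits per second of
% a channel of band W, average signal power P, white Gaussian noise power N.
C = W \log_2 \frac{P + N}{N}
```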
