Inspired by this really interesting post
Shannon basically solved the discrete problem: he proved that you don't need a very large block length k to get a very small probability of error. For discrete signals, error correction is basically a solved problem!
I wonder if this is true for analog signals too?
For example, you can send the message twice and take the average of the two noisy copies - this halves the expected squared error. But you can probably do better with a cleverer encoding scheme.
I'm thinking something like: a message is a tuple of real numbers. Each number gets an independent noise term of variance σ added to it. Figure out an encoding/decoding scheme that minimizes the expected squared distance between the intended message and the decoded message.
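The baseline repetition scheme above is easy to check numerically. Here's a minimal Monte Carlo sketch (function names and parameters are my own, not from the post) showing that averaging k noisy copies divides the squared error by k, so k = 2 halves it:

```python
import random

def transmit(x, sigma):
    """One channel use: the number x arrives with additive Gaussian noise."""
    return x + random.gauss(0.0, sigma)

def decode_repetition(x, sigma, k):
    """Repetition code: send x over k channel uses, decode by averaging."""
    return sum(transmit(x, sigma) for _ in range(k)) / k

def mse(scheme, trials=200_000):
    """Monte Carlo estimate of the expected squared error of a scheme."""
    return sum(scheme() ** 2 for _ in range(trials)) / trials

random.seed(0)
sigma = 1.0
single = mse(lambda: decode_repetition(0.0, sigma, 1))  # roughly sigma^2
double = mse(lambda: decode_repetition(0.0, sigma, 2))  # roughly sigma^2 / 2
```

Of course, the repetition code uses twice as many channel uses, so the interesting question is whether some non-trivial encoding beats it at the same cost.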
It's the first time I've been skiing since I grew a beard - definitely an interesting experience 😂
Applied algebraic abstractologist. Trying to get the heavens into my head.
"Love - and enrich with dream - all that was great!
Go toward the unknown, wrest from it its answer!"