In a little while, LLMs may enable "mute this discourse" functionality where you describe the particular bullshit you don't want to see and it filters it out far more precisely than muting by keywords or whatever.

There is still alpha in doing weird things and writing down what happens

Getting into fights about libertarianism online when you should be focused on getting good sleep. Bro, you're worried about the wrong nap.

So voice assistants are about to become really good, right?

We must imagine Sisyphus unbothered. Moisturized. In his lane. Focused. Flourishing.

@niplav Yeah, if we end up with something like diamondoid, they probably won't be very vulnerable to biological attackers

I wonder if creating nanomachines on Earth is going to be made even more difficult because the environment is already absolutely lousy with hostile autonomous nanorobots (bacteria)

Happy new year everyone!

May you succeed in 2023 where you failed in 2022!

@jonmsterling This looks incredibly cool! Is this tool available for other people to use?

Shannon essentially solved the discrete problem, and proved that you don't need very large k to get a very small probability of error - for discrete signals, error correction is basically a solved problem!

I wonder if this is true for analog signals too?
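
For reference, the discrete result being invoked here is (roughly) Shannon's noisy-channel coding theorem; the statement below is a standard textbook formulation, not something taken from the thread itself:

```latex
% Noisy-channel coding theorem (informal): for a discrete memoryless channel
% with capacity C, every rate below C is achievable with arbitrarily small error.
\[
\forall R < C,\ \forall \varepsilon > 0,\ \exists n_0 \ \text{s.t.}\ \forall n \ge n_0:
\ \text{there is a block code of length } n \text{ and rate} \ge R \text{ with } P_e \le \varepsilon .
\]
% Moreover, P_e can be made to decay exponentially in the block length,
% which is why moderate block lengths already give very small error probabilities.
```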

For example, you can duplicate the message and take the average - this halves the expected squared distance. But you can probably do better with a more clever encoding scheme.
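
A quick sanity check of the "halves the expected squared distance" claim, as a sketch rather than anything from the original post: the snippet below assumes the noise is zero-mean Gaussian with variance σ per transmitted number and compares sending once against sending twice and averaging. All names and the choice of NumPy are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 1.0          # noise variance per transmitted number (assumed Gaussian)
n_trials = 100_000   # Monte Carlo trials
message = rng.normal(size=(n_trials, 4))  # random 4-dimensional real messages

def send(x):
    """Transmit x once: add independent zero-mean noise of variance sigma to each number."""
    return x + rng.normal(scale=np.sqrt(sigma), size=x.shape)

# Baseline: send the message once and decode it as-is.
mse_once = np.mean(np.sum((send(message) - message) ** 2, axis=1))

# Repetition scheme: send the message twice and decode by averaging the two copies.
decoded = (send(message) + send(message)) / 2
mse_twice = np.mean(np.sum((decoded - message) ** 2, axis=1))

print(f"single send MSE:         {mse_once:.3f}")   # ~ 4 * sigma
print(f"duplicate + average MSE: {mse_twice:.3f}")  # ~ 2 * sigma, i.e. about half
```

More generally, averaging n independent copies cuts the per-coordinate error variance to σ/n, at the cost of n times the bandwidth.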

I'm thinking something like: a message is a tuple of real numbers. Each number gets an independent noise term of variance σ added to it. Figure out an encoding/decoding scheme that minimizes the expected squared distance between the intended message and the decoded message.
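
Written as a formula (my paraphrase of the setup above, with the noise additionally assumed zero-mean and independent across transmitted numbers): for a message m ∈ ℝ^k, an encoder E, a decoder D, and per-coordinate noise ε, the problem is

```latex
\[
\min_{E,\,D} \ \mathbb{E}\!\left[ \big\lVert m - D\big(E(m) + \varepsilon\big) \big\rVert_2^2 \right],
\qquad \varepsilon_i \ \text{independent, mean } 0, \text{ variance } \sigma .
\]
```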
