
Psst.

You can't actually unfollow people anymore.

I mean you can, but they still show up.

(Yeah, I'm not on "For you".)

I was trying to move certain accounts to a list instead of my main timeline, but they keep appearing. And they're not RTs.

France: The US is having a revolution! We should get some of that!

US: You mean the democracy we had the revolution to obtain, right?

France: Huh?

@k4r1m an Intuition Pump is a thought experiment that helps you to understand similar situations.

So, the likelihood of the Sun going supernova is tiny, but if that happens it means I was seriously wrong about how the universe works.

I also think LLMs will not become AGIs. If one does, it means my model of how intelligence works is seriously wrong.

It would be nice if I could discover that in a non-catastrophic way.

@k4r1m that's what I think too.

I'm looking for an intuition pump on how to reason about things when most of the weight leans on the model being right or wrong, not the specific facts of the matter.

What is the chance that the Sun will supernova?

Well, our model of stellar lifecycles says it won't, so the chance of it happening is dominated by the chance that our model is wrong.

How likely is it that our model is wrong?
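Here's a rough back-of-the-envelope sketch of that "dominated by the model being wrong" idea in Python; every probability below is a made-up placeholder, not a number I'm defending.

```python
# When the model rules the event out, the total probability collapses
# to the "model is wrong" branch of the sum.

p_model_wrong = 1e-6           # placeholder: chance stellar-lifecycle theory is badly wrong
p_event_if_model_right = 0.0   # the model says the Sun simply can't supernova
p_event_if_model_wrong = 0.1   # placeholder: if the model is wrong, who knows

p_event = ((1 - p_model_wrong) * p_event_if_model_right
           + p_model_wrong * p_event_if_model_wrong)

print(p_event)  # ~1e-7, carried entirely by the model-is-wrong branch
```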

@jec please feel free to substitute <the Platonic form of the person you are becoming> if you're not already using that definition for "soul".

What's the best sequel euphemism for Vibecamp 2?

I recommend against wishing your enemies would die.

You do more damage to your soul by holding this opinion than you might think.

I think it's pretty clear that OpenAI is completely incompetent at safety, under any definition of safety you care to use:

- Ordinary cybersecurity breaches
- Can't keep the AI from becoming Waluigi
- Not a paperclip maximizer only because it's a chatbot with no goals

Weirdos don't stan murderers who appear at a glance to be part of your people challenge (impossible).

For everyone worried that the AI will teach people how to make bombs, I propose:

Anything in _The Anarchist Cookbook_ does not need to be censored. It's too late; that knowledge is already out there.

Really starting to hope that OpenAI is deliberately pushing an unreliable product into production use to spark a new AI Winter.

Because if not, their safety focus is badly broken.

I now have rsync backing up my phone to my NAS automatically.

(Via the app Syncopoli, but that's basically an rsync frontend and scheduler.)
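For the curious, this is roughly the kind of rsync call such a frontend issues on a schedule; I haven't checked Syncopoli's exact flags, and the host, user, and paths below are hypothetical placeholders.

```python
# Roughly the rsync invocation an rsync frontend/scheduler would issue.
# Host, user, and paths are hypothetical placeholders, not my real setup.
import subprocess

subprocess.run(
    [
        "rsync",
        "-av",        # archive mode (permissions, timestamps), verbose output
        "--delete",   # mirror deletions so the backup matches the phone
        "-e", "ssh",  # push over SSH to the NAS
        "/sdcard/DCIM/",                                 # source: photos on the phone
        "backup@nas.local:/volume1/phone-backup/DCIM/",  # destination on the NAS
    ],
    check=True,  # raise if rsync exits non-zero
)
```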

We need low-background citations (sources published before Wikipedia existed).

(By analogy to low-background steel, smelted before nuclear testing started, and thus not tainted with radioactive elements.)
