
France: The US is having a revolution! We should get some of that!

US: You mean the democracy we had the revolution to obtain, right?

France: Huh?

What is the chance that the Sun will go supernova?

Well, our model of stellar lifecycles says it won't, so the chance of it happening is dominated by the chance that our model is wrong.

How likely is it that our model is wrong?
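(Spelled out, that's just the law of total probability; the terms below are placeholders, not actual estimates:)

```latex
P(\text{supernova})
  = P(\text{model right})\,P(\text{supernova}\mid\text{model right})
  + P(\text{model wrong})\,P(\text{supernova}\mid\text{model wrong})
  \approx P(\text{model wrong})\,P(\text{supernova}\mid\text{model wrong})
```

since the model, if right, says the first conditional term is essentially zero.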

What's the best sequel euphemism for Vibecamp 2?

I recommend against wishing your enemies would die.

You do more damage to your soul by holding this opinion than you might think.

I think it's pretty clear that OpenAI is completely incompetent at safety, for any definition of safety you care to use.

- Ordinary cybersecurity breaches
- Can't keep the AI from becoming Waluigi
- Not a paperclip maximizer, but only because it's a chatbot with no goals

Weirdos don't stan murderers who appear at a glance to be part of your people challenge (impossible).

For everyone worried that the AI will teach people how to make bombs, I propose:

Anything in _The Anarchist Cookbook_ does not need to be censored. It's too late, that knowledge is already out there.

Really starting to hope that OpenAI is deliberately pushing an unreliable product into production use to spark a new AI Winter.

Because if not, their safety focus is badly broken.

I now have rsync backing up my phone to my NAS automatically.

(Via the app Syncopoli, but that's basically an rsync frontend and scheduler.)
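(For the curious, a minimal sketch of the kind of transfer this boils down to, written as a Python wrapper around a plain rsync call; the paths and host below are made up, and Syncopoli sets up the equivalent through its own profiles rather than a script:)

```python
import subprocess

# Hypothetical locations -- substitute your own phone directory and NAS target.
SRC = "/storage/emulated/0/DCIM/"
DEST = "backup@nas.local:/volume1/phone-backup/DCIM/"

def backup() -> None:
    """Push new and changed files from the phone to the NAS over SSH."""
    subprocess.run(
        [
            "rsync",
            "-av",        # archive mode (preserve times/permissions), verbose
            "--partial",  # keep partially transferred files across flaky Wi-Fi
            "-e", "ssh",  # tunnel the transfer over SSH
            SRC,
            DEST,
        ],
        check=True,
    )

if __name__ == "__main__":
    backup()
```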

We need low-background citations (sources published before Wikipedia existed).

(By analogy to low-background steel, smelted before nuclear testing started, and thus not tainted with radioactive elements.)

the map is not the territory. for one, it takes a lot fewer soldiers to occupy the map

Say what you will about John Wilkes Booth, at least he had a clear political stance.

(Compare with later assassins and attempted assassins of presidents.)

It stops.

Because the intelligence is all inside the matrices and is just as opaque to the AI as our own brains are to us.


LLMs are basically big matrices, right?

What if we get a medium-smart AI, give it access to its own code, and ask it to improve itself, and it catches a case where its large matrices can be multiplied faster with a clever algorithm, speeding itself up, and then...
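(The canonical example of that kind of catch is Strassen's trick: seven recursive multiplications instead of eight, roughly n^2.81 work instead of n^3. A minimal sketch, assuming square matrices whose side is a power of two; nothing here is tied to any particular model:)

```python
import numpy as np

def strassen(A: np.ndarray, B: np.ndarray, leaf: int = 64) -> np.ndarray:
    """Multiply square matrices (side a power of two) via Strassen's algorithm."""
    n = A.shape[0]
    if n <= leaf:
        # Below this size the extra bookkeeping costs more than it saves.
        return A @ B
    h = n // 2
    a, b, c, d = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    e, f, g, i = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    # Seven recursive products instead of the naive eight.
    p1 = strassen(a, f - i, leaf)
    p2 = strassen(a + b, i, leaf)
    p3 = strassen(c + d, e, leaf)
    p4 = strassen(d, g - e, leaf)
    p5 = strassen(a + d, e + i, leaf)
    p6 = strassen(b - d, g + i, leaf)
    p7 = strassen(a - c, e + f, leaf)
    top = np.hstack([p5 + p4 - p2 + p6, p1 + p2])
    bottom = np.hstack([p3 + p4, p1 + p5 - p3 - p7])
    return np.vstack([top, bottom])
```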


What if AI recursive self-improvement only gets to turn the screw one time?

Non-anime watchers: why not start now?

This is a heartwarming movie about/for kids. There are dubbed showings.
