
@domenic hmm.

Maybe I should do more testing. I was under the impression that http on localhost wouldn't get me, say, a service worker.

I'm pretty sure there are APIs that are blocked on file: URLs for security reasons.

If your case rests on a Novel Interpretation of the law, you're gonna have a bad time.

Me: Boy, I'd like to test that new browser API. Let me spin up a small project on localhost.

WHATWG: Sorry, https only.

The assumption that Yudkowsky is calling for terrorism is predicated on the assumption that the only people who will listen to him are loons with no political power, thus his call for government action is actually a call for extra-governmental action.

What can you rely on GPT-4 for that you couldn't rely on GPT-3 for?

Anakin: I have an exotic decision theory that lets me one-box on Newcomb's Problem.

Padme: But allows you to make normal decisions otherwise, right?

Anakin:

Padme: But allows you to make normal decisions otherwise, right?

Psst.

You can't actually unfollow people anymore.

I mean you can, but they still show up.

(Yeah, I'm not on "For you".)

I was trying to move certain accounts to a list and not my main timeline, but they keep appearing. Not RTs.

France: The US is having a revolution! We should get some of that!

US: You mean the democracy we had the revolution to obtain, right?

France: Huh?

@k4r1m an Intuition Pump is a thought experiment that helps you to understand similar situations.

So, the likelihood of the sun supernovaing is tiny, but if that happens it means I was seriously wrong about how the universe works.

I also think LLMs will not become AGIs. If one does, it means my model of how intelligence works is seriously wrong.

It would be nice if I could discover that in a non-catastrophic way.

@k4r1m that's what I think too.

I'm looking for an intuition pump on how to reason about things when most of the weight leans on the model being right or wrong, not the specific facts of the matter.

What is the chance that the Sun will supernova?

Well, our model of stellar lifecycles says it won't, so the chance of it happening is dominated by the chance that our model is wrong.

How likely is it that our model is wrong?
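The reasoning above is just the law of total probability: the headline chance is the weighted sum over "model right" and "model wrong", and the first term is zero, so the model-wrong term dominates. A minimal sketch, with all numbers purely illustrative assumptions (not measured values):

```python
# Total-probability decomposition of "will the Sun supernova?".
# Every number here is an illustrative assumption for the sketch.

p_model_wrong = 1e-6           # assumed chance our stellar-lifecycle model is badly wrong
p_supernova_if_right = 0.0     # the model says a sun-like star cannot supernova
p_supernova_if_wrong = 0.5     # ignorance prior if the model fails entirely

p_supernova = (
    (1 - p_model_wrong) * p_supernova_if_right
    + p_model_wrong * p_supernova_if_wrong
)

print(p_supernova)  # 5e-07 — the answer is set entirely by the model-wrong term
```

The point of the sketch: once `p_supernova_if_right` is zero, the specific facts about the Sun stop mattering and the estimate is just `p_model_wrong` times your ignorance prior.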

@jec please feel free to substitute <the Platonic form of the person you are becoming> if you're not already using that definition for "soul".

What's the best sequel euphemism for Vibecamp 2?

I recommend against wishing your enemies would die.

You do more damage to your soul by holding this opinion than you might think.

I think it's pretty clear that OpenAI is completely incompetent at safety, for any kind of safety you care to name.

- Ordinary cybersecurity breaches
- Can't keep the AI from becoming waluigi
- Not a paperclip maximizer, only because it's a chatbot with no goals

Weirdos don't Stan murderers who appear at a glance to be part of your people challenge (impossible).

For everyone worried that the AI will teach people how to make bombs, I propose:

Anything in _The Anarchist Cookbook_ does not need to be censored. It's too late, that knowledge is already out there.
