
Alice: I found a link to something called "Sneer Club" that shows how cringe this guy is.

Bob: If you find yourself fighting by checking Sneer Club for cringe, you're NGMI.

If your case rests on a Novel Interpretation of the law, you're gonna have a bad time.

Me: Boy, I'd like to test that new browser API. Let me spin up a small project on localhost.

WHATWG: Sorry, https only.

The claim that Yudkowsky is calling for terrorism rests on the assumption that the only people who will listen to him are loons with no political power, so that his call for government action is actually a call for extra-governmental action.

What can you rely on GPT-4 for that you couldn't rely on GPT-3 for?

My wife says that any time someone proposes doing anything with an ML model, you should replace “AI” in the proposal with “trained weasels” and if it still sounds like a good idea you can go ahead with it.

Anakin: I have an exotic decision theory that lets me one-box on Newcomb's Problem.

Padme: But allows you to make normal decisions otherwise, right?

Anakin:

Padme: But allows you to make normal decisions otherwise, right?

Psst.

You can't actually unfollow people anymore.

I mean you can, but they still show up.

(Yeah, I'm not on "For you".)

I was trying to move certain accounts to a list and not my main timeline, but they keep appearing. Not RTs.

France: The US is having a revolution! We should get some of that!

US: You mean the democracy we had the revolution to obtain, right?

France: Huh?

What is the chance that the Sun will supernova?

Well, our model of stellar lifecycles says it won't, so the chance of it happening is dominated by the chance that our model is wrong.

How likely is it that our model is wrong?

What's the best sequel euphemism for Vibecamp 2?

I recommend against wishing your enemies would die.

You do more damage to your soul by holding on to that wish than you might think.

I think it's pretty clear that OpenAI is completely incompetent at safety, for any kind of safety you care to name.

- Ordinary cybersecurity breaches
- Can't keep the AI from becoming Waluigi
- Not a paperclip maximizer, only because it's a chatbot with no goals

Weirdos don't stan murderers who appear at a glance to be part of your people challenge (impossible).

For everyone worried that the AI will teach people how to make bombs, I propose:

Anything in _The Anarchist's Cookbook_ does not need to be censored. It's too late, that knowledge is already out there.

Really starting to hope that OpenAI is deliberately pushing an unreliable product into production use to spark a new AI Winter.

Because if not, their safety focus is badly broken.

I now have rsync backing up my phone to my NAS automatically.

(Via the app syncopoli, but that's basically an rsync frontend and scheduler.)
