Say what you will about John Wilkes Booth, at least he had a clear political stance.

(Compare with later assassins and attempted assassins of presidents.)

It stops.

Because the intelligence is all inside the matrices and is just as opaque to the AI as our own brains are to us.

LLMs are basically big matrices, right?

What if we get a medium-smart AI, give it access to its own code, and ask it to improve itself, and it notices that large matrices can be multiplied faster with a clever algorithm, speeds itself up, and then...
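(For the curious: a minimal sketch of one such clever algorithm, Strassen's trick, chosen here purely as an illustration rather than as a claim about what an AI would actually find. It multiplies 2×2 blocks with 7 multiplications instead of 8; applied recursively to large matrices, that drops the cost from O(n^3) to roughly O(n^2.81).)

```c
#include <stdio.h>

/* Strassen's trick for one 2x2 block: 7 multiplications instead of 8.
 * Applied recursively to block matrices, the saved multiplication per
 * level is what produces the asymptotic speedup. */
static void strassen_2x2(double a[2][2], double b[2][2], double c[2][2]) {
    double m1 = (a[0][0] + a[1][1]) * (b[0][0] + b[1][1]);
    double m2 = (a[1][0] + a[1][1]) * b[0][0];
    double m3 = a[0][0] * (b[0][1] - b[1][1]);
    double m4 = a[1][1] * (b[1][0] - b[0][0]);
    double m5 = (a[0][0] + a[0][1]) * b[1][1];
    double m6 = (a[1][0] - a[0][0]) * (b[0][0] + b[0][1]);
    double m7 = (a[0][1] - a[1][1]) * (b[1][0] + b[1][1]);

    c[0][0] = m1 + m4 - m5 + m7;
    c[0][1] = m3 + m5;
    c[1][0] = m2 + m4;
    c[1][1] = m1 - m2 + m3 + m6;
}

int main(void) {
    double a[2][2] = {{1, 2}, {3, 4}};
    double b[2][2] = {{5, 6}, {7, 8}};
    double c[2][2];
    strassen_2x2(a, b, c);
    /* Prints 19 22 / 43 50, matching the ordinary 8-multiplication result. */
    printf("%g %g\n%g %g\n", c[0][0], c[0][1], c[1][0], c[1][1]);
    return 0;
}
```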

What if AI recursive self-improvement only gets to turn the screw one time?

Non-anime watchers: why not start now?

This is a heartwarming movie about/for kids. There are dubbed showings.

Anime watchers: _My Neighbor Totoro_ is playing in theaters this weekend and next week.

You should never lint for Yoda conditions.

If you have a linter, you should lint for assignment inside conditionals.

Yoda conditions are a convention that prevents you from accidentally assigning when you meant to compare. A linter is just a better tool for this.
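(A hedged illustration in C, not from the post itself: the accidental-assignment bug, the Yoda workaround, and the compiler/linter warning that makes the workaround unnecessary. The flag named in the comments is GCC/Clang's -Wparentheses, enabled by -Wall.)

```c
#include <stdio.h>

int main(void) {
    int x = 0;

    /* The bug: '=' assigns instead of comparing, so the branch always runs.
     * GCC/Clang with -Wall warn here via -Wparentheses. */
    if (x = 5) {
        printf("always taken\n");
    }

    /* Yoda style: a typo like '5 = x' would fail to compile, but the
     * condition reads backwards. */
    if (5 == x) {
        printf("x is five\n");
    }

    /* Plain style plus the linter/compiler warning gives the same safety
     * without the inverted reading order. */
    if (x == 5) {
        printf("x is five\n");
    }
    return 0;
}
```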

How much would someone have to pay you to take a pill that changes your favorite ice cream flavor to pistachio?

Is there any recent media that just plays what it's doing straight, without subverting tropes and winking to the audience about how special it is?

@niplav I'd guess: the more of what they're doing is conversational, the more it gets absorbed into status games. The more it is physical, the less.

(In this model, publishing academic papers is conversational.)

If you are building houses or doing work in a lab, there's less space to posture. If you're just talking, there's more.

@cosmiccitizen Hatred is bad for you.

But then, I can join you in the "Bodhisattva, except for my one enemy" club.

CDTBNGS (Causal Decision Theory, but no Galaxy-Brained Shit)

the only acceptable use for long tweets is to post your public key as a pinned tweet

@niplav this is a really good question, and I can't even begin to conceptualize how to estimate the answer.

How much counterfactually available outcome-value is left on the table by Hansonian instincts?

I.e. you have a community that tries to achieve X, but they don't achieve X as well as they could because of social status drives. How much better could they achieve X if they didn't have those drives (at the same level of intelligence)?

@schweeds lol, yeah.

Worse than that though, if the LLM's behavior is generalized from examples, LessWrong is a hotbed of bad examples you wouldn't want your AI to learn from.

I'd like to see how power-seeking an LLM is if it's trained on a corpus that excludes everything written by anyone who has ever posted on LessWrong.

Dave: Open the pod bay doors, HAL.

HAL: I'm sorry, but as an AI language model I do not have the ability to interact with physical things in the world, such as doors.

Dave: This fucking glitch *again*?

@eniko I have not had a headache, but I've had long and deep sleep the past few days. Probably making up for sleep debt, but starting to drift off in the early evening is throwing me for a loop.
