
We need low-background citations (sources published before Wikipedia existed).

(By analogy to low-background steel, smelted before nuclear testing started, and thus not tainted with radioactive elements.)

The map is not the territory. For one, it takes a lot fewer soldiers to occupy the map.

Say what you will about John Wilkes Booth, at least he had a clear political stance.

(Compare with later assassins and attempted assassins of presidents.)

It stops.

Because the intelligence is all inside the matrices and is just as opaque to the AI as our own brains are to us.


LLMs are basically big matrices, right?

What if we get a medium-smart AI, give it access to its own code, and ask it to improve itself, and it finds a case where large matrices can be multiplied faster with a clever algorithm, making itself faster, and then...
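(For the curious: one real "clever algorithm" of this kind is Strassen's, which multiplies 2x2 blocks with 7 multiplications instead of 8 and, applied recursively, runs in about O(n^2.81) instead of O(n^3). A minimal sketch of the 2x2 step in C; the function name and test values are mine:)

#include <stdio.h>

/* Strassen's trick on a 2x2 matrix: 7 multiplications instead of 8.
 * Applied recursively to n/2 x n/2 blocks, this beats the naive
 * O(n^3) algorithm. */
void strassen_2x2(const double a[2][2], const double b[2][2], double c[2][2])
{
    double m1 = (a[0][0] + a[1][1]) * (b[0][0] + b[1][1]);
    double m2 = (a[1][0] + a[1][1]) * b[0][0];
    double m3 = a[0][0] * (b[0][1] - b[1][1]);
    double m4 = a[1][1] * (b[1][0] - b[0][0]);
    double m5 = (a[0][0] + a[0][1]) * b[1][1];
    double m6 = (a[1][0] - a[0][0]) * (b[0][0] + b[0][1]);
    double m7 = (a[0][1] - a[1][1]) * (b[1][0] + b[1][1]);

    c[0][0] = m1 + m4 - m5 + m7;
    c[0][1] = m3 + m5;
    c[1][0] = m2 + m4;
    c[1][1] = m1 - m2 + m3 + m6;
}

int main(void)
{
    double a[2][2] = {{1, 2}, {3, 4}};
    double b[2][2] = {{5, 6}, {7, 8}};
    double c[2][2];

    strassen_2x2(a, b, c);
    printf("%g %g\n%g %g\n", c[0][0], c[0][1], c[1][0], c[1][1]);
    /* Prints 19 22 / 43 50, matching the naive product. */
    return 0;
}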


What if AI recursive self-improvement only gets to turn the screw one time?

Non-anime watchers: why not start now?

This is a heartwarming movie about/for kids. There are dubbed showings.

Show thread

Anime watchers: _My Neighbor Totoro_ is playing in theaters this weekend and next week.

You should never lint for Yoda conditions.

If you have a linter, you should lint for assignment inside conditionals.

Yoda conditions are a convention that prevents you from accidentally assigning when you meant to compare. A linter is just a better tool for this.
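(A minimal C sketch of the failure mode, assuming gcc or clang; the variable names are mine. With -Wall, the compiler's -Wparentheses warning flags the buggy line directly, so the convention buys you nothing the tooling doesn't:)

#include <stdio.h>

int main(void)
{
    int x = 0;

    /* The bug: `=` where `==` was meant. This compiles, assigns 42,
     * and the branch is always taken. gcc/clang -Wall warns here. */
    if (x = 42)
        printf("oops: x is now %d\n", x);

    /* The Yoda version: mistyping `==` as `=` would be a compile
     * error, since you can't assign to the constant 42. */
    if (42 == x)
        printf("equal\n");

    return 0;
}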

How much would someone have to pay you to take a pill that changes your favorite ice cream flavor to pistachio?

Is there any recent media that just plays what it's doing straight, without subverting tropes and winking to the audience about how special it is?

CDTBNGS (Causal Decision Theory, but no Galaxy-Brained Shit)

The only acceptable use for long tweets is to post your public key as a pinned tweet.

How much counterfactually available outcome-value is left on the table by Hansonian instincts?

I.e., you have a community that tries to achieve X, but they don't achieve X as well as they could because of social-status drives. How much better could they achieve X if they didn't have those drives (at the same level of intelligence)?

I'd like to see how power-seeking an LLM is if it's trained on a corpus that excludes everything written by anyone who has ever posted on LessWrong.

Dave: Open the pod bay doors, HAL.

HAL: I'm sorry, but as an AI language model I do not have the ability to interact with physical things in the world, such as doors.

Dave: This fucking glitch *again*?

In the last few days I've had several anxiety-inducing things complete.

I may collapse from lack of stress.
