Hm. I think the type of philosophy/math/cs needed for successful strawberry alignment is close enough to regular theorem-proving that AI systems that aren't seeds for worldcrunchers would still be very helpful.

(Doesn't feel to me like it touches the consequentialist core of cognition, a lot of philosophy is tree-traversal and finding inconsistent options, and math also feels like a MCTS-like thing)

Is the advantage we'd get from good alignment-theorist ML systems 1.5x, 10x, or 100x?

Telling my kidnappers about AI alignment until they gag me

Update: there's a bunch of women using the Replika thing.

I'd like to see the ratio

(95% confidence interval: [10%, 65%])

Man, I do have a lot more respect for Oliver Habryka after listening to this[1]. Highlights include naming the thing where high-status people eschew meritocracy because they can only lose, and the statement that in the medium-term future there might be 5-10 years about as crazy as 2020, or crazier.

[1]: thefilancabinet.com/episodes/2

Hm, I remember reading somewhere sometime a classification of ways that you can use unix programs in pipes:

Sources (<, cat, programs that just produce output), filters (removing data, such as grep), transformers (?) (such as sort, cut, awk), and sinks (>, programs that just consume input). Does anyone recall where I could've gotten that from?
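Whatever its origin, the classification is easy to illustrate: one member of each category composed into a single pipeline (file paths here are placeholders, not from the original):

```shell
# Set up a small input file (source data).
printf 'banana\napple\ncherry\n' > /tmp/fruit.txt

cat /tmp/fruit.txt \
  | grep -v banana \
  | sort \
  | tr 'a-z' 'A-Z' > /tmp/out.txt
# cat:  source      — produces the stream
# grep: filter      — removes non-matching lines
# sort: transformer — reorders the data
# tr:   transformer — rewrites the data
# >:    sink        — consumes the stream into a file

cat /tmp/out.txt
```

Running this prints the two surviving lines, uppercased and sorted: APPLE, then CHERRY.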

people on the timeline are wrong

I have just the right thing

@w
Heh, for me it's "Everyone I like is trans? A not-quite-child's guide to online discussion"

niplav boosted

Just learned set theory and I cannot contain myself.

*edit*
This post hit 500 boosts and 1k likes :D
Trans rights are human rights.
Bash the fash.

@Captain@octodon.social Wait until you find out people often gerrymander their definition of "powerful".

niplav boosted

If you rearrange the letters of POSTMEN, they become VERY ANGRY.

niplav boosted