Hm. I think the type of philosophy/math/cs needed for successful strawberry alignment is close enough to regular theorem-proving that AI systems that aren't seeds for worldcrunchers would still be very helpful.
(Doesn't feel to me like it touches the consequentialist core of cognition; a lot of philosophy is tree-traversal and finding inconsistent options, and math also feels like an MCTS-like thing.)
Is the advantage we'd get from good alignment-theorist ML systems 1.5x, 10x, or 100x?
@wolf480pl @pseudoriemann The EU really be RETVRN
Hey @niconiconi, did you write this? https://www.lesswrong.com/posts/mHqQxwKuzZS69CXX5/whole-brain-emulation-no-progress-on-c-elgans-after-10-years
It's great
Man, I do have a lot more respect for Oliver Habryka after listening to this[1]. Highlights include naming the thing where high-status people eschew meritocracy because they can only lose, and the statement that there might be 5-10 years in the medium-term future that are about as crazy as 2020, or crazier.
[1]: https://thefilancabinet.com/episodes/2023/02/05/6-oliver-habryka.html
Hm, I remember reading somewhere, sometime, a classification of the roles Unix programs can play in pipes:
Sources (<, cat, programs that just produce output), filters (programs that remove data, such as grep), transformers (?) (such as sort, cut, awk), and sinks (>, programs that just consume input). Anyone recollect where I could've gotten that from?
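A toy pipeline exercising all four roles (my own reconstruction to illustrate the taxonomy, assuming a hypothetical access.log as input; not from whatever the original source was):

# source (cat) -> filter (grep drops lines) -> transformers (sort, uniq -c reshape) -> sink (> file)
cat access.log | grep -v debug | sort | uniq -c > summary.txt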
@thatguyoverthere good times
Heh, for me it's "Everyone I like is trans?
A not-quite-child's guide to online discussion"
@chjara hey
I'll take synthetic training data for $500, Sam
I operate by Crocker's rules[1].