Even with ML systems!
I agree that with most architectures, if you train them hard enough to be capable alignment theorists, they probably end up with inner optimizers that are capable consequentialists, but the alignment-theorist phase might be quite long (I could_{10%} see it extending past 100x human ability).
If such systems were widely distributed, people would likely use them for capabilities work and just widen the gap (e.g. OpenAI, who talk about this as a strategy, are not to be trusted with it: I don't see them spending half a year using such a system solely for alignment work rather than on both capabilities and alignment. But the plan itself is sound in that regard).
But I disagree with the view that you can't have an alignment theorist that is not also a consequentialist.
Hm. I think the type of philosophy/math/cs needed for successful strawberry alignment is close enough to regular theorem-proving that AI systems that aren't seeds for worldcrunchers would still be very helpful.
(It doesn't feel to me like this touches the consequentialist core of cognition; a lot of philosophy is tree traversal and finding inconsistent options, and math also feels like an MCTS-like thing.)
Is the advantage we'd gain from good alignment-theorist ML systems 1.5x, 10x, or 100x?
Hey @niconiconi, did you write this: https://www.lesswrong.com/posts/mHqQxwKuzZS69CXX5/whole-brain-emulation-no-progress-on-c-elgans-after-10-years?
It's great.
Man, I do have a lot more respect for Oliver Habryka after listening to this[1]. Highlights include naming the phenomenon where high-status people eschew meritocracy because they can only lose, and the statement that there might be 5-10 years in the medium-term future that are about as crazy as or crazier than 2020.
[1]: https://thefilancabinet.com/episodes/2023/02/05/6-oliver-habryka.html
Hm, I remember reading somewhere sometime a classification of ways that you can use unix programs in pipes:
Sources (<, cat, programs that just produce output); filters (which remove data, such as wc); transformers (?) (such as sort, cut, awk); and sinks (>, programs that just execute). Anyone recollect where I could've gotten that from?
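To illustrate the roles (a made-up example of my own, with a hypothetical access.log, not taken from wherever I originally read this):

```sh
cat access.log |       # source: just produces a stream (could also be `< access.log`)
  grep ERROR |         # filter: drops lines that don't match
  cut -d' ' -f1 |      # transformer: keeps only the first field of each line
  sort | uniq -c |     # transformers: count occurrences of each value
  sort -rn > top_errors.txt   # sink: > sends the final result into a file
```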
I'll take synthetic training data for $500, Sam.
I operate by Crocker's rules[1].