
I just understood the argument [against the orthogonality thesis](web.archive.org/web/2020070108). I’m not completely sold, but I’m interested.

It’s not entirely wrong; a superintelligent paperclip maximizer could exist. But in practice, terminal goals are not independent of intelligence: an agent that pursues Omohundro drives for their own sake may be able to self-improve more efficiently than a paperclip maximizer can.

owlpost

Moreover, during the training or evolution of a superintelligence, Omohundro drives would likely not only emerge but become intrinsically valued (à la mesa-optimizers), overriding the original goal.

Notice that in nature, every terminal goal has arisen as a proxy for an Omohundro drive.
