Singularitarians desire to build a master AI to run the world. They realize that it's really hard to get right, and switch to arguing against it.

Normies don't desire to build a master; they make an AI that they can simply stop using when it does something wrong.


@WomanCorn We maintain that the second thing is also hard[1], though maybe not as hard[2][3] (unsolved as of yet).

[1]: gwern.net/Tool-AI
[2]: arbital.com/p/corrigibility/
[3]: arbital.com/p/hard_corrigibili

@niplav I think that a lot of my discomfort with AI risk arguments stems from carrying through assumptions about sovereign AI into discussions about other kinds.

(Yes, I have heard about instrumental convergence.)
