Singularitarians desire to build a master AI to run the world. They realize that it's really hard to get right, and switch to arguing against it.
Normies don't desire to build a master; they make an AI that they can simply stop using when it does something wrong.
@niplav We maintain that the second thing is also hard[1], though maybe not as hard[2][3] (as yet unsolved).
[1]: https://www.gwern.net/Tool-AI
[2]: https://arbital.com/p/corrigibility/
[3]: https://arbital.com/p/hard_corrigibility/
@niplav I think that a lot of my discomfort with AI risk arguments stems from carrying through assumptions about sovereign AI into discussions about other kinds.
(Yes, I have heard about instrumental convergence.)