Singularitarians want to build a master AI to run the world. Then they realize it's really hard to get right, and switch to arguing against building one.
Normies don't want to build a master; they make an AI that they can just stop using when it does something wrong.
The problem is that we already have a master machine to run the world (liberal democracy) and it's doing a shit job.
@WomanCorn and the world's best dads want to build three that vote against each other and carry different aspects of their creator.
@WomanCorn
We maintain that the second thing is also hard[1], though maybe not as hard[2][3] (still unsolved as of yet).
[1]: https://www.gwern.net/Tool-AI
[2]: https://arbital.com/p/corrigibility/
[3]: https://arbital.com/p/hard_corrigibility/
@niplav I think a lot of my discomfort with AI risk arguments stems from assumptions about sovereign AI being carried over into discussions about other kinds.
(Yes, I have heard about instrumental convergence.)
Basically, the LessWrong arguments are not always careful to distinguish between Seed AIs, Superintelligent AIs, Sovereign AIs, Perfect Bayesian Reasoners, Optimizers, Optimized AIs, and probably several other kinds.
I expect the arguments are full of type errors.