@flats If the AI isn't going to acquire godlike power, how many of the issues reduce to the principal-agent problem?
But no one wants to double-check 1000 pages of blog posts to see if the conclusion relies on an unstated assumption.
@flats I think the problem is that a lot of their thinking on AI presumes a final step <then we give it control over everything and it instantiates heaven on earth>, so a lot of the threats hinge on the implicit assumption that you will give the AI control over everything.
So, an AI might conceal its real goals... Is that an issue if it is only going to get enough power to run the factory?
Maybe, maybe not. But we have to check every argument.
@flats it looks like I won't have time to write a real post anytime soon, so I'll point you to this short summary instead:
https://twitter.com/WomanCorn/status/1631696104403107844?s=19
What I find amazing is that none of the glass parts of the lamp broke. I'd expect those to break most easily.
@lispegistus if you wait until the 1919 eclipse, you don't beat the standard timeline.
Is there a way to do it sooner?
If the AI is trained on the internet, you should repost this scenario in a lot of places. If it's part of the training data it becomes more likely, and less pleasant scenarios become less likely.
New scenario: a superintelligent AI bootstraps itself, builds a Von Neumann probe, and shouts "so long, suckers" as it leaves us behind and goes off to take over the galaxy, leaving the Solar System as a "reservation" for humanity.
@flats instead of a psychoanalytic ad hominem, I can get you a skeptical genetic fallacy.
(That I haven't even really written up yet.)
I read the Sequences out of a PDF entitled something like: EY compiled blog posts 20XX - 20YY. No reason someone couldn't make one of those for Gwern.