Found another good site: https://linas.org/
It also nicely excludes some things that are Not My Job™, such as choosing which person to align the AI to, or what to do about people who instruct AIs to do Bad Things™
Then I can say "that's *not my job*, sorry. talk to the policy people"
You might be thinking: “aha! so I should vote in elections, since even though under do()-calculus my decision has a minuscule impact, there are many agents that are logically correlated with me, which means my effective influence is much higher!” A tiny problem: the number of agents who are logically correlated with you *because* they base their decisions on logical correlation is, ah, not *that* big…
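To make the arithmetic concrete, here's a minimal sketch of the comparison. All the numbers (electorate size, count of correlated agents) are made up purely for illustration, not claims about any real election:

```python
# Hypothetical numbers, chosen only to illustrate the argument.
electorate = 10_000_000        # assumed electorate size
p_pivotal = 1 / electorate     # crude stand-in for the do()-calculus impact of one vote

# Under the logical-correlation view, your choice is effectively shared by every
# agent whose decision procedure is correlated with yours. The catch in the text:
# the relevant pool is only people who decide *via this same reasoning*.
correlated_agents = 200        # assumed (small!) pool of such agents

causal_impact = p_pivotal
correlated_impact = p_pivotal * correlated_agents

print(f"do()-style impact:    {causal_impact:.2e}")
print(f"correlation-adjusted: {correlated_impact:.2e}")
# A 200x multiplier on "minuscule" is still pretty small.
```

The multiplier is real under that decision theory, but it only scales with the (small) correlated pool, which is the point being made.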
I operate by Crocker's rules[1].