This framing also nicely excludes some things that are Not My Job™, such as choosing whom to align the AI to, or what to do about people who instruct AIs to do Bad Things™.
Then I can say: "That's *not my job*, sorry. Talk to the policy people."
You might be thinking: "Aha! So I should vote in elections, since even though under do()-calculus my decision has a minuscule impact, there are many agents that are logically correlated with me, which means my influence is much higher!" A tiny problem: the number of agents that are logically correlated with you *because they base their decisions on logical correlation* is, ah, not *that* big…
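To put toy numbers on this (every figure below is made up for illustration, and treating the bloc's leverage as k times a single vote is a crude linear approximation that only holds when k is small relative to the electorate):

```python
# Toy back-of-the-envelope sketch; all numbers are hypothetical.
# Under pure do()-calculus, your vote matters only in the rare case
# where it is decisive. Treating the k logically correlated voters
# as a single bloc multiplies that leverage by roughly k.

n_voters = 10_000_000              # hypothetical electorate size
p_decisive_single = 1 / n_voters   # crude stand-in for P(one vote is decisive)

k_correlated = 2_000               # hypothetical number of voters correlated
                                   # with you *via this very argument*

# Linear approximation: the bloc is ~k times as likely to be decisive.
p_decisive_bloc = k_correlated * p_decisive_single

print(f"single vote:     {p_decisive_single:.2e}")  # 1.00e-07
print(f"correlated bloc: {p_decisive_bloc:.2e}")    # 2.00e-04
```

Multiplying a minuscule number by a not-that-big k still leaves you with a small number.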
"Rhea Outwitting Saturn."
Edmé Bouchardon, CC0, via Wikimedia Commons. Color edits.
Apparently GPT-4 can't yet solve Bongard problems: https://twitter.com/programaths/status/1622324503182249985
I operate by Crocker's rules[1].