Great post. I found the analogy to the AI alignment problem especially convincing, and I appreciated that it gives some credence to common-sense views. I discovered it in this thread, via @willmacaskill@twitter.com:
@foolip This is good. "EA is about maximizing a property of the world that we’re conceptually confused about, can’t reliably define or measure, and have massive disagreements about even within EA" is a very concise statement of the difficulty.