"EA is about maximization, and maximization is perilous" by Holden Karnofsky

forum.effectivealtruism.org/po

Great post; I especially found the analogy to the AI alignment problem convincing, along with the case for giving some credence to common-sense views. Discovered in this thread by @willmacaskill@twitter.com:

twitter.com/willmacaskill/stat

Will has been silent since; I wonder if he’s okay.


@foolip This is good. "EA is about maximizing a property of the world that we’re conceptually confused about, can’t reliably define or measure, and have massive disagreements about even within EA" is a very concise statement of the difficulty.
