reform AI safety people you might not have -time- to go through Yudkowsky’s entire character arc yourself just skip to the end #nomoa

meh my main objection is to “trying to get a broad swath of the public on board with one’s preferred AI policy is something close to a deontological imperative” which like NO, that is a FACTUAL QUESTION about WHETHER IT WORKS, you are trying to lock in a POSSIBLY NON-WORKING STRATEGY, and if that is the case you will EXPEND VERY PRECIOUS RESOURCES. I don’t think it’s a deontological imperative to ignore public outreach! Extend me the same courtesy!

@TetraspaceGrouping That line of reasoning doesn't look too faulty. AI will likely have enormous influence on the world, so wanting an AI policy that a broad swath of the public endorses matters quite a bit, so that people's views aren't just ignored.

I do agree preventing everyone from dying is more important than everyone having a say in how they're not gonna get mass murdered.

@SelonNerias True, for the bigger True Alignment Problem of 🐛Handing Over the Lightcone* I am much more in favour of spending more on everyone knowing what’s happening, and I think once the dangerous time we’re currently in is over we’ll have more slack to achieve that.

*🐛 for uncertain phrasing, because this sounds like literally building one big The AGI and letting it rip, but people might not do that


@TetraspaceGrouping Hopefully we'll still have enough agency after the AI takeover to tweak the system to take into account people (and animals) who are ignored today.
