@Stoori looks like a heart valve
@cosmiccitizen oh my god I knew it
@duponin 🥴
Nah, it's valid: making precise predictions is super hard if you don't wanna get caught up in technicalities.
I continue to claim that while you mightn't admit it in 1½ years, you'll have been surprised by the rate of progress ;-)
@jarbus I would. Any concrete things you might want to bet on? E.g. a thing you predict no AI system will be able to do by the end of 2024.
re: anti-utilitarian screed
@genmaicha This was not meant to convince you of utilitarianism :-D
I think most things I like about utilitarianism are not very convincing to someone who is very religious.
My preferred framing removes the maximization aspect (since maximization is perilous) and tries to replace it with something like quantilization[1]. The part I like about utilitarianism is the focus on valence.
[1]: https://intelligence.org/files/QuantilizersSaferAlternative.pdf
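(Not from the thread, just a sketch to make the pointer concrete: with a uniform base distribution over a finite action set, a q-quantilizer in the sense of [1] ranks candidate actions by utility and samples uniformly from the top q-fraction instead of taking the argmax. The action set, utility function, and q below are illustrative placeholders.)

```python
import random

def quantilize(actions, utility, q=0.1, rng=random):
    """Toy q-quantilizer with a uniform base distribution.

    Instead of returning the argmax of `utility` over `actions`
    (maximization), rank the candidates and sample uniformly from the
    best q-fraction. All arguments are illustrative placeholders.
    """
    ranked = sorted(actions, key=utility, reverse=True)
    cutoff = max(1, int(len(ranked) * q))  # keep at least one candidate
    return rng.choice(ranked[:cutoff])

# A maximizer would always pick 9; the quantilizer merely picks
# something from the top decile.
print(quantilize(range(100), utility=lambda a: -(a - 9) ** 2, q=0.1))
```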
anti-utilitarian screed
@genmaicha ecosystems or whatnot.
In that I agree with you that utilitarianism is profoundly unhuman—but it's not cream-colored liberalism or softened elite forces of blandness; it's alien and horrifying and cursed. Failure? Sure: we play the St. Petersburg game[1].
But great men? Heterodoxy? No. Merely cogs. Cells interlinked. And suffering: however defined, that's going down (in expectation).
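(Side note from me, not in the original post: the St. Petersburg game pays $2^n$ if the first heads lands on toss $n$, which happens with probability $2^{-n}$, so the expected payoff is $\sum_{n \ge 1} 2^{-n} \cdot 2^n = \sum_{n \ge 1} 1 = \infty$; a pure expected-value maximizer will therefore pay any finite price to play.)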
anti-utilitarian screed
@genmaicha
a black hole that is swallowing your universe.
One might object, though, that this is all uncomputable and infeasible (AIXI-tl notwithstanding). But even self-described real-world utilitarians are kind of crisp: you might not like their conclusions and strategies, but I bet that you also can't *predict* their strategies and conclusions, like wanting to improve the lives of shrimp, or caring about the subjective experiences of black holes, or thinking about destroying
anti-utilitarian screed
@genmaicha …e.g. with [2] (well, at least a hypercomputer). That requires you to define the function you optimize against, and I agree that this one is a huge crux (define happiness, please… but let's not get into any hard problems here. Maybe a bit of integrated information or symmetry[3]?). And given that function + Solomonoff induction + Cartesian argmax, you're ready to go and… oh no. oh no. looks like you got a cancer
[2]: https://en.wikipedia.org/wiki/AIXI
[3]: https://opentheory.net/2021/07/a-primer-on-the-symmetry-theory-of-valence
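(My gloss, not from the original post: "function + Solomonoff induction + Cartesian argmax" cashes out to roughly the AIXI expectimax from [2], with $U$ a universal monotone Turing machine, $\ell(q)$ the length of program $q$, the rewards $r_i$ coming from the valence function you picked above, and $m$ the horizon:)

$$a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \big[ r_k + \cdots + r_m \big] \sum_{q \,:\, U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}$$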
anti-utilitarian screed
@genmaicha
*midnight poasting mode: activate*
I am not a utilitarian, though I'm closer to it than ~all people; my current best guess is that it's something to steer towards but ultimately around. (I've found [1] especially enlightening in this context.)
Strongly disagree on "utilitarianism is squishy and under-defined"—seems to me to be the most crisply defined ethical system, so clear that you can put it into a computer…
I operate by Crocker's rules[1].