@genmaicha
Knowing the background of this, I'd bet at very generous odds that these people ideally would want insect factory farming *not* to happen.
It's more a "if you're gonna eat the bugs let them at least not suffer maximally" type of nonprofit
anti-utilitarian screed
@niplav I agree with the idea that eating bugs is technically not their goal, because it looks like these people are some species of utilitarian. The problem with utilitarianism, though, is that its idea of happiness is completely hypothetical; it can be a smokescreen for introducing all sorts of policies that probably aren’t the most optimific, but are simply more convenient for stakeholders and/or the government: hence why utilitarians often do argue for things like eating insects or socialism. There are an infinite number of ways to “optimize happiness”, and the choice of a particular one is determined by elite interests–so it’s disingenuous for utilitarians to claim that their proposed solutions are in any way objective or empirical.
Also, subordinating the needs of the individual to an imaginary, impossible metric of “net happiness” is insect-like in the sense that it tends to oppose the very extremes that are an important part of the human experience: failure, suffering, great men of history, heterodoxy, etc.
“Don’t want to eat the bugs? Well, everyone else does, and it’s the best available way of increasing net happiness–I have scientific studies to back me up. So enjoy your cockroach milk.”
anti-utilitarian screed
@genmaicha
*midnight poasting mode: activate*
I am not a utilitarian, though I'm closer to it than ~all people; my current best guess is that it's something to steer towards but ultimately around. (I've found [1] especially enlightening in this context.)
Strongly disagree on "utilitarianism is squishy and under-defined"—seems to me to be the most crisply defined ethical system, so clear that you can put it into a computer…
anti-utilitarian screed
@genmaicha …e.g. with [2] (well, at least a hypercomputer). That requires you to define the function you optimize against, and I agree that this one is a huge crux (define "happiness", please… but let's not get into any hard problems here. maybe a bit of integrated information or symmetry[3]?). And given that function + Solomonoff induction + Cartesian argmax you're ready to go and… oh no. oh no. looks like you got a cancer
[2]: https://en.wikipedia.org/wiki/AIXI
[3]: https://opentheory.net/2021/07/a-primer-on-the-symmetry-theory-of-valence
anti-utilitarian screed
@genmaicha
a black hole that is swallowing your universe.
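To make "put it into a computer" concrete, here's a toy sketch in my own notation (not AIXI verbatim, since AIXI reads rewards off the percept stream rather than taking a supplied utility function): fix a utility/valence function U over histories, take the Solomonoff mixture ξ over computable environments as the world-model, and pick

$$\pi^* \;=\; \operatorname*{arg\,max}_{\pi}\; \mathbb{E}_{\xi}\!\left[\sum_{t=1}^{m} U(h_t) \,\middle|\, \pi\right]$$

i.e. expected U under the universal prior, maximized over all policies out to some horizon m. Every symbol except U is pinned down, which is what I mean by "crisp".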
But one might object that this is all uncomputable and infeasible (AIXI-tl notwithstanding). But even self-described real-world utilitarians are kind of crisp: you might not like their conclusions and strategies, but I bet that you also can't *predict* their strategies and conclusions, like wanting to improve the lives of shrimp, or caring about the subjective experiences of black holes, or thinking about destroying
anti-utilitarian screed
@genmaicha ecosystems or whatnot.
In that, I agree with you: utilitarianism is profoundly unhuman. But it's not cream-colored liberalism or softened elite forces of blandness; it's alien and horrifying and cursed. Failure? Sure. We play the St. Petersburg game[1].
But great men? Heterodoxy? No. Merely cogs. Cells interlinked. And suffering: however defined, that's going down (in expectation).
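(The St. Petersburg game, for anyone who hasn't met it: flip a fair coin until it comes up heads; if that takes n flips, the payout is 2^n. The expected payout is

$$\sum_{n=1}^{\infty} 2^{-n}\cdot 2^{n} \;=\; \sum_{n=1}^{\infty} 1 \;=\; \infty,$$

so a pure expected-value maximizer pays any finite price to play, and then almost always walks away with very little. That's the sense in which failure is on the table.)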
re: anti-utilitarian screed
@genmaicha This was not meant to convince you of utilitarianism :-D
I think most things I like about utilitarianism are not very convincing to someone who is very religious.
My preferred framing removes the maximization aspect (since maximization is perilous) and tries to replace it with something like quantilization[1]. The part I like about utilitarianism is the focus on valence.
[1]: https://intelligence.org/files/QuantilizersSaferAlternative.pdf
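If quantilization is unfamiliar, here's a toy sketch (my own simplification; the function names and structure are mine, not the paper's): instead of taking the argmax over actions, you sample from a trusted base distribution conditioned on the utility landing in its top q fraction.

```python
import random

def quantilize(actions, base_weight, utility, q=0.1, rng=random):
    """Toy q-quantilizer: sample from the base distribution restricted to
    the actions whose utility falls in the top q fraction of base mass."""
    # Rank actions from best to worst according to the utility function.
    ranked = sorted(actions, key=utility, reverse=True)
    # Walk down the ranking until q of the base probability mass is collected.
    total = sum(base_weight(a) for a in actions)
    kept, mass = [], 0.0
    for a in ranked:
        kept.append(a)
        mass += base_weight(a) / total
        if mass >= q:
            break
    # Sample from the kept actions in proportion to the base distribution.
    return rng.choices(kept, weights=[base_weight(a) for a in kept], k=1)[0]
```

A maximizer would always return ranked[0]; the quantilizer's selling point (argued in [1]) is that its expected cost, under any cost function, is at most 1/q times the expected cost of just sampling from the base distribution.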