For a normal-form game G and a player i, can removing actions from player i yield a better Nash equilibrium *for i*?

Has this been investigated?
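(Partial answer via toy example, numbers mine: yes, sometimes. Deleting one of your own actions can act as commitment and flip the opponent's best response, the Schelling burning-bridges intuition. A brute-force pure-equilibrium check:)

```python
import numpy as np
from itertools import product

def pure_nash_equilibria(A, B):
    """All pure-strategy Nash equilibria of a bimatrix game,
    where A[i, j] / B[i, j] are the row / column player's payoffs."""
    eqs = []
    for i, j in product(range(A.shape[0]), range(A.shape[1])):
        row_best = A[i, j] >= A[:, j].max()  # no profitable row deviation
        col_best = B[i, j] >= B[i, :].max()  # no profitable column deviation
        if row_best and col_best:
            eqs.append((i, j))
    return eqs

# Toy game (payoffs mine): row 1 strictly dominates row 0 for the
# row player, so the unique equilibrium is (1, 1) with row payoff 1.
A = np.array([[2, 0],
              [3, 1]])  # row player's payoffs
B = np.array([[2, 1],
              [0, 1]])  # column player's payoffs
print(pure_nash_equilibria(A, B))          # [(1, 1)], payoffs (1, 1)

# Remove the dominant row: the column player now best-responds with
# column 0, and the row player's equilibrium payoff rises from 1 to 2.
print(pure_nash_equilibria(A[:1], B[:1]))  # [(0, 0)], payoffs (2, 2)
```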

Have you ever completely read a thing I've written on my site?

niplav boosted

Mistake I made: not focusing on/learning about interpretability. Seems so robustly good.

@jarbus

Nah, it's valid; making precise predictions is super hard if you don't wanna get caught up in technicalities.

I continue to claim that, while you might not admit it in 1½ years, you'll have been surprised by the rate of progress ;-)

Evolutionary psychology predicts that people try to trick others into procreating with their close relatives much more often than they actually do?

@jarbus I would. Any concrete things you might want to bet on? E.g. a thing you predict no AI system will be able to do by the end of 2024.

"Forecasters didn't predict the pandemic or that financial crash"

They didn't have to. They would've predicted mostly base rates, which in both cases are sufficient information to prescribe drastic actions not taken by anyone.

Trembling hand equilibrium is a good concept.

The Nash equilibrium of "everyone has guns" is "everyone is nice"; the trembling hand equilibrium of it is "everyone is usually nice, but sometimes people shoot each other".
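(Minimal sketch of the distinction, toy game and ε mine: a Nash equilibrium in weakly dominated strategies survives exact best-response checks but collapses once the opponent misplays with small probability ε:)

```python
import numpy as np

# Toy symmetric game with two pure Nash equilibria, (U, L) and (D, R):
#          L       R
#   U   (1, 1)  (0, 0)
#   D   (0, 0)  (0, 0)
A = np.array([[1, 0],
              [0, 0]])  # row player's payoffs

def row_payoffs(col_mixed):
    """Row player's expected payoff for each pure action (U, D)
    against a possibly-trembling column mixed strategy (P(L), P(R))."""
    return A @ col_mixed

eps = 0.01  # tremble: the column player misplays with probability eps

# Against an exact R player, U and D tie, so (D, R) is a Nash equilibrium...
print(row_payoffs(np.array([0.0, 1.0])))        # [0. 0.]
# ...but against "R with a tremble", U is strictly better: (D, R) is not
# trembling hand perfect, while (U, L) survives the same perturbation.
print(row_payoffs(np.array([eps, 1.0 - eps])))  # [0.01 0.  ]
```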

re: anti-utilitarian screed 

@genmaicha This was not meant to convince you of utilitarianism :-D

I think most things I like about utilitarianism are not very convincing to someone who is very religious.

My preferred framing removes the maximization aspect (since maximization is perilous) and tries to replace it with something like quantilization[1]. The part I like about utilitarianism is the focus on valence.

[1]: intelligence.org/files/Quantil
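(Sketch of what I mean, as I read the linked paper: a q-quantilizer samples from the top q-fraction of a base distribution over actions instead of argmaxing. The base distribution, utility, and q below are stand-ins of mine:)

```python
import random

def quantilize(base_sample, utility, q=0.1, n=10_000):
    """Empirical q-quantilizer: draw n actions from the base distribution,
    keep the top q-fraction by utility, and return one uniformly at random,
    rather than returning the argmax over all actions."""
    samples = sorted((base_sample() for _ in range(n)),
                     key=utility, reverse=True)
    return random.choice(samples[: max(1, int(q * n))])

# Stand-in example: "actions" are numbers in [0, 1] and utility is the
# action itself. A 0.1-quantilizer lands somewhere in the top decile
# instead of slamming into the maximum the way a maximizer would.
print(quantilize(base_sample=random.random, utility=lambda a: a, q=0.1))
```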

anti-utilitarian screed 

@genmaicha ecosystems or whatnot.

in that I agree with you that utilitarianism is profoundly unhuman: it's not cream-colored liberalism or softened elite forces of blandness, but alien and horrifying and cursed. Failure? Sure. We play the St. Petersburg game[4].

But great men? Heterodoxy? No. Merely cogs. Cells interlinked. And suffering: however defined, that's going down (in expectation).

[4]: en.wikipedia.org/wiki/St._Pete

anti-utilitarian screed 

@genmaicha
a black hole that is swallowing your universe.

One might object that this is all uncomputable and infeasible (AIXI-tl notwithstanding). But even self-described real-world utilitarians are kind of crisp: you might not like their conclusions and strategies, but I bet that you also can't *predict* their strategies and conclusions, like wanting to improve the lives of shrimp or caring about the subjective experiences of black holes or thinking about destroying

anti-utilitarian screed 

@genmaicha …e.g. with [2] (well, at least a hypercomputer). That requires you to define the function you optimize against, and I agree that this one is a huge crux (define "happiness", please… but let's not get into any hard problems here. maybe a bit of integrated information or symmetry[3]?) And given that function + Solomonoff induction + Cartesian argmax, you're ready to go and… oh no. oh no. looks like you got a cancer

[2]: en.wikipedia.org/wiki/AIXI
[3]: opentheory.net/2021/07/a-prime

anti-utilitarian screed 

@genmaicha
*midnight poasting mode: activate*

I am not a utilitarian, though closer to it than ~all people; my current best guess is that it's something to steer towards but ultimately around. (I've found [1] especially enlightening in this context).
Strongly disagree on "utilitarianism is squishy and under-defined": it seems to me the most crisply defined ethical system, so clear that you can put it into a computer…

[1]: youtube.com

This all under continuous & kinda slow & multipolar takeoff.
