the problem is effectively designed to capture intuitive reasoning, which typically fails to arrive at the presumably "correct" answer of one-boxing; it attempts to demonstrate the utility of formal reasoning/decision theory in certain situations
(stole this variant from a🔒🐦)
@pee_zombie
in the real world, if I were in that hypothetical situation, I would have more information at my disposal than the setup lets on. There are unknowns here that can't be resolved and that significantly impact my answer.
Never predicted incorrectly in the past? I would need to know how reliable the information I've heard is. Could I be mistaken about the predictor's reliability, and is that simply an assumption?
Once you start putting Bayesian probabilities on information, it gets messy.
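The messiness is easy to see with a quick expected-value sketch. This assumes the standard Newcomb payoffs ($1,000 in the transparent box, $1,000,000 in the opaque one) and treats the Predictor's reliability as a probability you assign rather than a certainty; none of these numbers come from the thread itself.

```python
# Expected payoff of each strategy as a function of your credence that
# the Predictor correctly anticipates your choice. Payoffs are the
# standard Newcomb amounts, assumed here for illustration.

def expected_value(p_reliable: float) -> tuple[float, float]:
    small, big = 1_000, 1_000_000
    one_box = p_reliable * big                 # opaque box is filled iff prediction was "one-box"
    two_box = small + (1 - p_reliable) * big   # big payoff only if the Predictor erred
    return one_box, two_box

# Break-even reliability: one-boxing wins once p > (small + big) / (2 * big),
# i.e. just over 50.05% -- even a mediocre Predictor tips the decision.
one, two = expected_value(0.9)
print(one, two)
```

So the answer flips depending on a single unstated parameter, which is exactly the "it gets messy" point: the word problem only has a clean answer if you smuggle in a credence of exactly 1.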
@pee_zombie It's fascinating how quickly these thought experiments become a Keynesian p-beauty-contest-style situation with strong undercurrents of Schelling point theory.
Decomposing the problem, it seems there is no correct answer; it devolves into the 'unexpected hanging paradox'.
I managed to reason thru that paradox to Gödel's incompleteness theorems at one point, so I suspect this isn't actually solvable.
@pee_zombie These problems 'work' because they ignore fundamental aspects of reality around the speed-accuracy trade-off. If you drop the physicist-philosopher's "frictionless vacuum" effect and throw entropy and information theory back into the equation, it stops being as much of a challenge to reason thru, and the idea that it's a trick question becomes more obvious.
https://twitter.com/ultimape/status/1045575215210729472
Convincing people to use one reasoning style over another is politics.
@pee_zombie The problem I see happening is that these thought experiments never state their assumptions. They are used as koans to change the way people think. At least in a physicist's frictionless vacuum, the assumptions about reality are made explicit.
If you don't take air friction into account, your models will give you the wrong answer. The same seems to matter across styles of reasoning, but we don't make that facet apparent (and so when applied in the real world, they give wrong answers).
@pee_zombie The marshmallow test is the same.
We neglect that children are playing an iterated game, not a one-off. Ignoring the 'trust in authorities not to lie to you' factor makes the contrived marshmallow test fit our hypothetical frictionless vacuum.
So of course we get weird answers when we try to extrapolate from it.
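The trust point can be made concrete with the same expected-value move. This is a sketch under assumed terms: one marshmallow now versus a promised two later, with the child's credence that the adult keeps the promise as the hidden variable the test ignores.

```python
# Whether waiting in the marshmallow test is rational depends on the
# child's estimate that the adult honors the promise. Payoffs (1 now,
# 2 later) follow the test's setup; the trust probability is assumed.

def should_wait(p_adult_honest: float) -> bool:
    eat_now = 1.0                    # one marshmallow, guaranteed
    wait = p_adult_honest * 2.0      # two marshmallows, only if the promise is kept
    return wait > eat_now

print(should_wait(0.9))  # True  -- a trusting child waits
print(should_wait(0.4))  # False -- a child burned before rationally eats now
```

On this reading, "low self-control" and "rational distrust of unreliable adults" produce identical behavior, which is why extrapolating from the lab result alone goes wrong.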
This is Feynman's Cargo Cult science ideas in a nutshell.
Convincing someone that you are in a prisoner's dilemma when you are actually in a public goods game is meta-strats.
@pee_zombie That is to say, these thought experiments don't exist in a vacuum either. I can't even begin to rationalize the word problem unless I also bring in the context of the person creating it. The meaning changes dramatically depending on the form of those unstated assumptions.
@pee_zombie This way of thinking (adding speed/accuracy trade-offs back into reasoning methods) is an antidote for nerd-sniping games.
The alternate way to frame these problems is as IQ-signaling and play-bows among an ingroup.
🐶
imo the correct path of reasoning here is revealed by the problem statement: the Predictor has never been wrong. how could this be the case? the answer is telling: this could only be the case if they could simulate you perfectly, i.e., use the exact same reasoning as you will