compared to typical probability problems where you already have all the info available to you, this one involves updating probabilities as new info arrives, ie the domain of Bayesian analysis. it is easy to get confused by a naive interpretation of the setup
the Monty Hall problem functions as a sort of Bayesian "mu", an epistemic shock, attempting to knock you out of intuitive complacency, ideally motivating you to re-evaluate your reasoning in other domains; if you got this one wrong, what else might you be wrong about?
---
RT @pee_zombie
it feels bad, being hit with a {mu, categorical error, not-even-wrong}, but digging into that discomfort is the path to a small enlightenment, a realization that yo…
https://twitter.com/pee_zombie/status/1328950758755151879
in this way, it's comparable to Newcomb's paradox, which functions similarly, but for decision theory instead. when trying to understand a system, you learn the most from studying the ways it fails, its boundary conditions; your mind is no exception here.
---
RT @pee_zombie
the problem is effectively designed to capture intuitive reasoning, which typically fails to arrive at the presumably "correct" answer of 1box; it attempts to demon…
https://twitter.com/pee_zombie/status/1391406993211015171
not gonna pretend to truly understand Bayes' theorem, but the relevant idea is that to get the probabilities right, you need to correctly "update your priors", ie your a priori estimates of each door's contents, in response to the door(s) being opened; we're often bad at this
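if you don't trust the update, you can just brute-force it; a minimal simulation sketch (door labels and trial count are arbitrary choices, not part of the original problem statement) showing that switching wins roughly 2/3 of the time while staying wins roughly 1/3:

```python
import random

def monty_hall_trial(switch: bool) -> bool:
    """Play one round; return True if the player ends up with the car."""
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    # the host always opens a door that is neither the player's pick nor the car
    opened = random.choice([d for d in doors if d != pick and d != car])
    if switch:
        # switch to the single remaining unopened door
        pick = next(d for d in doors if d not in (pick, opened))
    return pick == car

n = 100_000
stay_rate = sum(monty_hall_trial(False) for _ in range(n)) / n
switch_rate = sum(monty_hall_trial(True) for _ in range(n)) / n
print(f"stay: {stay_rate:.3f}, switch: {switch_rate:.3f}")
```

the key detail the intuition misses is encoded in the `opened` line: the host's choice is constrained by where the car is, so the opened door carries information, and the posterior on the other unopened door rises to 2/3.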