But if something outrageous _actually does_ happen, you might not believe it because it doesn't fit your model.
You think you are justifiably ignoring bogus rumor. Everyone else thinks you are shutting your eyes to reality, or just crazy.
@WomanCorn I suppose many people can be said to use model-based reasoning. Some models are just better than others.
This is why it's hard to convince conspiracy theorists: their model rejects inputs that contradict the model.
@WomanCorn Unfortunately AIUI the only fully robust way to handle this is Bayes on a Solomonoff prior, which is so uncomputable it's not even funny.
But I think you can get somewhere by not fixating on a _single_ model; instead, have an ensemble of models, one of which is the maxentropic "Something I have not thought of" model, and weighting them according to their predictive performance over time. (Make sure to *actually* predict and not just retrodict.)
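The ensemble idea above can be sketched as straightforward Bayesian reweighting: each model assigns probabilities to observations, and after each observation you multiply each model's weight by the probability it gave to what actually happened. The model names and probability tables here are made up for illustration, with a uniform distribution standing in for the maxentropic "something I have not thought of" model.

```python
# Sketch: weight an ensemble of models (plus a maxentropic catch-all)
# by predictive performance, via Bayes' rule. All models and numbers
# here are hypothetical.

def normalize(weights):
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

# Each model assigns a probability to each possible observation.
OUTCOMES = ["V", "W", "X", "Y", "Z"]
models = {
    "A": {"V": 0.3, "W": 0.3, "X": 0.3, "Y": 0.05, "Z": 0.05},
    "B": {"V": 0.05, "W": 0.05, "X": 0.3, "Y": 0.3, "Z": 0.3},
    # Catch-all: uniform over every outcome we can name.
    "something-else": {o: 1 / len(OUTCOMES) for o in OUTCOMES},
}

weights = normalize({name: 1.0 for name in models})  # uniform prior

def update(weights, observation):
    """Bayes' rule: scale each model's weight by the probability it
    assigned to the actual observation, then renormalize."""
    posterior = {name: w * models[name][observation]
                 for name, w in weights.items()}
    return normalize(posterior)

weights = update(weights, "W")  # W is much likelier under model A
```

Note this only works if the weights are updated on genuine predictions made before the data came in; updating on retrodictions lets every model claim credit after the fact.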
@soundnfury using multiple models does sound like a good option to dodge some of the risk.
The other bit that I _think_ helps is running the model against other data.
X happens. Model A predicts VWX. Model B predicts XYZ. If we see W, predict V. If we see Y, predict Z.
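A toy version of that step, with the same hypothetical models: each model predicts a set of events, X alone is consistent with both, and the discriminating observation tells you which model's remaining predictions to trust.

```python
# Sketch of running competing models against further data. The models
# and their predicted event sets are hypothetical.
models = {
    "A": {"V", "W", "X"},
    "B": {"X", "Y", "Z"},
}

def predict_rest(observed):
    """Return the not-yet-seen predictions of every model that is
    consistent with everything observed so far."""
    consistent = {name: preds for name, preds in models.items()
                  if observed <= preds}  # observed is a subset of preds
    return {name: preds - observed for name, preds in consistent.items()}

# X alone doesn't discriminate: both models survive.
both = predict_rest({"X"})
# Seeing W as well rules out model B; surviving model A still predicts V.
only_a = predict_rest({"X", "W"})
```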
This tends to make people very mad if they only have one model though.
So you need a metamodel to decide when to throw out your model as corrupt. But this is an infinite regress.
(I have no solution to this problem.)