The nice thing about reasoning from a model is that when something outrageous starts trending, you can ask if it makes sense under the model and reject it if it doesn't.

And most of the time you'll be right, because most of the time trending rumors are bogus.

But if something outrageous _actually does_ happen, you might not believe it because it doesn't fit your model.

You think you are justifiably ignoring a bogus rumor. Everyone else thinks you are shutting your eyes to reality, or just crazy.

@WomanCorn Unfortunately AIUI the only fully robust way to handle this is Bayes on a Solomonoff prior, which is so uncomputable it's not even funny.
But I think you can get somewhere by not fixating on a _single_ model; instead, have an ensemble of models, one of which is the maxentropic "Something I have not thought of" model, and weighting them according to their predictive performance over time. (Make sure to *actually* predict and not just retrodict.)
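The ensemble-weighting idea above can be sketched in a few lines. This is a hypothetical illustration, not anyone's actual method: the model names, outcome labels, and probabilities are made up, with a uniform "maxent" catch-all standing in for "something I have not thought of," and weights updated by Bayes' rule on each model's predictive likelihood.

```python
# Hedged sketch: an ensemble of models, one of them a maximum-entropy
# catch-all, reweighted by Bayes according to predictive performance.
# All names and numbers here are illustrative assumptions.

OUTCOMES = ["V", "W", "X", "Y", "Z"]

# Each model spreads probability over the outcomes it expects;
# the maxent model spreads its mass uniformly over everything.
models = {
    "A": {"V": 1/3, "W": 1/3, "X": 1/3},
    "B": {"X": 1/3, "Y": 1/3, "Z": 1/3},
    "maxent": {o: 1/5 for o in OUTCOMES},
}

weights = {name: 1/3 for name in models}  # uniform prior over models

EPS = 1e-9  # floor so no model is ever ruled out with certainty

def update(observed):
    """Bayes: reweight each model by the likelihood it gave the observation."""
    global weights
    posterior = {m: weights[m] * max(models[m].get(observed, 0.0), EPS)
                 for m in models}
    total = sum(posterior.values())
    weights = {m: p / total for m, p in posterior.items()}

def predict():
    """Mixture forecast: weighted average of the models' predictions."""
    return {o: sum(weights[m] * models[m].get(o, 0.0) for m in models)
            for o in OUTCOMES}

update("X")  # A and B both predicted X, so they stay roughly tied
update("W")  # only A predicted W, so A's weight jumps
```

Because the maxent model never assigns zero probability to anything, a genuinely outrageous observation shifts weight toward "something I have not thought of" instead of being silently rejected.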

@soundnfury multiple models do sound like a good option to dodge some of the risk.

The other bit that I _think_ helps is running the model against other data.

X happens. Model A predicts VWX. Model B predicts XYZ. If we see W, predict V. If we see Y, predict Z.

This tends to make people very mad if they only have one model though.
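The two-model example can be made concrete with a toy sketch. This is only an illustration of the logic in the post, using its V/W/X and X/Y/Z labels; the helper names are invented here.

```python
# Toy version of the post's example: model A expects {V, W, X},
# model B expects {X, Y, Z}. Seeing X rules nothing out; seeing W
# eliminates B (so predict V); seeing Y eliminates A (so predict Z).
# Function names are hypothetical, chosen for this illustration.

MODEL_A = {"V", "W", "X"}
MODEL_B = {"X", "Y", "Z"}

def surviving_models(observations):
    """Keep only the models consistent with everything seen so far."""
    models = {"A": MODEL_A, "B": MODEL_B}
    return {name: outcomes for name, outcomes in models.items()
            if set(observations) <= outcomes}

def remaining_predictions(observations):
    """Union of the unobserved outcomes of the surviving models."""
    alive = surviving_models(observations)
    if not alive:
        return set()
    return set().union(*alive.values()) - set(observations)
```

With only one model there is nothing to eliminate: a disconfirming observation leaves you with no move except denying the data, which is the failure mode the thread started with.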
