The nice thing about reasoning from a model is that when something outrageous starts trending, you can ask if it makes sense under the model and reject it if it doesn't.

And most of the time you'll be right, because most of the time trending rumors are bogus.

But if something outrageous _actually does_ happen, you might not believe it because it doesn't fit your model.

You think you are justifiably ignoring a bogus rumor. Everyone else thinks you are shutting your eyes to reality, or are just crazy.

So you need a metamodel to decide when to throw out your model as corrupt. But that leads to an infinite regress.

(I have no solution to this problem.)

When the WHO said that masks don't work, I ignored them, because my model said that masks should work, and also that official spokespeople would lie to control public behavior. (In this case, to stop people from buying up masks.)

I think this held up relatively well.

When people started talking about Trump conspiring with the Russians to hack the DNC, I rejected it because the claim lined up too well with the Clinton campaign's talking points.

That held up less well.

When Q says to Trust The Plan, I reject it because it looks more like fiction than any of the known secret activities of government insiders.

I think I'm solid on this one, but time will tell, I guess.

When Colin Powell said Iraq had weapons of mass destruction, I believed it, because it matched my understanding of how Saddam Hussein ran his country.

Oops.

In my defense, Al Gore came to the same conclusion using the same logic. (And he had every reason to argue against it.)

I think model-based reasoning is putting me ahead on average, but it's hard to know for sure.

And it's really hard to know if you're inside one of the instances where it's failing.

@WomanCorn I suppose many people can be said to use model-based reasoning. Some models are just better than others.

This is why it's hard to convince conspiracy theorists: their model rejects inputs that contradict the model.

@WomanCorn Unfortunately AIUI the only fully robust way to handle this is Bayes on a Solomonoff prior, which is so uncomputable it's not even funny.
But I think you can get somewhere by not fixating on a _single_ model; instead, keep an ensemble of models, one of which is the maxentropic "Something I have not thought of" model, and weight them according to their predictive performance over time. (Make sure to *actually* predict and not just retrodict.)
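
As a rough illustration of that weighting idea (not anything from the thread itself), here is a minimal Python sketch of a Bayesian-style update over an ensemble that includes a flat catch-all model. All model names and numbers are invented for the example.

```python
# Minimal sketch: reweight an ensemble of models by how well each one
# predicted the outcome that actually happened (a simple Bayesian /
# multiplicative-weights update). Names and numbers are illustrative.

def update_weights(weights, likelihoods):
    """Multiply each model's weight by the probability it gave the observed outcome, then renormalize."""
    posterior = {name: weights[name] * likelihoods[name] for name in weights}
    total = sum(posterior.values())
    return {name: w / total for name, w in posterior.items()}

# Three models, including a flat "something I haven't thought of" catch-all.
weights = {"model_a": 0.45, "model_b": 0.45, "unknown_unknowns": 0.10}

# Probability each model assigned to the outcome we actually observed.
# The catch-all is maximum-entropy, so it gives every outcome a modest,
# fixed probability rather than committing to anything.
likelihoods = {"model_a": 0.70, "model_b": 0.10, "unknown_unknowns": 0.25}

weights = update_weights(weights, likelihoods)
print(weights)  # model_a gains weight; model_b loses most of its share
```

Run over many genuine predictions (not retrodictions), the model with the better track record ends up carrying most of the weight, while the catch-all keeps you from ever being fully captured by any one of them.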

@soundnfury Multiple models do sound like a good option for dodging some of the risk.

The other bit that I _think_ helps is running the model against other data.

X happens. Model A predicts V, W, and X; Model B predicts X, Y, and Z. If we then see W, that favors A, so predict V. If we see Y, that favors B, so predict Z.
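
As a toy illustration of that filtering step (the event names and predicted sets are made up), here is a short Python sketch:

```python
# Toy sketch: two models overlap on the observed event X but diverge
# elsewhere, so a second observation tells them apart.

model_predictions = {
    "A": {"V", "W", "X"},
    "B": {"X", "Y", "Z"},
}

def surviving_models(observations):
    """Keep only the models whose predicted set covers everything seen so far."""
    return {name: preds for name, preds in model_predictions.items()
            if observations <= preds}

def next_predictions(observations):
    """Union of what the surviving models still predict beyond what we've seen."""
    remaining = set()
    for preds in surviving_models(observations).values():
        remaining |= preds - observations
    return remaining

print(next_predictions({"X"}))       # both survive: {'V', 'W', 'Y', 'Z'}
print(next_predictions({"X", "W"}))  # only A survives: {'V'}
print(next_predictions({"X", "Y"}))  # only B survives: {'Z'}
```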

This tends to make people very mad if they only have one model though.
