Vox and CNN say "the Pentagon".
Fox 5 DC says "officials", and "at the request of the Secret Service."
The Hill says "the defense department".
NPR uses the passive voice to avoid saying who.
I think model-based reasoning is putting me ahead on average, but it's hard to know for sure.
And it's really hard to know if you're inside one of the instances where it's failing.
When Colin Powell says Iraq has weapons of mass destruction, I believe it, because it matches my understanding of how Saddam Hussein ran his country.
Oops.
In my defense, Al Gore came to the same conclusion using the same logic. (And he had every reason to argue against it.)
When Q says to Trust The Plan, I reject it because it looks more like fiction than any of the known secret activities of government insiders.
I think I'm solid on this one, but time will tell, I guess.
When people started talking about Trump conspiring with the Russians to hack the DNC, I rejected it because the claim lined up too well with Clinton's campaign talking points.
That held up less well.
When the WHO said that masks don't work, I ignored them because my model said that masks should work, and also that official spokespeople would lie to control public behavior. (In this case, to stop people from buying up masks.)
I think this held up relatively well.
The nice thing about reasoning from a model is that when something outrageous starts trending, you can ask whether it makes sense under the model and reject it if it doesn't.
And most of the time you'll be right, because most of the time trending rumors are bogus.
But if something outrageous _actually does_ happen, you might not believe it because it doesn't fit your model.
You think you are justifiably ignoring a bogus rumor. Everyone else thinks you are shutting your eyes to reality, or just crazy.
So you need a metamodel to decide when to throw out your model as corrupt. But that's an infinite regress.
(I have no solution to this problem.)