I feel like more people should be concerned about how no interesting expectation taken under the complexity prior normalises, but maybe they are and I’m just ignoring them, or maybe I’m missing something

“How many jelly beans are in the jar?” Well, if your loss function is inverse-googological or harsher in the difference, then for any number you can name you’d regret not naming a bigger one, because of your prior on a very, very large number of jelly beans
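
Spelling that worry out with a minimal sketch (the notation is all mine: g is the number you name, L is the loss as a function of the difference, K is Kolmogorov complexity, and I’m assuming the prior gives a count n weight of order 2^{-K(n)}):

\[
\mathbb{E}\!\left[\text{loss} \mid g\right]
= \sum_{n \ge 0} P(n)\, L\!\left(|n - g|\right)
\;\ge\; \sum_{k \ge 1} 2^{-K(3\uparrow\uparrow k)}\, L\!\left(3\uparrow\uparrow k - g\right)
= \infty,
\]

since K(3↑↑k) is roughly K(k), so the prior on the k-th tower shrinks only about polynomially in k while a loss that’s googologically harsh in the difference grows tower-fast; the tail terms dominate for every finite g, and on those terms a bigger g always does at least as well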

“What is the nature of reality?” Well, if you use the SIA to update on anthropics, then no matter which Turing machine you guess is reality, there’s an arbitrarily more probable one with a smaller prior probability
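
What I mean, sketched (N(T) is my label for the number of observers the machine T instantiates): SIA reweights each candidate machine by its observer count,

\[
P_{\text{SIA}}(T) \;\propto\; 2^{-|T|}\, N(T),
\]

and a few extra bits of program can buy an N(T) as large as any number a machine of that length can compute, so for any T there’s a slightly longer T' whose weight 2^{-|T'|} N(T') is as much bigger as you like, and the weights have nothing finite to normalise against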

“What should I do if a mugger claims to be god?” Pay them
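
Because, on the usual back-of-the-envelope (made-up numbers, K as before):

\[
\mathbb{E}\!\left[U(\text{pay})\right]
\;\approx\; 2^{-K(\text{claim})}\cdot 3\uparrow\uparrow\uparrow 3 \;-\; \text{a few dollars}
\;>\; 0,
\]

since the claim “I’ll conjure 3↑↑↑3 utils” is only a few hundred bits long, and its complexity penalty is nowhere near as small as 1/(3↑↑↑3)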

A leverage prior stops expected utilities from being unnormalisable, and maybe a well-defined expected utility is all you care about, because expected utility is the thing you’re maximising anyway, but leverage priors are weird, and ending up with a completely janked intermediate world model is weird, so 🤔
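
For reference, my reading of the leverage-penalty move (N and u are my labels): discount the hypothesis that you’re positioned to affect N people by roughly 1/N, so that

\[
\mathbb{E}\!\left[U(\text{pay})\right]
\;\lesssim\; \frac{1}{N}\cdot N u \;=\; u,
\]

which stays bounded by the per-person utility u however large the claimed N = 3↑↑↑3 gets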
