@niplav The closest thing to a simplicity prior is a regularization term in the loss function with a penalty for large weights.
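A minimal sketch of what that looks like, assuming a plain MSE loss with an L2 penalty (the weight values, data, and `lam` here are toy choices, not anything from the discussion):

```python
import numpy as np

# Hypothetical sketch: mean-squared-error loss plus an L2 penalty on the
# weights. The lam * sum(w**2) term charges large weights extra, nudging
# the optimizer toward small weights -- a rough "simplicity prior".
def loss(w, X, y, lam=0.1):
    pred = X @ w
    mse = np.mean((pred - y) ** 2)
    return mse + lam * np.sum(w ** 2)

X = np.array([[1.0, 0.0], [0.0, 1.0]])
y = np.array([0.1, 0.2])

w_small = np.array([0.1, 0.2])    # fits the toy data with small weights
w_large = np.array([10.0, 20.0])  # same direction, 100x larger

# Even before fit quality is considered, the large weights pay a much
# bigger penalty, so the regularized loss prefers the small solution.
print(loss(w_small, X, y))
print(loss(w_large, X, y) > loss(w_small, X, y))
```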
Having a large range of weight magnitudes creates a sort of ordinal hierarchy among the floats, where small values can never interact with large ones again (the small ones just get absorbed by rounding). That enables overfitting, i.e. memorizing restricted cases, i.e. higher complexity.
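A toy illustration of that absorption effect (the specific magnitudes are just illustrative):

```python
# Once a double is large enough, adding a much smaller double changes
# nothing: the small value falls below the large value's rounding
# granularity (ulp) and is absorbed, so the two scales can no longer
# interact.
big = 1e17    # ulp of 1e17 is 16.0 in IEEE 754 double precision
small = 1.0   # far below that granularity

print(big + small == big)  # True: the addition is a no-op
```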