@TetraspaceGrouping
Hm, true.
Per the universal approximation theorem, neural networks can approximate any continuous function (on a compact domain), but some functions are clearly easier to approximate than others
And the horribly discontinuous ones are probably very hard to approximate
Perhaps it's that K-Lipschitz continuous functions are easier to approximate for smaller K?
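Not from the thread, but here's a minimal numpy sketch of the K-Lipschitz intuition: fit the same small tanh net, with the same budget of full-batch gradient-descent steps, to sin(2πkx), which has Lipschitz constant 2πk on [0, 1], for increasing k, plus a step function as the discontinuous extreme. The width, step count, learning rate, and target family are all illustrative assumptions; the expectation is that the final error grows with K and is worst for the step.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_mlp(f, hidden=64, steps=5000, lr=0.05):
    """Fit a one-hidden-layer tanh MLP to f on [0, 1] with full-batch gradient descent."""
    x = np.linspace(0, 1, 256).reshape(-1, 1)
    y = f(x)
    W1 = rng.normal(0, 1, (1, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 1 / np.sqrt(hidden), (hidden, 1)); b2 = np.zeros(1)
    for _ in range(steps):
        h = np.tanh(x @ W1 + b1)
        err = (h @ W2 + b2) - y            # residuals for the MSE loss
        gW2 = h.T @ err / len(x); gb2 = err.mean(0)
        dh = (err @ W2.T) * (1 - h ** 2)   # backprop through tanh
        gW1 = x.T @ dh / len(x); gb1 = dh.mean(0)
        W1 -= lr * gW1; b1 -= lr * gb1
        W2 -= lr * gW2; b2 -= lr * gb2
    pred = np.tanh(x @ W1 + b1) @ W2 + b2
    return float(((pred - y) ** 2).mean())

# sin(2*pi*k*x) has Lipschitz constant 2*pi*k on [0, 1]; a step function
# is the "horribly discontinuous" extreme.
for k in (1, 4, 16):
    mse = train_mlp(lambda x, k=k: np.sin(2 * np.pi * k * x))
    print(f"K = {2 * np.pi * k:6.1f}   final MSE = {mse:.5f}")
print(f"step function   final MSE = {train_mlp(lambda x: np.sign(x - 0.5)):.5f}")
```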
Okay, new question: which prior do neural networks trained with gradient descent implement?
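One empirical handle on that question (my assumption, not something from the thread): before gradient descent even runs, random initialization already induces a distribution over functions, and for wide one-hidden-layer nets that distribution converges to a Gaussian process (Neal, 1996). Here's a sketch that estimates its covariance at a few probe inputs by sampling freshly initialized nets; the width, probe points, and sample count are arbitrary, and training dynamics may of course pull the learned functions away from this initialization prior.

```python
import numpy as np

rng = np.random.default_rng(0)
xs = np.linspace(-1, 1, 5).reshape(-1, 1)   # a few probe inputs

def sample_net_output(width=512):
    """Output at the probe points of a freshly initialized one-hidden-layer tanh net."""
    W1 = rng.normal(0, 1, (1, width)); b1 = rng.normal(0, 1, width)
    W2 = rng.normal(0, 1 / np.sqrt(width), (width, 1))
    return (np.tanh(xs @ W1 + b1) @ W2).ravel()

# Empirical covariance across 2000 random inits: the prior over function
# values at these points (in the infinite-width limit, a Gaussian process).
samples = np.stack([sample_net_output() for _ in range(2000)])
print(np.round(np.cov(samples, rowvar=False), 3))
```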
@AlexMulkerrin and I, on the other hand, have to read the Greg Egan novels; I guess for every favorites list there is a to-read sublist out there
@WomanCorn @TetraspaceGrouping
Yep, someone should probably do that
I wonder whether anyone will actually get this one
I operate by Crocker's rules[1].