it would be helpful if we knew whether, for almost all random functions ℝⁿ → ℝ, slightly perturbing any single input point changes the output a lot
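(one guess at how to make "changes the output a lot" precise, purely my own framing: for f drawn from some distribution over functions ℝⁿ → ℝ, ask whether with high probability, for every x and every small ε > 0, sup over ‖δ‖ ≤ ε of |f(x + δ) − f(x)| is at least c·ε, with c not shrinking as n grows — a uniform lower bound on local sensitivity, i.e. the opposite direction of a Lipschitz bound)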

then one might also try to prove (or disprove) the same thing for functions implementable by some classes of neural networks
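a quick empirical version of the neural-net case, just a sketch under assumptions I'm picking myself (a randomly initialized 2-layer tanh MLP with Gaussian weights, sensitivity estimated by finite differences along single coordinates), not anything the conjecture commits to:

# draw a random 2-layer tanh MLP ℝⁿ → ℝ with Gaussian weights and estimate how much
# f moves under small coordinate perturbations, via finite differences
import numpy as np

rng = np.random.default_rng(0)
n, hidden, eps = 64, 256, 1e-3

# hypothetical architecture and 1/sqrt(fan-in) weight scale, chosen for illustration
W1 = rng.normal(0, 1 / np.sqrt(n), size=(hidden, n))
b1 = rng.normal(0, 1, size=hidden)
w2 = rng.normal(0, 1 / np.sqrt(hidden), size=hidden)

def f(x):
    return w2 @ np.tanh(W1 @ x + b1)

# average |f(x + eps*e_i) - f(x)| / eps over random points and random coordinates
ratios = []
for _ in range(100):
    x = rng.normal(size=n)
    i = rng.integers(n)
    e = np.zeros(n)
    e[i] = eps
    ratios.append(abs(f(x + e) - f(x)) / eps)

print("mean per-coordinate sensitivity:", np.mean(ratios))

whether this kind of average finite-difference number is the right notion of "a lot" is exactly the open part of the question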


is this just reinventing the power-seeking theorems? gotta check
