excellent thread on several core concepts of ML, namely encoding spaces for datasets, and how training approximates the latent manifolds thereof
these concepts apply far beyond ML though, e.g. to modeling the state spaces of complex systems
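roughly what i mean, as a toy sketch (my own made-up example, not from the thread): points sampled off a 1d manifold sitting in a 2d encoding space, with a simple curve fit standing in for "training" — interpolation then happens along the fitted manifold, not between raw data points

```python
import numpy as np

rng = np.random.default_rng(0)

# encoding space: R^2; latent manifold: the curve (t, sin(t))
t = rng.uniform(-3, 3, size=200)                 # latent coordinate
data = np.stack([t, np.sin(t)], axis=1)          # points on the manifold
data += rng.normal(scale=0.05, size=data.shape)  # observation noise

# "training": approximate the manifold from samples (a polynomial fit
# stands in here for whatever parametric model you'd actually use)
coeffs = np.polyfit(data[:, 0], data[:, 1], deg=7)
manifold = np.poly1d(coeffs)

# interpolation *along the learned manifold*, not straight-line interpolation
# between raw samples -- the distinction the QTd thread is about
x = np.linspace(-2.5, 2.5, 11)
print(np.c_[x, manifold(x)])  # stays close to (x, sin(x))
```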
---
RT @fchollet
A common beginner mistake is to misunderstand the meaning of the term "interpolation" in machine learning.
Let's take a look ⬇️⬇️⬇️
https://twitter.com/fchollet/status/1450524400227287040
a state space is a representation of the possible states of some system; an encoding space, in the terminology of the QTd thread
these come in quite handy for understanding chaotic systems, among others
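toy sketch of the idea (my own example, not from the QTd tweets): the hénon map, a 2d chaotic system whose state space is the (x, y) plane; each point is one complete state, the dynamics are just a map on that space, and two states that start almost identical diverge exponentially

```python
import numpy as np

def henon_step(state, a=1.4, b=0.3):
    # one step of the henon map: the state is a single point in the plane
    x, y = state
    return np.array([1 - a * x**2 + y, b * x])

# two initial states that differ by 1e-9 in one coordinate
s1 = np.array([0.1, 0.1])
s2 = s1 + np.array([1e-9, 0.0])

for n in range(40):
    s1, s2 = henon_step(s1), henon_step(s2)
    if n % 10 == 0:
        # separation grows exponentially: sensitive dependence on
        # initial conditions, the signature of chaos
        print(n, np.linalg.norm(s1 - s2))
```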
---
RT @pee_zombie
great illustration of the initial state phase space of a 2d chaotic system https://twitter.com/Rainmaker1973/status/1342037328836128771
https://twitter.com/pee_zombie/status/1342223630403514374
in these cases, the latent manifold is a lossy projection from phenomena space to encoding space; much of the phenomenon may be unrepresentable in the specific encoding space in use. this is why matching the encoding scheme to the system is critical!
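toy sketch of what "matching the encoding scheme" means (made-up example): the same phenomenon, position on a circle, under two encodings — a raw angle ignores the wraparound, so nearly identical states look maximally far apart, while (cos, sin) respects the topology

```python
import numpy as np

theta_a, theta_b = 0.01, 2 * np.pi - 0.01   # two nearly identical states

# encoding 1: the raw angle (mismatched -- ignores the wraparound)
print(abs(theta_a - theta_b))               # ~6.26, looks far apart

# encoding 2: a point on the unit circle (matches the circular topology)
enc = lambda t: np.array([np.cos(t), np.sin(t)])
print(np.linalg.norm(enc(theta_a) - enc(theta_b)))  # ~0.02, correctly close
```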