excellent thread on several core concepts of ML, namely encoding spaces for datasets, and how training approximates the latent manifolds thereof
these concepts are much more generally applicable than just to ML, for ex. to modeling the state spaces of complex systems as well
---
RT @fchollet
A common beginner mistake is to misunderstand the meaning of the term "interpolation" in machine learning.
Let's take a look ⬇️⬇️⬇️
https://twitter.com/fchollet/status/1450524400227287040
the thread discusses how the encoding space is larger than the latent manifold, which is a subset of that space; but it's worth mentioning that the actual state space of the system being modeled can be much larger than the encoding space!
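a toy numpy sketch of this nesting (setup entirely illustrative, not from the thread): data with 2 true degrees of freedom, linearly embedded in a 10-D encoding space, occupies only a 2-D manifold of that space, which a crude PCA-style dimension estimate can detect:

```python
import numpy as np

rng = np.random.default_rng(0)

# 2 latent degrees of freedom: the "true" state of the system
latent = rng.normal(size=(1000, 2))

# linear embedding into a 10-D encoding space; the data manifold
# is a 2-D subspace of that space
embed = rng.normal(size=(2, 10))
data = latent @ embed

# arbitrary points in the same encoding space, for contrast
noise = rng.normal(size=(1000, 10))

def effective_dims(x, var_threshold=0.99):
    """crude intrinsic-dimension estimate via PCA explained variance"""
    x = x - x.mean(axis=0)
    s = np.linalg.svd(x, compute_uv=False)
    cum = np.cumsum(s**2) / np.sum(s**2)
    return int(np.searchsorted(cum, var_threshold) + 1)

print(effective_dims(data))   # -> 2: the manifold, not the full space
print(effective_dims(noise))  # -> ~10: noise fills the encoding space
```

random points in the encoding space almost never land on the manifold; same reason random pixel arrays almost never look like natural images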
---
RT @pee_zombie
this is why we praise those able to explain complex concepts simply; minimally-lossy dimensional reduction is a difficult skill to master! mapping btwn the super high-dimensional spa…
https://twitter.com/pee_zombie/status/1439992610781794305
in these cases, the latent manifold is a lossy projection from phenomena space to encoding space; many of the phenomena may be unrepresentable in the specific encoding space in use. this is why matching the encoding scheme to the system is critical!
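a minimal illustration of that mismatch (hypothetical toy encoding schemes, not from either thread): the system's true state is an angle on a circle, and a 1-D cosine encoding collapses distinct states that a 2-D (cos, sin) encoding keeps apart:

```python
import numpy as np

# phenomenon: the system's true state is an angle on a circle
s1, s2 = np.pi / 3, -np.pi / 3   # two genuinely distinct states

# encoding scheme A: 2-D (cos, sin) -- distinct states stay distinct
a1 = np.array([np.cos(s1), np.sin(s1)])
a2 = np.array([np.cos(s2), np.sin(s2)])

# encoding scheme B: 1-D (cos only) -- these states collapse together
b1, b2 = np.cos(s1), np.cos(s2)

print(np.allclose(a1, a2))  # False: encoding A keeps the states apart
print(np.allclose(b1, b2))  # True: encoding B can't represent the difference
```

no amount of training can recover a distinction the encoding already destroyed, hence the emphasis on matching the scheme to the system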