excellent thread on several core concepts of ML, namely encoding spaces for datasets, and how training approximates the latent manifolds thereof

these concepts are applicable much more generally than just to ML, for ex. in modeling the state spaces of complex systems
---
RT @fchollet
A common beginner mistake is to misunderstand the meaning of the term "interpolation" in machine learning.

Let's take a look ⬇️⬇️⬇️
twitter.com/fchollet/status/14


state spaces are representations of the possible states of some system; an encoding space, in the terminology of the QTd thread

these come in quite handy for understanding chaotic systems, among others
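a toy sketch of that idea (my own example, not from the thread): iterate the Hénon map, a standard 2d chaotic system, over a grid of initial states, and you get a crude picture of the structure of its state space

```python
import numpy as np

# hypothetical sketch: probe the state space of the Henon map
# (a classic 2d chaotic system) by checking which initial states
# stay bounded under iteration; a, b are the standard parameters
a, b = 1.4, 0.3

def stays_bounded(x, y, steps=100, bound=1e6):
    """iterate the Henon map from (x, y); True if the orbit stays bounded"""
    for _ in range(steps):
        x, y = 1 - a * x**2 + y, b * x
        if abs(x) > bound or abs(y) > bound:
            return False
    return True

# sample a grid of initial states; the resulting boolean field is a rough
# map of the state space (the basin of bounded orbits)
xs = np.linspace(-2, 2, 200)
ys = np.linspace(-2, 2, 200)
basin = np.array([[stays_bounded(x, y) for x in xs] for y in ys])
print(f"{basin.mean():.0%} of sampled initial states stay bounded")
```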

---
RT @pee_zombie
great illustration of the initial state phase space of a 2d chaotic system twitter.com/Rainmaker1973/stat
twitter.com/pee_zombie/status/

the thread discusses how the encoding space is larger than the latent manifold, which is a subset of that space; but it's worth mentioning that the actual state space of the system being modeled can be much larger than the encoding space!
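a toy illustration of that gap (my example, not the thread's): data confined to a 1d manifold, here a circle, embedded in a 3d encoding space. random points drawn from the full encoding space essentially never land on the manifold

```python
import numpy as np

# hypothetical sketch: points sampled from a 1d latent manifold (a circle)
# embedded in a 3d encoding space
rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 1000)            # 1 latent degree of freedom
points = np.stack([np.cos(theta),                   # 3 encoding dimensions
                   np.sin(theta),
                   np.zeros_like(theta)], axis=1)
print("data shape in encoding space:", points.shape)

# random points drawn from the whole encoding space almost never fall
# near the manifold; it occupies a vanishingly small slice of the space
samples = rng.uniform(-1, 1, size=(100_000, 3))
dists = np.sqrt((np.linalg.norm(samples[:, :2], axis=1) - 1) ** 2
                + samples[:, 2] ** 2)
print(f"fraction of random encodings within 0.05 of the manifold: "
      f"{(dists < 0.05).mean():.4f}")
```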

---
RT @pee_zombie
this is why we praise those able to explain complex concepts simply; minimally-lossy dimensional reduction is a difficult skill to master! mapping btwn the super high-dimensional spa…
twitter.com/pee_zombie/status/

in these cases, the latent manifold is a lossy projection from phenomena space into encoding space; many of the phenomena might be unrepresentable in the specific encoding space in use. this is why matching the encoding scheme to the system is critical!
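another toy sketch (again my own; the encode/decode functions are hypothetical): a projection that simply drops dimensions of the phenomena space. whatever varies along the dropped directions cannot be represented in the encoding, no matter how well you train

```python
import numpy as np

# hypothetical sketch: a lossy projection from a higher-dimensional
# "phenomena space" down to a lower-dimensional encoding space
rng = np.random.default_rng(0)
phenomena = rng.normal(size=(500, 5))               # 5d phenomena space

def encode(x):
    # an encoding scheme that only keeps the first 2 coordinates
    return x[:, :2]

def decode(z):
    # decoding has to guess the missing dimensions; here it zero-fills them
    return np.hstack([z, np.zeros((len(z), 3))])

reconstructed = decode(encode(phenomena))
loss = np.mean((phenomena - reconstructed) ** 2)
print(f"reconstruction error from the lossy encoding: {loss:.3f}")
```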

