@WomanCorn
Especially if we get good mechanistic interpretability, there'd be some nice boundary conditions to use during training ("oh, this model clearly still has circuit xyz, maybe show it datapoint 67559438 a couple more times so that it learns geography better", or even directly editing networks).
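One way to read that training-loop idea: an interpretability probe flags datapoints tied to a surviving circuit, and the data sampler upweights them so the model sees them more often. A minimal sketch with hypothetical names (`resample_weights`, the flagged-id set), not any real interpretability API:

```python
def resample_weights(weights, flagged_ids, factor=2.0):
    """Upweight datapoints flagged by a (hypothetical) circuit probe,
    so the sampler shows them to the model more often."""
    return {
        i: w * (factor if i in flagged_ids else 1.0)
        for i, w in weights.items()
    }

# Example: the probe flagged datapoint 67559438, so its sampling
# weight doubles while everything else is unchanged.
weights = {67559438: 1.0, 12345: 1.0}
updated = resample_weights(weights, flagged_ids={67559438})
```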
@niplav I'm not sure how much of the magic of LLMs is that the input and output are both text.
If we can get something that learns from videos, there may be more value in that.
I expect the text -> art bots will have similar limitations, but probably decoupled from the text -> text ones.