@WomanCorn
Hm. This feels too pessimistic to me (insofar as "pessimistic" is the right word).
I guess if I take "LLM" very narrowly, then yes, we're running out of training data. But we have much, much more video data [citation needed] and can much more easily generate more, *and* I have an inkling that there's some alpha left in generating true training data and doing RLHF with real-world prediction.
I guess I think we can probably reduce the confabulation problem enough that it doesn't matter *as much*.