Because the babble problem isn't solved, people will learn not to trust the output of an LLM. Simple, raw factual errors will be caught often enough to keep people on their toes.
It will put cheap copywriters out of a job, but will never be good enough for research.
@WomanCorn This feels quite true to me. (Where "new paradigm" could also just mean "a better activation function was found.")
We will reach Peak Training Data in the next five years: the point where you can't improve the model by feeding it more training data, because everything worth using is already in the training set.