Because the babble problem isn't solved, people will learn not to trust the output of an LLM. Simple, raw factual errors will be caught often enough to keep people on their toes.
It will put cheap copywriters out of work, but it will never be good enough for research.
Reminder that you shouldn't listen to me about anything. I'm a dilettante and my knowledge is a mile wide and an inch deep.