Lots of good ideas in @jamesshore's article on testing:
"This pattern language... doesn’t use broad tests, doesn’t use mocks, doesn’t ignore infrastructure, and doesn’t require architectural changes."
https://www.jamesshore.com/v2/projects/nullables/testing-without-mocks
I am shocked by the w3schools redemption arc.
Reminder that you shouldn't listen to me about anything. I'm a dilettante and my knowledge is a mile wide and an inch deep.
In 30 years, LLMs will be used for short text generation in products that aren't considered to be AI anymore.
We won't ever hit Peak Parameters, because a new paradigm will appear and draw people away from LLMs before we do.
We will reach a point of diminishing returns on increasing parameters within the next 20 years, where the cost of hardware to increase parameter counts isn't worth the increase in value you get from the model.
We will reach Peak Training Data in the next five years, where you can't improve the model by feeding it more training data because you're already using everything worth using.
Because the babble problem isn't solved, people will learn not to trust the output of an LLM. Simple, raw factual errors will be caught often enough to keep people on their toes.
It will put cheap copywriters out of a job, but will never be good enough for research.
The babble problem will not be solved, effectively ever. It cannot be solved without a major change in architecture.