We will reach a point of diminishing returns on parameter counts within the next 20 years, where the hardware cost of adding more parameters isn't worth the increase in value you get from the model.
We will reach Peak Training Data in the next five years, where you can't improve the model by feeding it more training data because you're already using everything worth using.
Because the babble problem isn't solved, people will learn not to trust the output of an LLM. Simple, raw factual errors will be caught often enough to keep people on their toes.
LLMs will put cheap copywriters out of a job, but will never be good enough for research.
The babble problem will not be solved. Effectively ever. It cannot be solved without a major change in architecture.
_Fullmetal Alchemist: Brotherhood_ is top rated for a reason.
_Steins;Gate_ is my all time favorite.
_Kaguya-sama: Love is War_ is laugh-out-loud hilarious.
_Gurren Lagann_ is full throttle badassery.
_Clannad_ + _Clannad: After Story_ will make you cry.
_Puella Magi Madoka★Magica_ is good, but not what it looks like on the cover.
_Kill La Kill_ is outrageous.
_Yuru Camp_ is totally cozy.
_Cyberpunk: Edgerunners_ is excellent.
_Kaguya-sama: Love is War -The First Kiss That Never Ends-_
Always wonderful to see more of this manga adapted. (I hope they do the whole thing, but evidence of that is thin on the ground.)
Serves as a good endcap anyway.
I wonder how far forward you could port a Datapoint 2200 (1970) program to run on modern x86 computers. The Intel 8008 was originally made as a single-chip version of the Datapoint 2200's TTL-based CPU but was then rebranded as a microprocessor, so it's directly compatible. The 8008 is translatable to the 8080: you can run a simple conversion tool on the 8008 assembly to get functionally equivalent 8080 assembly, and the same goes for the 8080 to the 8086. After the 8086, all x86 computers have maintained binary backwards compatibility, although every new major version makes it harder to actually make use of that compatibility.
But it would be amusing to get a Datapoint 2200 program running on a brand-new x86 computer.
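Just to give a flavor of that first hop: the 8008's original mnemonics encoded their operands in the name (LAB means "load A from B"), and they map almost one-to-one onto 8080 mnemonics, so the core of such a conversion tool is little more than a lookup table. Intel shipped exactly this kind of tool (CONV86) for the 8080 → 8086 hop. Here's a minimal sketch in Go; the few mappings shown are real 8008 → 8080 equivalences, but operand handling and the rest of the opcode set are deliberately elided.

```go
package main

import (
	"fmt"
	"strings"
)

// Toy 8008 -> 8080 mnemonic table. Register moves and ALU ops translate
// with a plain lookup; a real converter also had to handle immediates,
// addresses, and the full opcode set.
var i8008to8080 = map[string]string{
	"LAB": "MOV A,B", // load A from B
	"LAM": "MOV A,M", // load A from memory at (HL)
	"ADB": "ADD B",   // add B into A
	"INB": "INR B",   // increment B
	"CAL": "CALL",    // subroutine call; the target address carries over
	"JMP": "JMP",     // unconditional jump
	"RET": "RET",     // return
}

// translate converts one line of 8008 assembly, passing any operand
// through untouched and rejecting mnemonics the toy table doesn't know.
func translate(line string) (string, error) {
	fields := strings.Fields(line)
	if len(fields) == 0 {
		return "", nil // blank line, nothing to do
	}
	op, ok := i8008to8080[fields[0]]
	if !ok {
		return "", fmt.Errorf("unknown 8008 mnemonic %q", fields[0])
	}
	return strings.Join(append([]string{op}, fields[1:]...), " "), nil
}

func main() {
	for _, line := range []string{"LAM", "ADB", "RET"} {
		out, err := translate(line)
		if err != nil {
			fmt.Println(err)
			continue
		}
		fmt.Printf("%-4s -> %s\n", line, out)
	}
}
```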
For me, this is the last nail in the coffin for #Go.
I've never bought much into the language. I've been impressed by its native constructs for managing and synchronizing asynchronous operations, but its rigidity when it comes to programming paradigms (no proper object-oriented and functional constructs in the 21st century, seriously?) means that I see it as a language that seriously limits expressivity and is doomed to generate a lot of boilerplate. It's a language very good at solving the types of problems that are usually solved at Google (building and scaling large services that process a lot of stuff in a way where the code looks the same for all the employees), and little more than that.
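To make the boilerplate point concrete, here's the ceremony Go imposes on a trivial "read a config file, parse it" path: every fallible call gets its own three-line `if err != nil` block, and the language gives you no way to factor that pattern out. (The file name and struct below are made up purely for illustration.)

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// Config is a hypothetical application config.
type Config struct {
	Listen string `json:"listen"`
}

// loadConfig shows the error-handling ceremony: each fallible step
// needs its own explicit check and manual error wrapping.
func loadConfig(path string) (*Config, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return nil, fmt.Errorf("reading %s: %w", path, err)
	}
	var cfg Config
	if err := json.Unmarshal(data, &cfg); err != nil {
		return nil, fmt.Errorf("parsing %s: %w", path, err)
	}
	return &cfg, nil
}

func main() {
	cfg, err := loadConfig("config.json")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("listening on", cfg.Listen)
}
```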
Since #Rust really took off, I haven't seen a single reason why someone would pick Go.
And now here we go with the last straw: Google has proposed to embed telemetry collection *into the language toolchain itself*. And, according to Google, it should be enabled by default (opt-out rather than opt-in) because, of course, if they make it opt-in then not many people will explicitly enable a toggle that shares their usage of the compiler with one of today's biggest stalkers.
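For the record, the proposal does come with an off switch. If it ships as currently sketched, opting out would be a one-time toggle; the exact spelling below is my reading of the design discussion, so treat it as provisional:

```
go env -w GOTELEMETRY=off
```

Which is precisely the problem with opt-out: the burden lands on every single user, on every single machine.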
Not only that, but Google went a bit further: "I believe that open-source software projects need to explore new telemetry designs that help developers get the information they need to work efficiently and effectively, without collecting invasive traces of detailed user activity".
No. Open-source doesn't need telemetry. Telemetry introduces brittle dependencies on external systems with no functional net gain, and that's at odds with the whole idea of building and running things on your own.
Open-source software already has a very well-established way of collecting feedback: open an issue on the project, and if you want to change something, submit a PR. You don't need remote probes whose purpose is to funnel data back home. Even when done with the best intentions, that breaches the trust between the developer and the user - because data gets scooped out, and the systems that store and use that data aren't open. But, of course, if you've only used hammers in your life then the whole world will look like nails.
This could even backfire for Google. There are many applications out there where secrecy (and minimizing the amount of data that leaks outside of the network) is a strong requirement. The people building those applications may start considering alternatives to a language that sends telemetry data back to an external private company by default.
If you build open-source projects in Go, it's time to drop it and start considering alternatives. The market for modern compiled languages is much more competitive now than it was a decade ago. Many of us already knew that we couldn't trust a programming language developed by the largest surveillance company on the planet.
https://www.theregister.com/2023/02/10/googles_go_programming_language_telemetry_debate/