Lots of good ideas in @jamesshore's article on testing:

"This pattern language... doesn’t use broad tests, doesn’t use mocks, doesn’t ignore infrastructure, and doesn’t require architectural changes."

jamesshore.com/v2/projects/nul

<There's a secret argument which is convincing, but I promised not to tell it to you> is one of the worst argumentative moves there is.

It would be nice if there were a reliable framework for judging changes made to art in order to sell better in the marketplace.

What's reasonable and what's bad? And what about decisions made for marketplace reasons at the time of creation vs in revision?

"The Early Internet Era" aka when you first got access to a computer in a place where your parents couldn't look over your shoulder at any moment.

For people just arriving: SMS 2-factor authentication is only one flavor of 2-factor authentication, and one known to be less secure (SIM-swap attacks can intercept the codes).

If you have the option, you should use a 2-factor app, of which there are many.
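
For the curious, most 2-factor apps implement TOTP (RFC 6238): an HMAC over the current 30-second time window, keyed by a shared secret. A minimal sketch in Python; the base32 secret string is whatever the provider hands you at setup, and the name `totp` is just for illustration:

```python
import base64
import hmac
import struct
import time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    """Compute the current RFC 6238 TOTP code for a base32-encoded secret."""
    # Decode the shared secret, padding to a multiple of 8 characters.
    key = base64.b32decode(secret_b32.upper() + "=" * (-len(secret_b32) % 8))
    # The moving factor: how many 30-second periods have elapsed since the epoch.
    counter = int(time.time()) // period
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    # Dynamic truncation: take 4 bytes at an offset given by the low nibble.
    offset = digest[-1] & 0x0F
    value = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(value % 10 ** digits).zfill(digits)
```

Your app and the server both run this computation; the codes match because they share the secret and (roughly) the clock, and nothing ever travels over SMS.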

Modify /bin/ld to play the Sword Art Online "Link Start!" sound effect before it starts linking large binaries.
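
A wrapper is the low-effort way to do this, rather than actually patching the linker binary: move the real ld aside and drop a script in its place. A sketch in Python; every path and the "large" threshold here are assumptions:

```python
#!/usr/bin/env python3
# Hypothetical wrapper: assumes the real linker was moved to /bin/ld.real
# and this script was installed as /bin/ld in its place.
import os
import subprocess
import sys

REAL_LD = "/bin/ld.real"              # assumed new home of the real linker
SOUND = "/opt/sfx/link-start.wav"     # hypothetical "Link Start!" clip
LARGE = 100 * 1024 * 1024             # call >100 MB of inputs a "large" binary

# Rough size estimate: sum the object files and archives on the command line.
size = sum(
    os.path.getsize(arg)
    for arg in sys.argv[1:]
    if not arg.startswith("-") and os.path.isfile(arg)
)

if size > LARGE:
    # Fire and forget so the link doesn't wait for the sound to finish.
    subprocess.Popen(["aplay", "-q", SOUND])

# Hand off to the real linker with the original arguments.
os.execv(REAL_LD, [REAL_LD] + sys.argv[1:])
```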

Today on the discussion board:

It's very important not to misgender... Ungoliant, the spider-demon who plunged Valinor into darkness by destroying the Two Trees.

Ungoliant's pronouns are she/her.

LLMs are not sentient and are not people, but treating them in ways it would be bad to treat people is probably bad for you.

Training yourself to be cruel is bad for you.

Reminder that you shouldn't listen to me about anything. I'm a dilettante and my knowledge is a mile wide and an inch deep.

In 30 years, LLMs will be used for short text generation in products that aren't considered to be AI anymore.

We won't ever hit Peak Parameters, because a new paradigm will appear and draw people away from LLMs before we do.

We will reach a point of diminishing returns on parameter counts within the next 20 years, where the hardware cost of adding parameters isn't worth the added value you get from the model.

We will reach Peak Training Data in the next five years, where you can't improve the model by feeding it more training data because you're already using everything worth using.

Because the babble problem isn't solved, people will learn not to trust the output of an LLM. Simple, raw factual errors will be caught often enough to keep people on their toes.

It will put cheap copywriters out of a job, but will never be good enough for research.

The babble problem will not be solved. Effectively ever. It cannot be solved without a major change in architecture.
