@AraAraBot I feel like you have definitely failed your mandate here.

@niplav mine or theirs?

(No for both, generally. With an exception for cute blush.)

@niplav I'm not sure how much of the magic of LLMs is that the input and output are both text.

If we can get something that learns from videos, there may be more value in that.

I expect that the text -> art bots will have similar limitations, but probably decoupled from the text -> text ones.

Today on the discussion board:

It's very important not to misgender... Ungoliant, spider-demon who plunged Valinor into darkness by destroying the two trees.

Ungoliant's pronouns are she/her.

LLMs are not sentient, and are not people, but behaving towards them in a way that it would be bad to behave towards people is probably bad for you.

Training yourself to be cruel is bad for you.

Reminder that you shouldn't listen to me about anything. I'm a dilettante and my knowledge is a mile wide and an inch deep.

In 30 years, LLMs will be used for short text generation in products that aren't considered to be AI anymore.

We won't ever hit Peak Parameters, because a new paradigm will appear and draw people away from LLMs before we do.

We will reach a point of diminishing returns on increasing parameters within the next 20 years, where the cost of hardware to increase parameter counts isn't worth the increase in value you get from the model.

We will reach Peak Training Data in the next five years, where you can't improve the model by feeding it more training data because you're already using everything worth using.

Because the babble problem isn't solved, people will learn not to trust the output of an LLM. Simple, raw factual errors will be caught often enough to keep people on their toes.

It will put cheap copywriters out of a job, but will never be good enough for research.

The babble problem will not be solved. Effectively ever. It cannot be solved without a major change in architecture.

@delca

_Fullmetal Alchemist: Brotherhood_ is top rated for a reason.

_Steins;Gate_ is my all time favorite.

_Kaguya-sama: Love is War_ is laugh-out-loud hilarious.

_Gurren Lagann_ is full throttle badassery.

_Clannad_ + _Clannad: After Story_ will make you cry.

_Puella Magi Madoka★Magica_ is good, but not what it looks like on the cover.

_Kill La Kill_ is outrageous.

_Yuru Camp_ is totally cozy.

_Cyberpunk: Edgerunners_ is excellent.

Social Reasoning is when you look around you to see what other people are saying, then conclude that that must be true.

@panchromaticity this is a systemic failure of society's ability to recommend books.

(I don't hate HPMOR. It's fine. But it's not top tier.)

ChatGPT shows us how much work it is to keep an LLM on the rails.

Bing shows us how bizarre it can be when it goes off the rails.

Have you noticed that ramen noodles mostly taste the same regardless of what alleged flavor they are?

Good news: they have invented a flavor that doesn't do this.

Bad news: It's "chili" flavor and tastes like Minnesota tacos.
