@nyx @nemesis I for one look forward to the BLIT parrot corrupting my pathways to turn my money into subway-sandwich-kilometres

@sim @galena
This was the thing I was gesturing at: if consumers have nothing to do with carbon emissions, how come they are hit by carbon taxes?

As for the poor and working class being hit hardest, that seems true: carbon taxes would probably be roughly proportional to consumption, and richer people probably save/invest a larger share of their income (e.g. a household spending nearly all of its income pays the tax on nearly all of it, while one saving half pays it on only half). I consider "there's inequality" a separate problem, to be addressed separately, e.g. by redistribution.

@mira
Interesting case study of attack/defense ratio here: My mind immediately went to "But GPT-N can also *generate* boilerplate legalese, so they'll just add more."

But confabulation is a problem here! You don't want confabulated text in your laws, and dumping 20k generated pages into them makes that risk hard to avoid. Still, they could strategy-steal: let GPT-N generate the legalese, *check* via GPT-N whether it works, and then dump it on the opposition.

@galena I apologize, the last sentence was a bit too snarky. A better version would be:

"Consumers show their revealed preferences by being against carbon taxes (which are, like, clearly the right way to price this externality). That likely wouldn't be the case if their contribution was minimal."

@galena
I feel like this is misleading: consumers are *upstream* in the causal chain of corps producing carbon, and definitely not disconnected from it.

A better framing of the question would be something like "where can we intervene in what is happening to reduce carbon emissions", where corporations are probably the better node.

(But then! Consumers are *against* high carbon taxes! How could that possibly be 🤔 🤔 )

🤔 🤔

Embarrassment is a low-status emotion, right?


@WomanCorn
Especially if we get good mechanistic interpretability, there'd be some nice boundary conditions to use during training ("oh, this model clearly still has circuit xyz, maybe show it datapoint 67559438 a couple more times so that it learns geography better", or even directly editing networks).

@WomanCorn This feels quite true to me. (Where "new paradigm" could also just be "better activation function found").

@WomanCorn
Hm. This feels too pessimistic ("pessimistic") to me.

I guess if I take LLM very narrowly, then yes, we're running out of training data. But we have much, much more video data {{cn}} and can much more easily generate more, *and* I have an inkling that there's some alpha left in generating true training data + doing RLHF with real-world prediction.

I guess I think we can probably reduce the confabulation problem enough so that it doesn't matter *as much*.

@chjara Hm fair.

Although I'm unhappy in a world where string operations are heavily arch-dependent.

@MiaWinter "i think Modula is a beautiful name for a girl"

@chjara

Good take.

I'd phrase it differently, e.g. I don't think the compiler should do much more (integer types ok, stdarg maybe, everything else nah), but a lot of the stuff can go into external libraries.

do not talk to philosophers. Do not engage in philosophy. Eschew everything that starts with "meta". Do NOT give them a platform. I am so done with this.

niplav boosted
actually, first a short rant
i hate the libc
even outside the fact it's 99% antiquated nonsense you should never use,
a lot of it (integer types, stdarg, math functions, string/memory operations) should be handled by the compiler instead of the libc - in fact, most of the time libcs do these by just stubbing compiler intrinsics, which bruh
then stuff like memory allocation, file management, and really most IO-adjacent operations are really application/system-specific and should be put in a separate library instead of the libc proper
now you might say, wait then what would remain in the libc

exactly
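
(A minimal sketch of the "libcs mostly just stub compiler intrinsics" point, not from the boosted post; assume gcc or clang at -O2, names are illustrative. The small fixed-size memcpy below typically never reaches the libc at all: the compiler recognizes it as a builtin and lowers it to plain register/stack moves, and the libc's memcpy symbol is only the fallback.)

#include <stdio.h>
#include <string.h>

struct point { int x, y; };

/* With gcc/clang at -O2 this memcpy is treated as __builtin_memcpy and
   compiled into a couple of moves; no call into the libc is emitted.
   The libc definition only matters as a fallback, e.g. under -fno-builtin
   or when the copy size isn't known at compile time. */
static struct point copy_point(const struct point *src) {
    struct point dst;
    memcpy(&dst, src, sizeof dst);
    return dst;
}

int main(void) {
    struct point a = { 3, 4 };
    struct point b = copy_point(&a);
    printf("%d %d\n", b.x, b.y);
    return 0;
}

(Check with gcc -O2 -S and grep the output for a call to memcpy; for a copy like this there usually isn't one.)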

Is embarrassment arousing? | Gender

shall i read the posts from the 2021 lw review
