@niplav I'd guess: the more of what they're doing is conversational, the more it gets absorbed into status games. The more it is physical, the less.
(In this model, publishing academic papers is conversational.)
If you are building houses or doing work in a lab there's less space to posture. If you're just talking, there's more.
@cosmiccitizen Hatred is bad for you.
But then, I can join you in the "Bodhisattva, except for my one enemy" club.
@niplav this is a really good question, and I can't even begin to conceptualize how to estimate the answer.
How much counterfactually available outcome-value is left on the table by Hansonian instincts?
I.e., you have a community that tries to achieve X, but they don't achieve X as well as they could because of social status drives. How much better could they achieve X if they didn't have those drives (at the same level of intelligence)?
@schweeds lol, yeah.
Worse than that, though: if the LLM's behavior is generalized from examples, LessWrong is a hotbed of bad examples you wouldn't want your AI to learn from.
@eniko I have not had a headache, but I've had long and deep sleep the past few days. Probably making up for sleep debt, but starting to drift off in the early evening is throwing me for a loop.
Update. Turns out that John #Deere has been using open code under the #GPL w/o living up to the license. The Software Freedom Conservancy (@conservancy) is calling on it to comply — which would greatly enhance #farmers' #RightToRepair.
https://sfconservancy.org/blog/2023/mar/16/john-deere-gpl-violations/
"We…publicly call on John Deere to immediately resolve all of its outstanding GPL violations…by providing complete source code…that the GPL & other copyleft licenses require, to the farmers & others who are entitled to it."
We could quarantine them all incommunicado on an island with no technology newer than 1920, if you want a humane option.