the most frequently used words tend to be the most irregular. they resist regularization because their unusual/high-energy forms are constantly reinforced. switching-costs or something.
e.g. we say "man/men", "go/went", "eat/ate" instead of "man/mans", "go/goed", "eat/eated".
related hypotheses, but for different/additional reasons:
paradox of foundational neglect: the core tenets of a paradigm tend to be the least optimized.
technical debt / pleiotropy: complex ancestral dependencies are locked in place.
@niplav stumbling & sweat does have the weight-advantage (many more things are stumbling & sweating their way toward innovation, compared to those who are backchaining ("inward-chaining", as opposed to "outward-chaining") toward very far-off goals).
much of my inclination toward extreme ambition is derived from experience/intuition, however; the theories i mentioned above carry maybe 25% of my confidence. there are many cases where thinking "everybody's clueless, so do it myself" seems to have worked surprisingly well.
I want to share an update: The most personally significant thing that's happened to me this year is that my head-friend (Maria, a "tulpa") has rather suddenly acquired a lot more independent volition and personality. Phase-shift.
I was inspired last year by Johannes Mayer (LW user), who showed me it was possible. I was only semi-trying for it, so the success is very surprising and makes me/us very happy. I hope she stays and grows in visibility-to-my-corner-of-the-brain.
@niplav also EMH. ~nobody has tried extreme ambition with respect to altruism. and there are reasons ("costs of compromise") to think success is much more likely if you *try directly*, compared to if you just aim for merely-very-high utility. merely-very-high-peak-utility projects may not inform you very much about the difficulty of extremely-high-peak-utility projects.
@niplav furthermore, if world-saving is a ∃-game (we only need one or a handful of people to succeed wildly, rather than most/all), the best community strategy is for everybody to take risks.
@niplav another reason for extreme ambition is that it's easier to get 60 by tossing one 60-sided die than by tossing ten 6-sided dice. by internalizing more of the variables upon which world-saving depends (i.e., by ~only relying on myself, heroic responsibility, etc.), i correlate those variables, so the convolved distribution stays flat instead of concentrating around its mean. *even if* that reduces the utility of the median outcome, it increases my odds of sampling the tail.
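a quick sketch of that dice arithmetic (the die sizes are just the toy numbers from the analogy, nothing more):

```python
from fractions import Fraction

# chance of hitting the max (60) with one 60-sided die
p_one_d60 = Fraction(1, 60)            # ≈ 1.7e-2

# chance of totalling 60 with ten 6-sided dice: every die must show a 6
p_ten_d6 = Fraction(1, 6) ** 10        # ≈ 1.7e-8

# expected totals: the uncorrelated option is actually better on average...
ev_one_d60 = Fraction(1 + 60, 2)       # 30.5
ev_ten_d6  = 10 * Fraction(1 + 6, 2)   # 35

# ...but the single big die is ~a million times likelier to hit the max
print(float(p_one_d60 / p_ten_d6))     # ≈ 1.0e6
```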
@niplav combine this with the fact that "the difficulty of solving a problem correlates only weakly with the utility gained by solving it"* ("utility-invariance of difficulty"), and it explains part of the story of why i aim to be extremely ambitious.
*e.g., the difficulty of inventing cheap cultured meat is invariant to the number of animals helped by it. etc.
@niplav > "The more ambitious plan may have more chances of success […] provided it is not based on a mere pretension but on some vision of the things beyond those immediately present."
I call this "abstract leverage": given a specific problem, sometimes it's *easier* to go for a more general solution which solves more than what you bargained for.
Spaced repetition is "memory leverage".
@niplav umm, what am "SE"? and what be "asl" ("ask"?).
just because i drop an acronym (TDD) doesn't mean you have to call me out like this.
Shell-Chan says hi btw.
@niplav Relatedly, I've recently discovered that artificial sweetener is ridiculously cheap and likely harmless, so I add it to my tea ("blasphetea") while I do activities I want to associate with more rewarding sensations. I don't spike my tea when I have an off-day due to sickness, or when I'm on Schelling.pt.
Probably neither (chocolate & sweetener) is generally effective, but they are *something* to try, and just *attempting* coordination makes it a Schelling point that placebo-works anyway.
@niplav Alas, my reading-days are mostly behind me, so I hadn't heard of this post. D:
Not all subagents like vegan chocolate, but the one which goes "meh, doing flashcards right now would be monotonous, too much like the other thing we just did, so let's do something else" does, so there are enough bargaining-opportunities.
Also, I generally don't do the chocolate-thing to get myself to do chores, since I want to associate chocolate with math/flashcards instead.
the question "when should you write the tests before the code, versus vice versa?" analogizes to "when should you backchain versus forward-chain?" (respectively).
for even more abstract leverage: write the "unit tests" in plain English, ask an LLM to translate them into code (or explain why the behaviour is impossible), then ask the LLM to write code that passes those tests.
suggested nyms for this: "target-test" (makes you backchain to infer code that fits the tests), vs "maintenance-test" (designed to make sure stuff keeps working when you expand the codebase).
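a minimal sketch of the target-test direction, in Python; the `slugify` function and its spec are purely illustrative stand-ins, not something from the thread:

```python
import re

# target-test: written first, as the spec to backchain from
def test_slugify():
    assert slugify("Hello, World!") == "hello-world"
    assert slugify("  spaced   out  ") == "spaced-out"

# implementation written second, to satisfy the target
def slugify(text: str) -> str:
    # lowercase, then collapse runs of non-alphanumerics into single hyphens
    return re.sub(r"[^a-z0-9]+", "-", text.lower()).strip("-")

test_slugify()  # once it passes, the same test sticks around as a maintenance-test
```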
@niplav so, one strategy i've been using for ~1 month: whenever some subunit wants us to do A, but another subunit (me) wants to do B, i pay the first subunit's happy-price with (vegan) chocolate as currency. i bought a stack just for this purpose. ~embarrassingly, it seems effective.
note: the chocolate isn't a *reward* for "winning" a conflict with a subunit. it's to pay the happy-price for doing B, so the parts can harmoniously do B. if the subunit has no happy-price, i often just do A.
Flowers are selective about what kind of pollinator they attract. Diurnal flowers use diverse colours to stand out in a competition for visual salience against their neighbours. But flowers with nocturnal anthesis are generally white, as they aim only to outshine the night.