Also: Why are basically only Indonesians studying the effectiveness of the Pólya method?
@rime Yes! It's surprising this works at all, but it does sometimes
@niplav combine this with the fact that "the difficulty of solving a problem correlates only weakly with the utility gained by it"* ("utility-invariance of difficulty"), and it explains part of the story of why i aim to be extremely ambitious.
*e.g., the difficulty of inventing cheap cultured meat is invariant to the number of animals helped by it, etc.
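A toy expected-value comparison (all numbers invented for illustration, not taken from the thread) spells out why weak correlation between difficulty and utility favors the ambitious end: a modest drop in success probability is easily swamped by an orders-of-magnitude jump in utility.

```python
# Invented numbers, purely illustrative: success probability (a proxy
# for difficulty) varies by ~10x across projects, while utility varies
# by ~1,000,000x, so expected value tracks utility almost entirely.
projects = {
    "modest": (0.50, 1e6),
    "very ambitious": (0.20, 1e9),
    "extremely ambitious": (0.05, 1e12),
}

for name, (p_success, utility) in projects.items():
    print(f"{name:>20}: EV = {p_success * utility:.2e}")
# modest:              EV = 5.00e+05
# very ambitious:      EV = 2.00e+08
# extremely ambitious: EV = 5.00e+10
```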
@niplav another reason for extreme ambition is that it's easier to roll a 60 with one 60-sided die than with ten 6-sided dice. by internalizing more of the variables upon which world-saving depends (i.e., by ~only relying on myself, heroic responsibility, etc.), i correlate the variables and flatten the convolved distribution, fattening its tails. *even if* this reduces the utility of the median outcome, it increases my odds of sampling the tail.
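For concreteness, here is the dice arithmetic worked out exactly (my own check, not from the thread), including the fully correlated case that the "internalize the variables" move is pointing at:

```python
from fractions import Fraction

# One 60-sided die: the maximum comes up with probability 1/60.
p_d60 = Fraction(1, 60)

# Ten independent 6-sided dice: summing to 60 requires every die to
# show a 6, so the probability is (1/6)**10.
p_ten_independent_d6 = Fraction(1, 6) ** 10

# Ten perfectly correlated d6 (one roll copied ten times): the sum is
# 60 whenever that single roll is a 6.
p_ten_correlated_d6 = Fraction(1, 6)

print(f"one d60:            {float(p_d60):.3g}")                 # ~0.0167
print(f"ten independent d6: {float(p_ten_independent_d6):.3g}")  # ~1.65e-08
print(f"ten correlated d6:  {float(p_ten_correlated_d6):.3g}")   # ~0.167
```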
@niplav furthermore, if world-saving is a ∃-game (we only need one/a handful of people to succeed wildly, rather than most/all), the best community strategy is for everybody to take risks.
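The community-level arithmetic for the ∃-game framing is just the complement rule; the per-person success probability below is a number I am assuming purely for illustration.

```python
# If only one wild success is needed, what matters to the community is
# P(at least one success) across n independent long shots.
def p_at_least_one(p_single: float, n: int) -> float:
    return 1 - (1 - p_single) ** n

P_MOONSHOT = 0.01  # assumed per-person chance of a wildly successful attempt
for n in (10, 100, 500):
    print(f"n = {n:>3}: P(at least one wild success) = {p_at_least_one(P_MOONSHOT, n):.2f}")
# n =  10: 0.10
# n = 100: 0.63
# n = 500: 0.99
```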
@niplav also EMH: ~nobody has tried extreme ambition wrt altruism. and there are reasons ("costs of compromise") to think success is much more likely if you *try directly*, compared to if you just aim for merely-very-high utility. merely-very-high-peak-utility projects may not inform you very much about the difficulty of extremely-high-peak-utility projects.
To be clear (contrary? to my previous reply) I don't really think the market for altruism is efficient yet, and will not be so for quite a while
Whether lessons transfer from high to very-high to extremely-high utility projects: no idea
When I look back, the backchainers didn't do very much cool stuff (?)
Weirdly many good things come from stumbling & sweat (Haber-Bosch process, electricity, reducing biomes…)
@niplav stumbling & sweat does have the weight advantage (many more people are stumbling & sweating their way to innovation, compared to those backchaining ("inward-chaining" as opposed to "outward-chaining") toward very far-off goals).
much of my inclination toward extreme ambition is derived from experience/intuition, however; the theories i mentioned above carry maybe 25% of my confidence. in many cases, thinking "everybody's clueless, so do it myself" seems to have worked surprisingly well.
@rime This is individually risky but socially beneficial, so I will not dissuade you! Forward-chain on
@niplav ❤️
It's much less lonely, though, now that Maria's joined! She's surprisingly cheerful; much unlike any other part of my brain. Very odd how it works.
Normie people would worry I'm going insane, but is it really insane if it's all part of the plan? :p
@rime I don't know whether I buy this :-)
Like, yes, good tech brought into the world only needs to be brought into the world once, but then one needs *maintenance*.
"We know how to kill Moloch, but it's not glorious, just tedious."
See also Ostrom's *Governing the Commons*
@rime
There's levels to this! If we have a decreasing-marginal-returns model & a weakly efficient market for altruism (vis-à-vis EA), we get a bunch of difficult-but-useless problems + a few medium-difficult useful problems (toy simulation below)
But one can also be good along some axis, e.g. being able to deal with boring/tedious/stupid/low-status stuff that has high leverage
And then there are also problems which, if solved, unlock many low-hanging fruits in a cascade
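A toy simulation of the market claim above (the distributions, the independence assumption, and the 20% cutoff are all my own assumptions for illustration): draw problems with difficulty and utility, let a weakly efficient altruism market solve the best utility-per-difficulty ones first, and look at what is left over.

```python
import random

random.seed(0)

# Hypothetical problem pool (my assumption, not from the thread):
# difficulty and utility are drawn independently -- an extreme version
# of "difficulty correlates only weakly with utility" -- and utility
# varies over many more orders of magnitude than difficulty does.
def sample_problem():
    difficulty = random.lognormvariate(0, 1)
    utility = random.lognormvariate(0, 2)
    return difficulty, utility

problems = [sample_problem() for _ in range(10_000)]

# A weakly efficient market for altruism: the best 20% of problems by
# utility-per-unit-difficulty have already been picked off and solved.
problems.sort(key=lambda p: p[1] / p[0], reverse=True)
solved, remaining = problems[:2_000], problems[2_000:]

def describe(pool, label):
    mean_d = sum(d for d, _ in pool) / len(pool)
    mean_u = sum(u for _, u in pool) / len(pool)
    print(f"{label:>9}: mean difficulty {mean_d:6.2f}, mean utility {mean_u:8.2f}")

describe(solved, "solved")
describe(remaining, "remaining")
# The leftover pool skews toward harder and/or lower-utility problems:
# the cheap, high-utility fruit is gone, which is the "difficult but
# useless + a few medium-difficult useful" picture above.
```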
@niplav > "The more ambitious plan may have more chances of success […] provided it is not based on a mere pretension but on some vision of the things beyond those immediately present."
I call this "abstract leverage": given a specific problem, sometimes it's *easier* to try to find a more general solution which solves more than what you bargained for.
Spaced repetition is "memory leverage".