"Research published in 2007[^78] has shown that cigarette smokers suffering damage to the insular cortex, from a stroke for instance, have their addiction to cigarettes practically eliminated. These individuals were found to be up to 136 times more likely to undergo a disruption of smoking addiction than smokers with damage in other areas."
"In 2021, two meta-analyses on preference measurement in experimental economics find strong evidence for greater male variability for cooperation (variance ratio: 1.30, 95% CI [1.22, 1.38]), time preferences (1.15 [1.08, 1.22]), risk preferences (1.25 [1.13, 1.37]), dictator game offers (1.18 [1.12, 1.25]) and transfers in the trust game (1.28 [1.18, 1.38])."
Let's say you have a programming language C, and a programming language C+.
Every valid C program is a valid C+ program, but not the other way around.
There are at least two ways this can happen:
1. C+ is "stricter" than C, but the strictness is optional: you can e.g. add type signatures that don't exist in C.
2. C+ is more "featureful" than C. E.g. it adds generics or whatever.
Is there a way to distinguish those two rigorously? Or is there no underlying deep difference?
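To make the two cases concrete, here is a minimal sketch (my own illustration, using TypeScript over JavaScript purely as a stand-in for C+ over C). In case 1 the extra syntax is erasable: deleting it leaves a valid base-language program with the same behavior. In case 2 the new construct has no such erasure.

```typescript
// Case 1: an "optional strictness" extension is erasable.
// Deleting every type annotation leaves a valid base-language program
// with the same runtime behavior.
function add(a: number, b: number): number {
  return a + b;
}
// Erased form -- still valid JavaScript, identical semantics:
// function add(a, b) { return a + b; }

// Case 2: a "more featureful" extension is not erasable.
// A TypeScript enum is not an annotation you can strip away; the
// compiler has to generate new runtime code for it.
enum Color {
  Red,
  Green,
  Blue,
}

console.log(add(2, 3)); // 5
console.log(Color.Red); // 0 -- this value only exists because of the new feature
```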
If you press the button, the top 10% of earners in the world (who hold 85% of the world's wealth) will be forced to forfeit their earnings and be given a fixed allowance equal to 5% of their prior income.
Their money will be distributed to everyone else, proportionally according to need.
Do you press the button?
I operate by Crocker's rules[1].