Let's say you have a programming language C, and a programming language C+.
Every valid C program is a valid C+ program, but not the other way around.
There are at least two ways this can happen:
1. C+ is "stricter" than C, but the strictness is optional: you can e.g. add type signatures that don't exist in C.
2. C+ is more "featureful" than C. E.g. it adds generics or whatever.
Is there a way to distinguish those two rigorously? Or is there no deep underlying difference?
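For concreteness, here's a minimal TypeScript-over-JavaScript sketch of the two cases (that's the pair that comes up below; generics stand in for "a feature the base language lacks", and the function names are just for illustration):

```typescript
// Case 1: optional strictness.
// This untyped function is valid JavaScript as-is, and valid TypeScript
// under the default (non-strict) compiler settings.
function add(a, b) {
  return a + b;
}

// The annotated version is the same program plus opt-in type signatures,
// which plain JavaScript has no syntax for.
function addTyped(a: number, b: number): number {
  return a + b;
}

// Case 2: a genuinely new construct.
// Generics (the type parameter T) have no counterpart in plain JavaScript.
function first<T>(xs: T[]): T | undefined {
  return xs[0];
}

console.log(addTyped(1, 2));    // 3
console.log(first(["x", "y"])); // "x"
```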
If you press the button, the top 10% of earners in the world (who hold 85% of the world's wealth) will be forced to forfeit their earnings and be given a fixed allowance equal to 5% of their prior income.
Their money will be redistributed to everyone else, proportionally according to need.
Do you press the button?
The most galaxybrained reasoning for the least galaxybrained conclusion: https://forum.effectivealtruism.org/posts/7MdLurJGhGmqRv25c/multiverse-wide-cooperation-in-a-nutshell
Uhh, I was thinking of C/C++ here, but the other example of JavaScript/TypeScript is probably enough to dissuade me from this axiom.
I operate by Crocker's rules[1].