
@rime i get circles are triangles now! Half circles are overweight triangles

niplav boosted

Pleased to announce the newest Mastodon feature on woof.group: antitemporal quote toots. Simply write a new toot and click the retrocausal boost 🔃 button below the text field, then select any existing toot from your feed. That existing toot will be altered to embed your new toot, as if it had been quoting you all along.

We're trusting our users to be polite when anti-quoting strangers' toots, so please, be kind! ❤️🙏✨

niplav boosted

I often observe that I don't care as much about participating in topics that were the battles of previous internet generations—e.g. IQ & heritability fights, eugenics fights, open borders fights, feminism/harassment fights, privacy fights &c. But surely the second-to-last internet generation fought different battles—what were they? I can think of the encryption wars (Bernstein vs. the US government), abortion (?), and of course atheism & religion. Which others?

The solution to sexual harassment in a community is that all the men become romantically and sexually formidable outside of the community. Every other solution results in too much repression, or in clumsy advances, or in a suboptimal number of romantic and sexual relationships.

@agentydragon someone needs to review the entire evidence here

In general, having Debate not be robust enough to ~always work (see also Obfuscated Arguments) is a bad sign

@Paradox - This is obviously making many assumptions, but my intuition is that even if you relax the assumptions to sort-of-realistic levels, you still get effects that are much weaker but still present.
- ¹: You can see all the internals of everything, but you're not powerful enough to perfectly foresee what everything is going to do. Similar to how in programming one can see the source code, but generally can't predict the output of a specific program.
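
A tiny illustration of ¹, as a sketch in Python: the source is fully visible, yet predicting the output by inspection alone is hard.

```python
# Fully inspectable source ("empirical omniscience"), yet predicting the
# output without effectively running it is hard (no logical omniscience).
def collatz_steps(n: int) -> int:
    """Count Collatz steps until n reaches 1."""
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps

print(collatz_steps(27))  # 111 -- try predicting that by inspection alone
```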

@Paradox If you can describe not just the decision algorithms around you, but all possible decision algorithms (weighted by some prior over their likelihood of existing), someone bluffing would downweight your trust in *decision procedures in general*; and if many others also implement this "trust based on similarity to previously trustworthy algorithms" idea, then someone bluffing reduces trust between everyone, across all of reality.
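
A toy sketch of that population-level effect, with made-up names: one observed bluff downweights trust in every algorithm in proportion to its similarity to the bluffer, and the prior-weighted aggregate falls accordingly.

```python
# Toy sketch (all names made up): one observed bluff lowers trust in
# *every* algorithm, in proportion to its similarity to the bluffer.
def update_all(trust: dict, similarity_to_bluffer: dict) -> dict:
    return {a: t * (1 - similarity_to_bluffer[a]) for a, t in trust.items()}

# Aggregate trust in "decision procedures in general", weighted by a
# prior over which algorithms are likely to exist.
def aggregate_trust(trust: dict, prior: dict) -> float:
    return sum(prior[a] * trust[a] for a in trust)
```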

@Paradox After all, couldn't it be that for every decision algorithm, there's a different decision algorithm that does the exact opposite?
- This is an empirical question, but I think it's not true that this symmetry exists. Instead I think most decision algorithms are pretty similar.
- It gets weirder:

@Paradox - Now, in the case where you're empirically but not logically omniscient¹ and have an acceptable M, you could then see someone bluff, compute their similarity to all the decision algorithms around you, and correspondingly update to trust them more (or less); a toy sketch follows below.
- You might ask why this would in expectation *damage* the overall trust around you:
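
A minimal sketch of that update, assuming similarity scores in [0, 1] (e.g. from some metric M) and a hypothetical penalty factor:

```python
# Hypothetical update rule: after seeing a bluffer bluff, scale down trust
# in each nearby algorithm in proportion to its similarity to the bluffer.
def update_trust(trust: dict, similarity: dict, penalty: float = 0.5) -> dict:
    """trust and similarity are dicts over algorithm ids; similarity in [0, 1]."""
    return {a: t * (1 - penalty * similarity[a]) for a, t in trust.items()}
```

An algorithm maximally similar to the bluffer loses half its trust here; a totally dissimilar one loses none.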

@Paradox (For example: If I bluff, I don't think that a version of myself who has yellow shoelaces instead of brown ones will *not* bluff, but a version of myself who has taken MDMA is different enough that they might decide not to bluff).
- I don't think such a metric exists yet, though I've spent a little bit of time thinking about how it could be constructed.

@Paradox if one algorithm bluffs, and you can see that the other algorithm is the same except that it executes some unnecessary computation whose output is not used, you would still trust it way less. (In the case of humans, you might trust Sam's sibling slightly less because Sam betrayed you).
- So, for two decision algorithms a₁, a₂, you could then try to create a metric M of how similar those two decision algorithms are.
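
As a toy stand-in (comparing behaviour rather than internals, which is the genuinely hard part), M could be the fraction of sampled inputs on which the two algorithms choose the same action:

```python
import random

# Toy behavioural metric: fraction of sampled inputs on which a1 and a2
# agree. A real M would have to compare internals, not just behaviour.
def M(a1, a2, inputs: list, samples: int = 1000) -> float:
    draws = random.choices(inputs, k=samples)
    return sum(a1(x) == a2(x) for x in draws) / samples
```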

@Paradox - This is easiest to see in cases where you have the (open-source) copies of two algorithms: if one algorithm bluffs on some specific input, then you *know for a fact* that the other copy will also bluff on that same input, so you trust the other copy not at all (sketched below).
- But this extends to imperfect copies:
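
A minimal sketch of both cases, with made-up algorithms: a perfect open-source copy is pinned down exactly by one observation, and an imperfect copy (dead computation added) behaves identically anyway.

```python
# Perfect open-source copy: deterministic, so one observation of one copy
# pins down the other copy on that input *for a fact*.
def alice(offer: int) -> str:
    return "bluff" if offer < 10 else "honest"

bob = alice  # an exact copy

assert bob(5) == alice(5) == "bluff"  # alice bluffed, so bob would too

# Imperfect copy: identical except for an unused computation (cf. the
# sibling example above) -- behaviourally the same, so trust drops too.
def carol(offer: int) -> str:
    _ = sum(range(100))  # dead computation, output never used
    return "bluff" if offer < 10 else "honest"

assert carol(5) == alice(5)
```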

@Paradox - This is tricky to explain, but I'll try anyway.
- We sometimes reason about the "type of person" that someone is, and use that to make judgments about that person across time. This makes sense if humans implement decision procedures that are *algorithms* which are (mostly) deterministic. E.g., if someone bluffs, then you update your belief about "what kind of person they are": their decision algorithm tends to bluff.
