I don't make threats online, not because I don't want to, but because I'd never be able to follow through on them

@niplav Don't let them know that. Let them think that. Gotta be intimidating. It's a bluff! XD

@Paradox Lawfulness demands I always keep promises and threats, especially the ones made publicly

@Paradox bluffing is far worse than revenge, because by not following through you reduce the ability of everyone ever to make contracts and commitments with teeth

@niplav Interesting perspective.
Yourself, certainly, but I don't think your bluffing gets rid of everybody's teeth.
Also, like lies in general, I think a bluff comes in handy in emergencies.

@Paradox if I bluff then all my exact copies are revealed to be also bluffing, and all decision procedures are more highly weighted as bluffing proportional to how similar they are to me

@niplav I don't know why you brought up clones of yourself.

@Paradox - This is tricky to explain, but I'll try anyway.
- We sometimes reason about the "type of person" that someone is, and use that to make judgments about that person across time. This makes sense if humans implement decision procedures that are *algorithms* which are (mostly) deterministic. E.g., if someone bluffs, then you update your belief about "what kind of person they are": their decision algorithm tends to bluff.

@Paradox - This is easiest to see in cases where you have the (open-source) copies of two algorithms: If, for some specific input, one algorithm bluffs, then you *know for a fact* that the other copy will also bluff on the same input, so you trust the other copy not at all.
- But this extends to imperfect copies:

@Paradox if one algorithm bluffs, and you can see that the other algorithm is the same except that it executes some unnecessary computation whose output is not used, you still would trust it way less. (In the case of humans, you might trust Sam's sibling slightly less because Sam betrayed you).
- So, for two decision algorithms a₁, a₂, you could then try to create a metric M about how similar those two decision algorithms are.

@Paradox (For example: If I bluff, I don't think that a version of myself who has yellow shoelaces instead of brown ones will *not* bluff, but a version of myself who has taken MDMA is different enough that they might decide not to bluff).
- I don't think such a metric exists yet, but I've spent a little bit of time thinking about how it could be constructed.
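(One very crude way such a metric M could be approximated is behaviorally: sample a bunch of inputs and count how often two decision procedures agree. A toy sketch, purely for illustration; the procedures and threshold here are made up, not from the thread:

```python
def behavioral_similarity(a1, a2, inputs):
    """Crude similarity metric M: the fraction of sampled inputs on
    which two decision procedures make the same decision."""
    return sum(a1(x) == a2(x) for x in inputs) / len(inputs)

def honest(stakes):
    # Always keeps its word, whatever the stakes.
    return "keep promise"

def bluffer(stakes):
    # Keeps its word on small stakes, bluffs once stakes get high.
    return "keep promise" if stakes < 5 else "bluff"

inputs = range(10)
print(behavioral_similarity(honest, bluffer, inputs))  # 0.5
print(behavioral_similarity(honest, honest, inputs))   # 1.0
```

This only compares input-output behavior, so it would miss the "unnecessary computation whose output is not used" case above; a real M would presumably have to look at the algorithms' internals.)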

@Paradox - Now, in the case where you are empirically but not logically omniscient¹ and have an acceptable M, you could then see someone bluff, compute their similarity to all the decision algorithms around you, and correspondingly update how much you trust each of them.
- You might ask why this would in expectation *damage* the overall trust around you:

@Paradox After all, couldn't it be that for every decision algorithm, there's a different decision algorithm that does the exact opposite?
- This is an empirical question, but I think it's not true that this symmetry exists. Instead I think most decision algorithms are pretty similar.
- It gets weirder:

@Paradox If you can describe not just the decision algorithms around you, but all possible decision algorithms (weighted by some prior of their likelihood of existing), someone bluffing would downweight your trust in *decision procedures in general*, and if many others also implement this "trust based on similarity to previously trustworthy algorithms" idea, then someone bluffing reduces trust between everyone, across all of reality.

@Paradox - This is obviously making many assumptions, but my intuition is that even if you relax the assumptions to sort-of-realistic levels, you still get effects that are much weaker but still present.
- ¹: You can see all the internals of everything, but you're not powerful enough to perfectly foresee what everything is going to do. Similar to how in programming one can see the source code, but generally can't predict the output of a specific program.
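(The trust-update rule being described could be sketched as: after observing a bluff, downweight your trust in every agent in proportion to its similarity to the bluffer. A hypothetical sketch; the agents, similarity scores, and penalty factor are all invented for illustration:

```python
def update_trust(trust, similarity_to_bluffer, penalty=0.5):
    """After observing some agent B bluff, downweight trust in every
    agent A in proportion to M(A, B): more similar -> bigger hit."""
    return {name: t * (1 - penalty * similarity_to_bluffer[name])
            for name, t in trust.items()}

trust = {"copy": 0.9, "sibling": 0.9, "stranger": 0.9}
# Assumed similarity M(A, bluffer) for each agent A:
similarity = {"copy": 1.0, "sibling": 0.6, "stranger": 0.1}
print(update_trust(trust, similarity))
# roughly: copy 0.45, sibling 0.63, stranger 0.855
```

An exact copy takes the full penalty, while a barely similar stranger is almost untouched, which is the "across all of reality" effect in miniature.)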

@niplav
Ok I think I understand most of that. I appreciate your effort in this and I like where your mind went with it.
First thing I'll say is that someone bluffing would affect my trust in them differently, depending on whether it was among my first impressions of them or if I'd known them fairly well. The former would affect me more than the latter, because in the latter I'd assume it was an anomaly, a special circumstance, or something like that. It would take repeat offenses by that point for it to set in as much as me having just met them like that, because it matters a lot how much info I already have on them and whether it conflicts with my model of their decision algorithm. In this way, it doesn't have to be bluffing: generalize to any action and ask yourself whether you expected that of them and whether you approve or disapprove.
As for the brother thing, different people are definitely going to see this differently. Personally I wouldn't automatically assume a strong association, unless I already knew they were close, hung out a lot, whatever. I know some others get slighted and put a curse on their whole family or some shit. All depends on your perspective on how and why certain people are going to be similar to each other. Some brothers are bosom buddies, others are very different. Of course, this can extend to any group of people and whether you perceived a relationship between them.

So yeah the assumptions you're making here are that 1) the person you're bluffing to probably doesn't know you well, which is reasonable on the internet, 2) that they are the kind of person who gets so mad at you that they hate anyone you're associated with (unfortunately a not uncommon occurrence, it seems).
Like I've seen people who make one big mistake and there are certain people that will forever think that everything they do is wrong and all their friends are bad, too. But I don't think that's normal.
Especially if it's a small bluff. Like I believe that people tell lies all the time in small ways just to avoid awkwardness, save face, and keep emotions smooth. Most of the time it doesn't matter because the things they're lying about are either quickly resolved or easily ignored. If you bluff about something significant and they catch you in your lie, and the person in question is the aforementioned type of emotional individual, then yes.
