Humans usually feel no obligation to respond flexibly to specific situations, in a way that demonstrates agency, unless doing so serves their interests.

When a human moves into this mode, I find communication with them has little bearing on material reality -- it carries only social/relational information.

You can often, by emotional force, exact one-shot work from people who are Not Your Friend and therefore behaving incoherently, but I normally find this kind of thing tiresome and not worth it.

You can also sometimes Harm these ppl, usually emotionally, if you can inject a problem for them into the situation which requires agency. If they don't shift into that agential mode fast enough, you can produce and exploit errors.

I tend to dislike systems without humans bc I'm (imo) an unusually reliable/upstanding person -- and systems without humans lump me in with other ppl.

But this reveals an implicit assumption that humans will act more prosocially than systematic processes... which often isn't true.


Humans in practice can act much worse than automated systems on account of extra illegibility and agency. For example, small landlords like the one that stole my friend's bike are worse than corporate landlords.

We might view many automated systems as a way of allowing us to more easily cut contact with those we expect will behave even worse.
