@rime you were looking into lie detectors and their SOTA, right? Any good resources?

I've become convinced this might be really really important, thanks to you

@niplav 🕯️

To say that I've "looked into it" would be a big exaggeration, but I've looked into it.

The main reason I've been interested in it is: mass adoption of "veracity/credibility tech"¹ seems potentially extremely good for culture, and maybe pivotal wrt a lot of large-scale longterm stuff I care abt.

¹(idionym for stuff that helps us prove that we're being honest *when we actually are*)

@niplav There are many levels/dimensions of this with varying degrees of technological feasibility. I think most of the value is unlocked when the tech is (directly or indirectly) relevant to more or less ordinary social interactions, and can interfere with stuff like "deception arms races"/"iterative escalation of social deception"/"deceptive equilibria".

@niplav But below that, just making it harder to get away with obviously antisocial behaviour (like theft, or lying in order to tarnish somebody's reputation / get them fired, etc) seems tremendous. What if being a sociopath makes you unfit for being a politician?
Whew.

For most scenarios that I think are pivotal, the tech has to be scalable/cheap, highly accurate, hard-to-hack, and launched by a highly reputable company (preferably nonprofit, open-source—I'm allowed to dream).

@niplav
fMRI-machines are currently too inaccessible.

Making it hard-to-hack is hard.
Doing the processing on a server, and providing instant results via an app connected to the web, may make this more feasible.

If it's a hat, and it needs to be tailored to each individual via upfront calibration at a clinic, the clinic can record your signature and compare it with whatever their servers receive whenever you put your hat on later.
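The calibrate-then-verify idea above could be sketched roughly like this. Everything here is hypothetical (the feature vectors, the cosine-similarity check, and the 0.95 threshold are all placeholder assumptions, not anything a real system is known to use):

```python
import numpy as np

def cosine_similarity(a, b):
    """Scale-invariant similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

class SignatureVerifier:
    """Hypothetical server-side check: compare the signature recorded
    at upfront calibration against features from a later session, so
    the server can flag readings that don't match the enrolled user."""

    def __init__(self, enrolled_signature, threshold=0.95):
        self.enrolled = np.asarray(enrolled_signature, dtype=float)
        self.threshold = threshold  # placeholder acceptance cutoff

    def verify(self, session_features):
        sim = cosine_similarity(self.enrolled,
                                np.asarray(session_features, dtype=float))
        return sim >= self.threshold

# Toy usage with made-up 4-dimensional "signatures":
enrolled = np.array([0.2, 0.8, 0.5, 0.1])
v = SignatureVerifier(enrolled)
print(v.verify(enrolled * 1.01))                  # near-identical reading -> True
print(v.verify(np.array([0.9, 0.1, 0.0, 0.7])))  # mismatched reading -> False
```

Real biometric matching would of course need far richer features and a tuned error-rate tradeoff; the point is only that the comparison can live server-side, out of the wearer's reach.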
