@rime you were looking into lie detectors and their SOTA, right? Any good resources?

I've become convinced this might be really really important, thanks to you

@niplav 🕯️

To say that I've "looked into it" would be a big exaggeration, but I've looked into it.

The main reason I've been interested in it is: mass adoption of "veracity/credibility tech"¹ seems potentially extremely good for culture, and maybe pivotal wrt many large-scale long-term things I care about.

¹(idionym for stuff that helps us prove that we're being honest *when we actually are*)

@niplav There are many levels/dimensions of this with varying degrees of technological feasibility. I think most of the value is unlocked when the tech is (directly or indirectly) relevant to more or less ordinary social interactions, and can interfere with stuff like "deception arms races"/"iterative escalation of social deception"/"deceptive equilibria".

@niplav But below that, just making it harder to get away with obviously antisocial behaviour (like theft, or lying in order to tarnish somebody's reputation / get them fired, etc.) seems tremendous. What if being a sociopath makes you unfit for being a politician?
Whew.

For most scenarios that I think are pivotal, I think the tech has to be scalable/cheap, highly accurate, hard-to-hack, and launched by a highly reputable company (preferably nonprofit and open-source; I'm allowed to dream).

@niplav
fMRI machines are currently too inaccessible.

Making it hard-to-hack is hard.
Doing the processing on a server, and providing instant results via an app connected to the web, may make this more feasible.

If it's a hat, and it needs to be tailored to each individual via upfront calibration at a clinic, the clinic can record your signature and compare it with whatever their servers receive whenever you put your hat on later.
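The calibration-signature check described above could look something like the following toy sketch. All names here are hypothetical, and it assumes the "signature" is a fixed-length feature vector compared via cosine similarity:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def verify_wearer(calibration_signature, incoming_signal, threshold=0.9):
    """Server-side check: does the signal arriving from the hat match the
    signature recorded at the clinic? (Hypothetical protocol sketch.)"""
    return cosine_similarity(calibration_signature, incoming_signal) >= threshold
```

For example, `verify_wearer([1.0, 0.0, 1.0], [1.0, 0.1, 0.9])` accepts a near-identical signal, while a very different one is rejected. Real neural signatures would be noisier and higher-dimensional, so the threshold and similarity measure are stand-ins.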

@niplav Just a relevant link:
"An increase in selfish motivation for Pareto lies was associated with higher mean-level activity in both ventral and rostral mPFC. The former showed an increased pattern similarity to selfish lies, and the latter showed a decreased pattern similarity to altruistic lies. … Our findings demonstrated that hidden selfish motivation in white lies can be revealed by neural representation in the mPFC."

jneurosci.org/content/41/27/59
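The "pattern similarity" analysis the abstract refers to is, at its core, a correlation between voxel activity patterns. A toy sketch with entirely made-up numbers (real analyses use fMRI beta maps and cross-validation, not four-voxel vectors):

```python
def pearson_r(x, y):
    """Pearson correlation between two voxel activity patterns."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    ss_x = sum((a - mx) ** 2 for a in x)
    ss_y = sum((b - my) ** 2 for b in y)
    return cov / (ss_x * ss_y) ** 0.5

# Hypothetical mPFC patterns, one value per voxel:
pareto_lie     = [0.8, 0.1, 0.5, 0.9]
selfish_lie    = [0.9, 0.2, 0.4, 1.0]
altruistic_lie = [0.1, 0.9, 0.8, 0.2]

# The "hidden selfish motivation" signature would show up as higher
# similarity to the selfish-lie pattern than to the altruistic-lie one.
more_selfish = pearson_r(pareto_lie, selfish_lie) > pearson_r(pareto_lie, altruistic_lie)
# → True for these made-up patterns
```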

@niplav Perhaps the most impressive example of brain-reading tech in the vicinity of lie detection is semantic reconstruction, e.g.:

BCI Award 2023 #1: youtube.com/watch?v=Q1rctJd37a

BCI Award 2022 #2: twitter.com/guillefix/status/1

@niplav But semantic reconstruction requires >10h in an fMRI machine while calibrating a GPT-like predictor based on, e.g., your brain's responses to audiobooks, and I'm unsure how much the training hours can be optimized. I'm also not sure whether something like this generalizes to learning to neurally differentiate self-believed statements from self-unbelieved statements with sufficient accuracy. But just based on vibes, the impressiveness of the technology makes me think lie detection is more feasible.
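To make the "differentiate self-believed vs. self-unbelieved statements" idea concrete: one simple framing is a nearest-centroid classifier over neural feature vectors gathered during calibration. The data below is entirely made up; a real system would work on fMRI recordings and would need to demonstrate the accuracy this sketch just assumes:

```python
def centroid(vectors):
    """Element-wise mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def sq_dist(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def classify(features, believed_centroid, unbelieved_centroid):
    """Label a new neural pattern by its nearest class centroid."""
    if sq_dist(features, believed_centroid) <= sq_dist(features, unbelieved_centroid):
        return "believed"
    return "unbelieved"

# Hypothetical calibration data: patterns recorded while the subject
# read statements they do / don't believe.
believed_c = centroid([[0.9, 0.1], [0.8, 0.2]])
unbelieved_c = centroid([[0.1, 0.9], [0.2, 0.8]])
```

Calling `classify([0.7, 0.3], believed_c, unbelieved_c)` then labels a new pattern by which calibration cluster it sits closer to. The open question in the toot above is precisely whether real neural data separates into such clusters reliably enough.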


@niplav Oh, and I should add just in case: if you plan on writing about it, you're not "scooping" me or any such nonsense. The draft is tiny, two years old, and I don't plan on picking it up. But I really hope somebody makes a good post out of the whole ideabag. Please do!

@rime i can add my stuff to the draft and we co-publish?

But probably not before August on my side

@niplav I would *prefer* that you did it all and posted it under your own name :p

Though I don't net-disprefer having my name on the post.

I'm financially secure and pursue a fairly independent agenda (which I believe should remain independent for at least ~2 additional years?), so marginal reputation points aren't very useful to me.

From my perspective, the utility of the post is entirely its altruism, which I believe is substantial (though risky), so I hope you publish.

I can probably review/comment/take questions or something, though, if you wish.

Mastodon

a Schelling point for those who seek one