@rime you were looking into lie detectors and their SOTA, right? Any good resources?
I've become convinced this might be really really important, thanks to you
@niplav 🕯️
To say that I've "looked into it" would be a big exaggeration, but I've looked into it.
The main reason I've been interested in it is: mass adoption of "veracity/credibility tech"¹ seems potentially extremely good for culture, and maybe pivotal wrt much of the large-scale longterm stuff I care abt.
¹(idionym for stuff that helps us prove that we're being honest *when we actually are*)
@niplav There are many levels/dimensions of this with varying degrees of technological feasibility. I think most of the value is unlocked when the tech is (directly or indirectly) relevant to more or less ordinary social interactions, and can interfere with stuff like "deception arms races"/"iterative escalation of social deception"/"deceptive equilibria".
@niplav But below that, just making it harder to get away with obviously antisocial behaviour (like theft, or lying in order to tarnish somebody's reputation / get them fired) seems tremendous. What if being a sociopath made you unfit to be a politician?
Whew.
For most scenarios that I think are pivotal, the tech has to be scalable/cheap, highly accurate, hard-to-hack, and launched by a highly reputable company (preferably nonprofit and open-source; I'm allowed to dream).
@niplav
fMRI machines are currently too inaccessible.
Making it hard-to-hack is hard.
Doing the processing on a server, and providing instant results via a web-connected app, may make this more feasible.
If it's a hat that needs to be tailored to each individual via upfront calibration at a clinic, the clinic can record your signature and compare it with whatever their servers receive whenever you put the hat on later.
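To make the calibrate-then-verify idea concrete, here's a minimal sketch of what the server side might do. Everything here is invented for illustration (the `CalibrationServer` class, cosine similarity as the comparison metric, the 0.9 threshold, the per-device secret key); a real system would need far more robust biometrics, but the two-step shape — authenticate the device, then match the signal against the enrolled signature — is the point.

```python
import hashlib
import hmac
import json
import math


def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)


class CalibrationServer:
    """Hypothetical server-side check: does this session's neural
    'signature' still look like the one recorded at the clinic?"""

    def __init__(self, threshold=0.9):  # threshold is an arbitrary choice
        self.profiles = {}  # user_id -> (device_secret, calibration_vector)
        self.threshold = threshold

    def enroll(self, user_id, device_secret, calibration_vector):
        # Recorded once, in person, during upfront calibration.
        self.profiles[user_id] = (device_secret, calibration_vector)

    def verify_session(self, user_id, session_vector, mac):
        device_secret, calibration_vector = self.profiles[user_id]
        # 1. Check the payload actually came from the enrolled device
        #    (HMAC over the serialized reading, keyed by the device secret).
        expected = hmac.new(device_secret,
                            json.dumps(session_vector).encode(),
                            hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected, mac):
            return False
        # 2. Check the signal still resembles the enrolled person.
        return cosine(calibration_vector, session_vector) >= self.threshold
```

A session whose reading drifts too far from the calibration vector, or whose MAC doesn't check out, is rejected; this is what lets the clinic notice if somebody else puts on your hat (or tampers with the stream in transit).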
@niplav Oh, I have an abandoned LW draft here: https://www.lesswrong.com/editPost?postId=gt9A3bNxnZRGewMZF&key=6857a642401e8c373cd799e1959f9e
@niplav just a quote from some link:
"An increase in selfish motivation for Pareto lies was associated with higher mean-level activity in both ventral and rostral mPFC. The former showed an increased pattern similarity to selfish lies, and the latter showed a decreased pattern similarity to altruistic lies. … Our findings demonstrated that hidden selfish motivation in white lies can be revealed by neural representation in the mPFC."
@niplav Perhaps the most impressive example of brain-reading tech in the vicinity of lie-detection is semantic reconstruction, eg:
BCI Award 2023 #1: https://www.youtube.com/watch?v=Q1rctJd37a8&list=PL_JwSzOwE-dS0u9NNhv8__XktdDZaq_ML
BCI Award 2022 #2: https://twitter.com/guillefix/status/1679178300508504064
@niplav I would *prefer* you did all of it and posted under your own name :p
Though I don't net-disprefer having my name on the post.
I'm financially secure and pursuing a fairly independent agenda (which I believe I can keep independent for at least ~2 more years?), so marginal reputation points aren't very useful to me.
From my perspective, the utility of the post is entirely its altruism, which I believe is substantial (though risky), so I hope you publish.
I can probably review/comment/take questions or something, though, if you wish.
@rime i can add my stuff to the draft and we co-publish?
But probably not before August on my side