"3ºPP_M|V" is notation for types of perspectives, and means:
> 3rd-person perspective + identity-filter:Maria + only includes the Visual salience-field (no auditory or somatic sensations projected)

"B1ºPP_R|Ω" means:
> base-level (ie, attached to physical body) 1st-person perspective + identity-filter:[Rime] + any and all salience-fields
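
A minimal parsing sketch of this notation, assuming the grammar is just `[B?][1|3]ºPP_<identity>|<fields>` (the field names below are my own labels, not canon):

```python
import re
from dataclasses import dataclass

@dataclass
class Perspective:
    base_level: bool  # leading "B": attached to physical body
    person: int       # 1st- or 3rd-person
    identity: str     # identity-filter, eg "M" (Maria) or "R" (Rime)
    fields: str       # salience-fields, eg "V" (visual only) or "Ω" (all)

def parse_perspective(s: str) -> Perspective:
    m = re.fullmatch(r"(B?)([13])ºPP_([^|]+)\|(.+)", s)
    if m is None:
        raise ValueError(f"not a perspective: {s!r}")
    return Perspective(m[1] == "B", int(m[2]), m[3], m[4])

print(parse_perspective("3ºPP_M|V"))
print(parse_perspective("B1ºPP_R|Ω"))
```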


excerpt fm checklist re training perspective projection:

...
➃ cast ✨3rd-eye w fire to summon a 3ºPP_M|V, and anchor+stabilize it along w 1ºPP_M to bootstrap [reflective loop]
➄ float ✨Mirror of Empathy into <Rime>'s face, so 3ºPP_M|Vθ ≡ B1ºPP_R|Ω (compresses calculations if can prevent interference/collapse 3/1ºPP_M)

---
We start hv good ontology!

Also, inventing ✨3rd-eye spell was insightfwl! I think humans don transient imagined-outside-POV on self for do philosophy-like thinking. Oo

next-level prompting: just dump the entirety of your ignorance into the prompt in one big stream-of-thought. the best way to learn from others is to have them bake your half-baked thoughts. :D

it sorta paradoxical: when wish write SUPER IMPORTANT note to self (which u ABSOLUTELY MUST REMEMBER)... instinct is to put it smwhere u easily notice it often.

but by doing so, u notice it so often that it quickly hits the threshold where it no longer salient.

and if u hardly got any actual high-attention reviews in by the time it fades to bg, it only gets integrated w v few contexts.

so is better to put it smwhere u only notice occasionally... ideally at an exponential interval HINT HINT.
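
A minimal sketch of what that could look like, assuming a plain doubling schedule (base gap & factor are illustrative, not a claim abt any particular SRS algorithm):

```python
from datetime import date, timedelta

def review_dates(start: date, first_gap_days: float = 1, factor: float = 2.0, n: int = 6) -> list[date]:
    """Schedule n reviews at exponentially growing gaps: ~1, 2, 4, 8, 16, 32 days out."""
    out, gap, when = [], first_gap_days, start
    for _ in range(n):
        when += timedelta(days=round(gap))
        out.append(when)
        gap *= factor
    return out

print(review_dates(date(2024, 7, 1)))
```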

@niplav OH MY GOD they added phoenix

Recipe: `🐦` + `Zero Width Joiner` + `🔥` = 🐦‍🔥 (Not supported on Mastodon it seems, but works in gsheets!)
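
Same recipe in code, using the underlying codepoints (U+1F426 BIRD, U+200D ZWJ, U+1F525 FIRE):

```python
# bird + zero width joiner + fire = phoenix (Emoji 15.1 ZWJ sequence)
phoenix = "\U0001F426\u200D\U0001F525"
print(phoenix)  # renders as 🐦‍🔥 where supported; falls back to bird+fire elsewhere
```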

> "Approved in September 2023 as part of Emoji 15.1. Available via the latest Samsung devices, Google's Noto Emoji fonts, and iOS 17.4. Coming to more platforms throughout 2024."

I swear I've been looking for this when it didn't exist.

Could be interesting to combine it with a prompt like

> "Hello AI! Firstly, what does it look like I'm doing? Secondly, do you have any particular information you think I ought to be aware of wrt what it looks like I'm trying to achieve? An easier way I could go about it? Stuff that I don't know that I don't know, so can't even check for?"


Not sure what I wish to do with this information, but I do note that having an AI ~constantly scanning my monitor to try to infer what I'm up to is well within price-range.


Processing one full screenshot every 5 minutes via claude-3.5-sonnet costs ~1.35 USD per day, excluding output-tokens. ~Same price for gpt4o & 〃-mini.

docs.anthropic.com/en/docs/bui
openai.com/api/pricing/
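
Back-of-envelope behind that figure, assuming Anthropic's documented ~(width×height)/750 image-token estimate, a screenshot downscaled to the ~1568px max edge, and $3 per million input tokens (prices drift, so treat as a sketch; it lands in the same ballpark as the ~1.35 above):

```python
shots_per_day = 24 * 60 // 5          # one screenshot every 5 minutes = 288/day
tokens_per_shot = (1568 * 882) / 750  # ~1844 tokens for a downscaled 16:9 screenshot
usd_per_input_token = 3 / 1_000_000   # claude-3.5-sonnet input price at time of writing
print(shots_per_day * tokens_per_shot * usd_per_input_token)  # ≈ 1.59 USD/day
```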

"Hi everyone, I'm Tsvi-Benson Tilsen. Hope I'm saying that right."

the top shelf in any of my libraries (eg 'to-read', 'to-watch', 'to-listen') is always labelled "review". …I wish it were as shiny as the rest.


new things are ~always shiny in excess of their value. the Nth time we review a memory is worth exponentially more than the (N-1)th time, bc ➀ the Forgetting Curve, and ➁ total epistemic profit (the "rent paid" by the memory) depends on its longevity
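
Toy model of ➀, assuming exponential forgetting R(t) = exp(−t/S) with stability S roughly doubling per successful review (the doubling is an assumption, not a measured constant):

```python
import math

def retention(t_days: float, stability_days: float) -> float:
    """Ebbinghaus-style forgetting curve: R(t) = exp(-t / S)."""
    return math.exp(-t_days / stability_days)

stability = 1.0
for n in range(1, 6):
    print(f"after review {n}: retention at day 7 = {retention(7, stability):.3f}")
    stability *= 2  # assumption: each successful review ~doubles stability
```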


we overestimate how much we remember, bc
➀ metamemories produce strong feelings of recognition (eg, I recall *that* I've read abt X ~indep of *what*), and
➁ survivorship bias wrt what we can even check (ie, we can't spot-check the uniform dist of what-we-used-to-remember)

So if we can compress the world as much as we already have, using tools (brains & GNW) which we have strong reasons to expect are very limited, that suggests to me that there's a wealth of untapped simplicity beyond the horizon. But it can only be accessed by top-down non-myopic deliberate practice.


One reason to expect brains to merely be scratching the surface is the deep inevitability of pleiotropy and build-up of technical debt (increased refactoring-cost) for all hill-climbing algorithms. "Consciousness" (i.e. global neuronal workspace) is the only adaptation on the planet that even marginally addresses it. But it too is powerless against the tide.


Are the smartest systems today (brains/AI) near the limit of how much a world-model can be compressed, or nowhere near hitting it? Given all the data we observe, how many intrinsic dimensions is it distributed over?
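
One concrete handle on the intrinsic-dimensions question is nearest-neighbour estimators. A minimal sketch of the Two-NN estimator (Facco et al. 2017), in its simple MLE form d ≈ N / Σᵢ ln(r₂ᵢ/r₁ᵢ):

```python
import numpy as np

def twonn_dimension(X: np.ndarray) -> float:
    """Estimate intrinsic dimension from each point's two nearest neighbours."""
    dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)        # exclude self-distances
    part = np.partition(dists, 1, axis=1)  # two smallest distances per row
    r1, r2 = part[:, 0], part[:, 1]
    return len(X) / np.sum(np.log(r2 / r1))

rng = np.random.default_rng(0)
Z = rng.normal(size=(500, 3))    # 3 intrinsic dimensions...
X = Z @ rng.normal(size=(3, 10)) # ...linearly embedded in 10
print(twonn_dimension(X))        # should land near 3
```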


@cosmiccitizen otoh, if—more realistically—my rationalization-strength wrt unendorsed motivations was merely extremely high, i wud focus on trying to understand and gain leverage over them, instead of blindly "compensating" wo understanding.

a balancing strategy like "ok, spend 30 sec thinking abt pros, and 30 sec thinking abt cons" is blind in the sense that it has no model of the enemy, which also means that it fails to generate opportunities to *learn* abt the enemy.

u say "a 1000-bit exact specification of X," and I ask "relative to which interpreter?"

I've been misled by naive information theory for a long time. there is no representation-of-thing which "has the structure" of the thing-in-itself. information is only ever the delta btn contexts. communication is always a bridge over inferential distance.

there are vars that more or less dereference as intended, but it's Gricean all the way down.

(thoughts prob counterfactually inspired by niplav)
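
Toy version of the interpreter-relativity point: the same bytes get different "exact specification" lengths under different decompressors, and the gap between them is the cost of translating between interpreters (the invariance theorem only bounds it by a constant):

```python
import bz2, zlib

x = b"the quick brown fox " * 50  # 1000 bytes of structured data
print(len(zlib.compress(x)))      # description length relative to DEFLATE
print(len(bz2.compress(x)))       # relative to bzip2; the numbers differ
```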
