@niplav Bard had the same block last i chked, but that was long ago.
idk if [Gemini Ultra](https://gemini.google.com/advanced) works but it's on my todo-list to try.
@niplav a note from July 2022, when i started taking seriously the idea of "aiming my thoughts at my own head". i called it "cleistogamy" (aka "self-fertilization", inspired by Holly re evobio again). plants can explore new territory faster if they don't hv to worry abt genetic compatibility w the center. ur OS is gonna be that much more efficient if u don't care abt cross-compatibility w existing paradigms (or smth).
it's got limited mathematical validity, but i love the metaphor.
whenever i think of "evidence" (aka likelihood-ratio) or "implication", i first imagine a filled-out possibility-space. now, if "A implies/evidentiates B", then the possibility-juice in A squishes into the A∩B-region. visualize the squishification!
iow, if A is evidence for B, then "P(B|A)>P(B|U)", where "U" is the universal set (so P(B|U) is just the prior P(B)).
iow, if A is evidence for B, then B is more likely when u *know* u are somewhere in region A, compared to how likely B is when all u know is that u are somewhere in U.
imo the hack to grokking Bayes is to focus on the likelihood-ratio. that's the only initially-counter-intuitive part.
and the likelihood-ratio is exactly the quantified version of material implication.
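to mk the squishification concrete, here's a minimal sketch w made-up numbers: a finite possibility-space of 100 equally-likely worlds, arbitrary regions A and B, and plain counting to check that P(B|A) > P(B|U) exactly when the likelihood-ratio P(A|B)/P(A|¬B) is above 1.

```python
# toy numbers, not from the post: 100 equally-likely worlds.
U = set(range(100))
A = set(range(0, 30))     # 30 worlds where A holds (arbitrary choice)
B = set(range(10, 60))    # 50 worlds where B holds (arbitrary choice)

def p(event, given=U):
    """P(event | given), by counting equally-likely worlds."""
    return len(event & given) / len(given)

prior     = p(B)            # P(B|U): how likely B is anywhere in U
posterior = p(B, given=A)   # P(B|A): B once u *know* u are somewhere in A
likelihood_ratio = p(A, given=B) / p(A, given=U - B)   # P(A|B) / P(A|not-B)

print(f"P(B|U) = {prior:.2f}")              # 0.50
print(f"P(B|A) = {posterior:.2f}")          # 0.67 -> the juice squished into A∩B
print(f"LR     = {likelihood_ratio:.2f}")   # 2.00 -> above 1, so A is evidence for B
# P(B|A) > P(B|U) holds exactly when the likelihood-ratio is above 1:
# the quantified version of "A implies/evidentiates B".
```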
one of my favourite visualizations of an idea-ecosystem. i shud stare at it more often.
i wonder how it looks if u let producers charge individual prices based on perfect information abt what consumers wud be willing to pay (ie "price-gouging" on a per-consumer basis)? does the system learn faster? do consumers lose? what if all welfare-havers are both consumers and producers?
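a toy sketch of the static half of this question (made-up numbers; one producer w constant marginal cost; per-consumer pricing here = what economists call perfect price discrimination; it says nothing abt whether the system learns faster):

```python
# made-up numbers: one producer with constant marginal cost, consumers
# whose willingness-to-pay (WTP) the producer knows perfectly.
cost = 4.0
wtp  = [2, 3, 5, 6, 7, 9, 11, 14]            # hypothetical consumers

def uniform_price_outcome(price):
    buyers = [w for w in wtp if w >= price]
    profit = (price - cost) * len(buyers)
    consumer_surplus = sum(w - price for w in buyers)
    return profit, consumer_surplus

# best single price for the producer (the optimum is always at some WTP)
best_price = max((w for w in wtp if w >= cost),
                 key=lambda price: uniform_price_outcome(price)[0])
u_profit, u_cs = uniform_price_outcome(best_price)

# perfect per-consumer pricing: everyone with WTP >= cost is served, at their WTP
d_profit = sum(w - cost for w in wtp if w >= cost)
d_cs = 0.0                                   # producer captures all the surplus

print(f"uniform price {best_price}: profit={u_profit}, consumer surplus={u_cs}, total={u_profit + u_cs}")
print(f"per-consumer pricing: profit={d_profit}, consumer surplus={d_cs}, total={d_profit + d_cs}")
# typical result: per-consumer pricing serves more consumers and maximizes
# total surplus (no deadweight loss), but consumer surplus drops to zero --
# unless, as asked above, evy welfare-haver is also a producer and gets the
# surplus back on that side.
```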
@niplav one day, friend! one day…
(tbc, i'm reluctant to mk it public or send a copy, bc (A) i think u wudn't benefit much, and (B) i wish to maintain a pure expectation of writing-for-myself.)
(also, in virtuous forecaster mode i predict i'm dead and nothing will come of any of it; i think i cud prob mk some ladders, but i want to go moon w minimal distracting stops in btwn. or smth.)
(btw this meme is v memetically unfit bc it's not factoring out its topics. to survive in the memetic economy, u hv to "niche-segment" ur outputs. now, talking abt how i can hold my breath for 4 mins straight is an example of the opposite of niche-segmentation. niche-segmentation is an unintuitive but much-much important pattern evywhere, so i rec grokking it!)
@niplav oh, oh, me! ...at least if u stretch the definitions of "mathematical" and "working on"…
@cosmiccitizen i call this "emotional compensation" in its general form. we don't BELIEVE whales exist; we just believe it, and can easily ignore (and get along with) "crazies" who think otherwise.
I actually think the brain uses both.
I think it's usually model B. I think effortfwl recall correlates w alpha oscillations bombarded across the cortex, and this reduces temperature in non-bombarded areas. Successfwl recall correlates w P300 EEG-signature, which according to Dehaene indicates a preponderance of inhibitory neurons.
But I also think I've had personal success w "deconcentration of concentration" to recall blocked Tip-of-the-Tongue memories. And this corresponds to model A!
B) FREE UP ENERGY FOR ORDER
Assume that the height of the threshold depends on the stability of sub-completions of m, because the pattern can't complete if its progress gets partially reset all the time.
Now, you can make it easier to recall m by lowering neighborhood noise. But if there are competing memories in that same neighborhood, you may end up accidentally stabilizing the wrong one.
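a minimal sketch of that failure mode, w a stochastic Hopfield net standing in for the neighborhood (my toy, not the model above; all numbers made up): two memories are stored, the cue happens to be closer to the competing one, and annealing the noise down freezes in the wrong attractor.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200
m_target     = rng.choice([-1, 1], size=N)   # the memory you want (hypothetical)
m_competitor = rng.choice([-1, 1], size=N)   # competing memory in the same neighborhood

# Hebbian storage of both patterns, no self-coupling.
W = (np.outer(m_target, m_target) + np.outer(m_competitor, m_competitor)) / N
np.fill_diagonal(W, 0)

# a sloppy cue: ~75% agreement with the competitor, only chance-level with the target
state = np.where(rng.random(N) < 0.75, m_competitor, -m_competitor).astype(float)

overlap = lambda a, b: float(a @ b) / N

# anneal: lower the "neighborhood noise" (temperature), Glauber updates
for T in [2.0, 1.0, 0.5, 0.2, 0.05]:
    for _ in range(20 * N):
        i = rng.integers(N)
        h = W[i] @ state                            # local field on neuron i
        p_up = 1.0 / (1.0 + np.exp(-2.0 * h / T))   # Glauber flip probability
        state[i] = 1.0 if rng.random() < p_up else -1.0
    print(f"T={T:4.2f}  overlap(target)={overlap(state, m_target):+.2f}  "
          f"overlap(competitor)={overlap(state, m_competitor):+.2f}")
# typical run: as T drops, overlap with the *competitor* heads to ~1.0,
# ie lowering the noise stabilized the wrong memory -- exactly the caveat above.
```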
A) STOCHASTIC RESONANCE
Assume that the transition to m depends on a threshold for total activation of m *relative to* competitive activity nearby.
Now, to recall m, simply add uniform noise to the general neighborhood of M. The noise is merely additive for neurons not in M, whereas it's *multiplicative* for M due to its loops. The noise relatively amplifies activity in recurrent networks, but only if they survive the noise (ie only if the noise stays below their "error threshold", in the mutation-rate sense).
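and a minimal sketch of the noise-helps part: the textbook single-threshold stochastic-resonance demo (not a cell-assembly model, and it doesn't reproduce the additive-vs-multiplicative claim; it only shows that an intermediate amount of noise best reveals a subthreshold pattern; all numbers made up).

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(0, 200, 0.1)
signal = 0.4 * np.sin(2 * np.pi * 0.05 * t)   # subthreshold "cue" (hypothetical numbers)
threshold = 1.0                               # the cue alone never crosses this

for noise_amp in [0.0, 0.3, 0.7, 1.5, 4.0]:
    crossings = 0
    near_peak = 0
    for trial in range(200):
        noise = noise_amp * rng.uniform(-1, 1, size=t.size)
        spikes = (signal + noise) > threshold
        crossings += spikes.sum()
        near_peak += spikes[signal > 0.3].sum()   # crossings that land near the cue's peaks
    frac = near_peak / max(crossings, 1)
    print(f"noise={noise_amp:3.1f}  crossings/trial={crossings / 200:7.1f}  "
          f"fraction near cue peaks={frac:.2f}")
# typical output: no crossings at the low noise levels, nearly all crossings
# landing on the cue's peaks at moderate noise, and the fraction drifting
# back toward chance as the noise grows -- an intermediate noise level
# best reveals the subthreshold pattern.
```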
Assume memory m is an ordered state in recurrent cell assembly M.
M is usually in any of its many disordered states due to stochastic activity in its neighbourhood, but there's always a threshold for *pattern-similarity* to m (wrt phase/frequency of constituent neurons) above which its internal feedback loops suffice to autocomplete the rest of it.
Here are ways you can lower that threshold and likelify M's phase-transition into m →
How would you (in theory) manually store memories into a ferromagnetic material (cf Ising model)?
By "disguising" the memory-pattern with the energy signature of the naturally ordered state (eg uniform), from the perspective of the magnetic field.
"Recalling" the memory means lowering local temperature to the (critical) Curie point.
The "associative memories" to a ferromagnet X is anything that is *paramagnetic relative to* X.
Generalize to get Hopfield networks.
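one way to cash out that generalization in code (my gloss: the "disguise the pattern as the ordered state" move is the Mattis-style coupling trick J_ij = s_i·s_j, Hebbian storage of several patterns gives a Hopfield net, and "cooling below the Curie point" becomes running the zero-temperature update rule until it settles; patterns and sizes here are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 100
patterns = rng.choice([-1, 1], size=(3, N))   # three hypothetical memories

# Hebbian couplings: sum of outer products, zero self-coupling.
# (with a single pattern this is exactly the Mattis twist of a uniform ferromagnet)
J = sum(np.outer(s, s) for s in patterns) / N
np.fill_diagonal(J, 0)

def recall(cue, steps=10 * N):
    """Zero-temperature (deterministic) asynchronous spin updates."""
    state = cue.astype(float)
    for _ in range(steps):
        i = rng.integers(N)
        state[i] = 1.0 if J[i] @ state >= 0 else -1.0   # align with local field
    return state

# corrupt pattern 0 by flipping ~20% of its spins, then "cool"
cue = patterns[0] * np.where(rng.random(N) < 0.2, -1, 1)
out = recall(cue)
print("overlap with the stored pattern:", float(out @ patterns[0]) / N)   # ~1.0
```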
a nice thing abt feeling free to continually develop my own idiolect wo anyone's permission is that i can write my morning thoughts in 11 paragraphs instead of a big hecking book
i have no expectation that anyone can read and understand evything i'm saying here, but u can think of it like "here's a demonstration of how u can write notes to urself; feel free to emulate to see if it works for u"
also, there are several logic-leaps here w lots of missing pieces even in my head, so read w salt
@niplav Orbs?! 👀 Did anyone mention orbs? Where are the orbs?
*frantically looks around, wand in hand*
Flowers are selective about what kind of pollinator they attract. Diurnal flowers use diverse colours to stand out in a competition for visual salience against their neighbours. But flowers with nocturnal anthesis are generally white, as they aim only to outshine the night.