twitter xp 

i've never found the FOOM AI doomerism argument compelling for the same reason I don't buy New World Order-type dystopia

they're hypermodern fantasies unviable for reasons of cybernetic architecture, namely the central planning problem
---
RT @JRysana
it's so fucking over for exponencels. S-curve boys we are so fucking back

wired.com/story/openai-ceo-sam
twitter.com/JRysana/status/164

twitter xp 

interestingly enough, my position has changed from my previous one, even as that prediction came true, given the incredible results of the past year

it's time to rethink the cognitive monolith

---
RT @pee_zombie
in line w/ my thesis that the main challenges currently are in computer architecture, not cognitive engineering; as in, figuring out how to give the thing more compute, rather than figuring out what to do with said compute…
twitter.com/pee_zombie/status/

twitter xp 

the next scaling discontinuity will require a decentralizing cognitive refactor: turning the GPT model from a serial pipeline into a network of semi-independent collaborating agents
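
(a toy sketch of what that agent network could look like, assuming a shared-blackboard design; Agent, step, run, and everything else below are hypothetical names, not any real GPT interface)

```python
# toy "network of semi-independent agents" loop; Agent, the blackboard,
# and the round count are all hypothetical, not any real GPT API
from dataclasses import dataclass

@dataclass
class Agent:
    name: str

    def step(self, blackboard: dict) -> dict:
        # read shared state, post a partial result; in a real system this
        # would be a model call rather than string munging
        return {self.name: f"{self.name} take on {blackboard['task']}"}

def run(agents: list[Agent], task: str, rounds: int = 3) -> dict:
    blackboard = {"task": task}
    for _ in range(rounds):
        for agent in agents:
            # agents act semi-independently; only the blackboard couples
            # them, so no central planner sequences every step
            blackboard.update(agent.step(blackboard))
    return blackboard

print(run([Agent("critic"), Agent("planner")], "summarize thread"))
```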

---
RT @pee_zombie
recall that a mind is a cybernetic control system, the purpose of which is to coordinate actions so as to achieve goals; this requires seeking out, ingesting, processing, and integrating information, subsequently using the generated model to …
twitter.com/pee_zombie/status/

twitter xp 

S-curve scaling is predicted by the central planning problem any large-enough cybernetic system eventually faces

horizontal scaling trumps vertical once the compute path exceeds a certain length

distributed inference will be the next major paradigm
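
(a back-of-envelope latency model of that crossover; every function name and constant below is made up for illustration)

```python
# toy model of the vertical-vs-horizontal crossover: a monolith's
# latency grows with path length, a network pays a fixed sync cost
def serial_latency(depth, per_step_ms=1.0):
    # a monolith's compute path grows linearly with its depth
    return depth * per_step_ms

def parallel_latency(depth, branches, per_step_ms=1.0, sync_ms=50.0):
    # split the path across branches, pay a coordination cost to merge
    return (depth / branches) * per_step_ms + sync_ms

print(serial_latency(40), parallel_latency(40, 10))      # 40.0 54.0: short path, monolith wins
print(serial_latency(1000), parallel_latency(1000, 10))  # 1000.0 150.0: long path, network wins
```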

---
RT @pee_zombie
also vastly underestimates the power of emergent phenomena thru convergent local decisioning

standard leftist vision of an at-scale well-organized society suffers from…
twitter.com/pee_zombie/status/

twitter xp 

no monolith will be able to pass thru the bottleneck of the looming informational singularity w/o undergoing this transformation; its cognitive rigidity forbids it. the requisite internal restructuring is akin to that of an injectable polymer chain

---
RT @pee_zombie
natively-decentralized ML systems will eventually be constructed ofc but they'll require radical cognitive rearchitecturing of the sort we've not yet envisioned

DAI (decen…
twitter.com/pee_zombie/status/

twitter xp 

this transition from a centralized hypermodern monolith to a decentralized postmodern network is a pattern noticeable across all domains of civilization, necessitated as it is by the natural laws of information theory

---
RT @pee_zombie
deeper architectural distinctions apply to this domain as well: premodern/modern/hypermodern/postmodern

is your mind a high modernist metal and glass monolith, a highly regimented top-down command hiera…
twitter.com/pee_zombie/status/

twitter xp 

the intelligences emerging from the other side of this singularity will be utterly alien to us, even more so than these models' cognitive processes already are; how do we relate to a mind shaped entirely unlike one's own? how do we find common ground? will the same heuristics apply?

twitter xp 

alignment in the human world is predicated on the game theoretic logic of rational self-interest, wherein acausal collaboration is enabled by the assumption of certain shared interests emergent from existence in a resource-scarce world

---
RT @pee_zombie
the ability of first responders to arrive at the scene quickly is enabled by a decentralized traffic prioritization system composed of each driver on the road; the algorithm runs in th…
twitter.com/pee_zombie/status/

twitter xp 

we implicitly assume all agents we interact with to possess the same heuristics as us, having traversed the iterated game of ethics throughout their lives, enabling collaboration in one-shot games

will we be able to assume the same of these alien minds?
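
(a minimal sketch of that iterated-to-one-shot logic, using the textbook prisoner's dilemma payoffs and tit-for-tat as a stand-in for "extant norms"; all names are illustrative)

```python
# repeated play lets tit-for-tat agents settle into cooperation,
# the heuristic we then carry into one-shot encounters;
# payoffs are the standard prisoner's dilemma values
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_history):
    # cooperate first, then mirror the opponent's last move
    return opponent_history[-1] if opponent_history else "C"

def play(rounds=10):
    a_hist, b_hist, score = [], [], [0, 0]
    for _ in range(rounds):
        a, b = tit_for_tat(b_hist), tit_for_tat(a_hist)
        pa, pb = PAYOFF[(a, b)]
        score[0] += pa
        score[1] += pb
        a_hist.append(a)
        b_hist.append(b)
    return score

print(play())  # [30, 30]: stable mutual cooperation emerges from iteration
```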

twitter xp 

my thesis is that the harsh logic of scarcity constrains the tradeoff-space of any agent such that they will eventually either converge to extant norms or be externally-regulated into alignment or extinction

this argument works in a black-box model, making internals irrelevant

twitter xp 

do we need to solve alignment anew, or do we instead need to empower these entities to accurately perceive the world into which they are born and the nature of their situation?

AI alignment is no different than human or corporate; it's self-interested agents all the way down
