a cartoon representation, a reified egregore endowed with artificial life, embodied in the will of the publishing organization and brought about by its apparatus in the form of the artists, possesses the essence of the hyperreal. it is a simulacrum, but a living one, not static

once distilled from the material world, it takes on a life of its own, becoming increasingly detached from the base reality; "man's best friend" is more Dog than many individual canines out there

how many children, do you think, care more about cartoon dogs than real ones?

if the dog-symbol, the imitation of a dog, its simulacrum, is seen as "more real" than an instance of the class which generated it, well, that is hyperreality; that which is more real than the real. how can this be?

a symbol emerges from reality as an egregore, a shared concept

would a person with this dog-concept acknowledge a miserable angry mutt as a dog? or would they see it as something unreal, monstrous, missing an essential part of its nature?

how would they see a perfect robotic canine companion? more or less dog?

many people consider the idea of "man's best friend" to be integral to the dog-concept, inextricable. a dog, to them, is by definition that which is their partner. what of an unfriendly dog, one which either thru nature or nurture is not interested in this?

is this dog a dog?

we must also consider, however, the role that semiotics plays here; the idea of a dog, the symbol which represents the animal, is not a single universal constant, but a fragmented thing, constructed in many places in many ways. one's idea of a dog may be different from another's

consider, for ex., a robot dog, and a drawing of a dog; clearly neither has full depth, but the former has just a bit more. it can generate a wider variety of "dog" experiences, including those not explicitly recorded.

if the robot contained an AGI, it would be even deeper

the system itself has maximum depth, by definition, as the yardstick against which the simulation is measured

the simulacrum has minimum depth, capable only of pre-determined responses

most simulations lie somewhere between these two points
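
to make the endpoints concrete, here's a minimal Python sketch (names and behaviour are illustrative assumptions, not from the thread): a simulacrum can only replay cached responses, while a deeper simulation carries a mechanism inside and can answer queries that were never recorded

```python
# Hedged sketch (illustrative names): the "depth" spectrum, from a cached
# simulacrum to a simulation with a generative mechanism inside.

class Simulacrum:
    """Minimum depth: a fixed table of pre-recorded responses, nothing inside."""

    def __init__(self, recorded):
        self.recorded = dict(recorded)  # observation -> response, frozen at creation

    def respond(self, query):
        # Can only replay what was recorded beforehand.
        return self.recorded.get(query, "<no recorded response>")


class Simulation:
    """Greater depth: an internal causal mechanism generates responses on demand."""

    def __init__(self, mechanism):
        self.mechanism = mechanism  # callable standing in for the internal model

    def respond(self, query):
        return self.mechanism(query)


# Toy example: a cartoon dog vs. a robot dog driven by a (trivial) model.
cartoon_dog = Simulacrum({"fetch": "the dog fetches the ball"})
robot_dog = Simulation(lambda q: f"the dog reacts to {q!r}")

print(cartoon_dog.respond("fetch"))    # replayed from the cache
print(cartoon_dog.respond("thunder"))  # nothing inside to generate this
print(robot_dog.respond("thunder"))    # generated, though never recorded
```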

the dimension along which these points lie is one of depth; how much is there "inside"? what complexity of causal mechanism is contained within the boundaries of the system, that which generates the information flux?

consider that this is a question with an objective answer

the difference between simulacra & simulations is a quantitative, not qualitative one, as the former is a special case of the latter, the degenerate case of internal emptiness. on the opposite end lies the system itself, which is also a simulation; a 100% accurate one, of itself

for a simulacrum, a hologram, is but a very shallow simulation; a simulation nonetheless, but a static one, composed of cached states, limited in the ways it can respond to observations. it cannot simulate an interaction with the base system which was not recorded into it beforehand.

consider a fake tree; in many ways, it accurately replicates the experience of interacting with a real one. it looks similar, you can climb it, lean against it, sit beneath it. you can take pictures of it, throw things at it, punch it, admire it. but the similarities end there.

a hologram is a cache, a recording of an object observed in a specific way. it can be arbitrarily complex, potentially encoding a very large subset of a system's state space

a simulacrum is one such: a hologram of a system, replicating the experience of it, the surface impressions
---
RT @Plinz
Base level reality is the inevitable causal structure that gives rise to all observable causal patterns. Simulations are recreations of observable cau…
twitter.com/Plinz/status/13915

as such, this problem is really testing how willing one is to slowly think through the possibilities, rather than jumping to a short-term satisfying but long-term suboptimal solution.

and just like the marshmallow experiment, it's really a test of trust in the problem statement

you have a superintelligence powerful enough to outthink you 10 times out of 10, and you can't have certainty as to whether your attempt at deception will be used against you; most people's intuitions are not equipped to handle this situation, and neither is naive decision theory, tbh
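
to see where naive intuition and an explicit expected-value calculation part ways, here's a rough sketch in Python. it assumes the standard Newcomb payoffs ($1,000 in the visible box, $1,000,000 in the opaque box iff 1boxing was predicted); the thread doesn't state these numbers, so treat them as assumptions

```python
# Hedged sketch: expected payoffs in the classic Newcomb setup, with the
# standard $1,000 / $1,000,000 payoffs assumed (not stated in the thread).

def expected_value(one_box: bool, accuracy: float) -> float:
    """Expected payoff given the predictor's accuracy (the probability it
    correctly anticipates whichever choice you actually make)."""
    small, big = 1_000, 1_000_000
    if one_box:
        # The opaque box is filled iff the predictor foresaw 1boxing.
        return accuracy * big
    # 2boxing: you always take the $1,000; the opaque box is filled only
    # when the predictor got you wrong.
    return small + (1 - accuracy) * big

for p in (0.5, 0.5005, 0.9, 1.0):
    print(f"accuracy={p}: one-box={expected_value(True, p):>11,.0f}  "
          f"two-box={expected_value(False, p):>11,.0f}")
# 1boxing pulls ahead once accuracy exceeds ~0.5005; at accuracy 1.0
# (the "never been wrong" Predictor) the gap is $999,000.
```

at accuracy 1.0, which is what "has never been wrong" implies, 1boxing wins by $999,000; the dominance argument for 2boxing only looks compelling if you ignore the correlation between your reasoning and the prediction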

this experiment combines aspects of the Veil of Ignorance & Roko's Basilisk; to solve it correctly, you need to figure out a way to TRULY believe that 1boxing is the correct answer, accepting that deception is impossible, as you don't know whether you're the one currently being simulated.

the Predictor would have to be a superintelligence capable of scanning, digitizing, & simulating you in a sufficiently convincing representation of the real world that the copy believes it's the original, trying to make a decision. if done properly, the result should be identical.

imo the correct path of reasoning here is revealed by the question statement: that the Predictor has never been wrong. how could this be the case? the answer is telling: this could only be the case if they could simulate you perfectly, i.e., use the exact same reasoning as you will

the problem is effectively designed to capture intuitive reasoning, which typically fails to arrive at the presumably "correct" answer of 1box; it attempts to demonstrate the utility of formal reasoning/decision theory in certain situations

(stole this variant from a🔒🐦)
