@TetraspaceGrouping Since we can't really "make a difference", we should only care which universe "we" (in some intuitive sense) end up in.

Sure, there's some hell universe, but since the heaven universe and the hell universe both exist, I prefer that I end up in the heaven universe and some other version of myself ends up in the hell universe. (This doesn't work if I conceive of myself as "the me algorithm", since then I'm in both universes, but it does work if I instead take a more matter-bound view of identity.)

@niplav I'm an algorithmic-identity person. I feel like there's something measure-weighting / universe-counting going on: maybe 90% of mathematical objects with a slot for your algorithm would be heaven-universes if your algorithm outputs A, but only 80% would be if it outputs B, so you ought to choose action A, and that's where ethics comes from. (But also, outside of time, your choosing A just is the fact of what your algorithm outputs, so free will isn't real?)
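A throwaway sketch of that universe-counting rule, where the 90% / 80% are just the made-up illustrative numbers from above:

```python
# Hypothetical sketch of the universe-counting decision rule.
# The 90% / 80% figures are the invented numbers from the post.

heaven_fraction = {
    "A": 0.90,  # measure-weighted fraction of embeddings that are heaven-universes if the algorithm outputs A
    "B": 0.80,  # same, if it outputs B
}

# Choose whichever output makes the heaven-fraction largest.
best_output = max(heaven_fraction, key=heaven_fraction.get)
print(best_output)  # "A"
```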


@TetraspaceGrouping I think I understand what you mean; that's a nice solution.

Do you agree with this rephrasing:

Assume:
* An agent/a person is an algorithm
* All algorithms/mathematical structures are real, but the degree to which they're real is weighted by how complex they are (more complex ones are less real)
* The thing I can be identified as is my algorithm (and as such, there is such a thing as "decision", and, in some sense, "free will")

@TetraspaceGrouping
Given those assumptions, normal ethics is reconstructed in the way you describe (and you don't have to worry about the equal and opposite thing happening, because my algorithm, except that it decides B, is more complex and therefore less real than me).

If the complexity metric isn't given, you don't get that result: the me-algorithm that outputs A is just as real as the me-algorithm-except-B that outputs B.
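A minimal sketch of that tie-break, assuming a Solomonoff-style prior that gives an algorithm of description length k bits a weight of 2^-k (the bit counts here are invented):

```python
# Sketch of why the complexity weighting breaks the tie.
# Assumption: a Solomonoff-style prior where an algorithm whose shortest
# description is k bits long gets weight 2**-k; the lengths are made up.

def reality_weight(description_length_bits: int) -> float:
    """Weight ("degree of realness") under the assumed 2^-k prior."""
    return 2.0 ** -description_length_bits

me_that_outputs_A = reality_weight(100)   # the plain me-algorithm
me_except_B = reality_weight(100 + 50)    # the me-algorithm plus a patch forcing output B

print(me_that_outputs_A / me_except_B)  # 2**50: the patched version is far less real

# Drop the prior (give every algorithm weight 1) and both versions are
# equally real, which is exactly the "equal and opposite" worry above.
```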
