@niplav What does indexical mean here?
@TetraspaceGrouping Since we can't really "make a difference", we should only care which universe "we" (in some intuitive sense) end up in.
Sure, there's some hell universe, but since the heaven universe and the hell universe both exist, I prefer me being in the heaven universe and some other version of myself ending up in the hell universe. (This doesn't work if I conceive of myself as "the me algorithm", since then I'm in both universes, but it does work if I instead take a more matter-bound view of identity.)
@TetraspaceGrouping I think I understand what you mean, that's a nice solution.
Do you agree with this rephrase:
Assume:
* An agent/a person is an algorithm
* All algorithms/mathematical structures are real, but the degree to which they're real is weighted by their complexity (more complex ones are less real)
* The thing I can be identified as is my algorithm (and as such, there is such a thing as "decision", and, in some sense, "free will")
@TetraspaceGrouping
Given those assumptions, normal ethics is reconstructed in the way you describe (and you don't have to worry about the equal and opposite thing happening, because your-algorithm-except-that-it-decides-B is more complex, and therefore less real, than your algorithm).
If the complexity metric isn't given, then you don't get that result – the me-algorithm that outputs A is just as real as the me-algorithm-except-B that outputs B.
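The weighting step above can be sketched in code. This is a toy illustration only: Kolmogorov complexity is uncomputable, so as a stand-in it uses the bit-length of a made-up minimal description of each algorithm and assigns a Solomonoff-style weight of 2^(-length). The descriptions (`me`, `me_except_b`) are hypothetical placeholders, not anything from the thread.

```python
def weight(description: str) -> float:
    """Toy 'reality-weight' of an algorithm: 2^-(description length in bits).

    Stands in for a Solomonoff-style prior; real Kolmogorov complexity
    is uncomputable, so description length is a crude proxy.
    """
    bits = 8 * len(description.encode("utf-8"))
    return 2.0 ** -bits

# The base "me" algorithm versus the same algorithm with a patch
# bolted on that overrides one decision ("except it decides B").
me = "decide(A)"
me_except_b = "decide(A); override: decide(B)"

# The patched variant needs a strictly longer description, so under
# this metric it gets a strictly smaller weight ("is less real"):
assert weight(me_except_b) < weight(me)
```

Without some such complexity metric, both descriptions would count equally, which is exactly the failure mode described above.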