@niplav What does indexical mean here?
@TetraspaceGrouping Since we can't really "make a difference", we should only care about which universe "we" (in some intuitive sense) end up in.
Sure, there's some hell universe, but since the heaven universe and the hell universe both exist, I prefer being in the heaven universe while some other version of myself ends up in the hell universe. (This doesn't work if I conceive of myself as "the me algorithm", since then I'm in both universes, but it does work if I instead take a more matter-bound view of identity.)
@niplav I’m an algorithmic-identity person. I feel like there’s something metric-weight-y / universe-counting going on: maybe 90% of mathematical objects with a slot for your algorithm would be heaven-universes if your algorithm outputs A, and only 80% would be if it outputs B, so you ought to choose action A, and that’s where ethics comes from. (But also, outside of time, you choosing A just is the fact of what your algorithm outputs, so free will isn’t real?)
@TetraspaceGrouping
Given those assumptions, normal ethics is reconstructed in the way you describe (and you don't have to worry about the equal and opposite thing happening, because your algorithm, modified so that it decides B instead, is more complex and therefore less real than the unmodified one).
If no complexity metric is given, you don't get that result – the me-algorithm that outputs A is just as real as the me-algorithm-except-B that outputs B.
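The universe-counting intuition above, together with the complexity-weighting caveat, can be put into a toy numeric sketch. Everything here is hypothetical for illustration: the ensemble of "universes", the 90%/80% heaven-fractions, and the description lengths standing in for a complexity metric (weighted Solomonoff-style as 2^(-complexity)):

```python
# Toy sketch of the measure-counting argument: enumerate "universes" that
# have a slot for the agent's algorithm, count what fraction are heaven-
# universes under each possible output, and optionally weight each
# algorithm-variant by a simplicity prior 2^(-complexity).
# All numbers below are made up for illustration.

def fraction_heaven(universes, output):
    """Fraction of universes that are heaven when the agent outputs `output`."""
    return sum(u[output] for u in universes) / len(universes)

# Hypothetical ensemble: each universe maps an output to heaven (True) or not.
universes = (
    [{"A": True, "B": True}] * 8      # heaven either way
    + [{"A": True, "B": False}] * 1   # heaven only under output A
    + [{"A": False, "B": False}] * 1  # hell either way
)

p_a = fraction_heaven(universes, "A")  # 0.9, matching the 90% in the thread
p_b = fraction_heaven(universes, "B")  # 0.8, matching the 80%

# Without a complexity metric, the A-variant and the except-B-variant of the
# algorithm get equal weight (the "just as real" problem):
uniform_measure = {"A": 0.5, "B": 0.5}

# With a simplicity prior, the modified (more complex) B-variant gets less
# measure; 10 and 14 bits are hypothetical description lengths:
complexity_bits = {"A": 10, "B": 14}
weights = {k: 2.0 ** -c for k, c in complexity_bits.items()}
total = sum(weights.values())
simplicity_measure = {k: w / total for k, w in weights.items()}

best = max(("A", "B"), key=lambda k: {"A": p_a, "B": p_b}[k])
print(best)
```

Under this toy setup, choosing A maximizes the heaven-fraction, and the simplicity prior additionally assigns the except-B variant less measure (2^-14 vs 2^-10), so the "equal and opposite" variant doesn't cancel the result.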