Think of multimodal VR input as a giant manifold the user's attention flows across, where valleys smoothly blend inputs into one another as the useful range of one ends and the next begins. One could probably build this manually if they were obsessively single-minded enough.
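One toy way to sketch that smooth hand-off (everything here is hypothetical, just a softmax over per-input confidence values): each input channel gets a confidence score, and attention flows continuously toward whichever channel is currently most confident, with no hard switch.

```python
import math

def blend_weights(confidences, sharpness=8.0):
    """Softmax over per-input confidences: as one input's confidence
    falls off, attention weight flows smoothly into the next one.
    `sharpness` controls how steep the valleys between inputs are."""
    exps = [math.exp(sharpness * c) for c in confidences]
    total = sum(exps)
    return [e / total for e in exps]

# e.g. gaze is confident, controller barely is:
# blend_weights([0.9, 0.2]) -> gaze dominates, but the controller
# never drops to a hard zero, so the hand-off stays continuous.
```

The weights always sum to 1, so downstream code can just take a weighted combination of the inputs rather than branching on which one is "active".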

Most VR input right now is, at its most complicated, a series of linear curve mappings. Pull the trigger inwards and the value goes from 0 to 1. Put a finger near the button and the glow does the same lerp, maybe with an easing curve. Connecting them together is the next step.
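The two mappings above might look something like this (a minimal sketch; the function names, the 5 cm proximity range, and the smoothstep easing are all my own assumptions, not any particular runtime's API):

```python
def trigger_value(pull):
    """Linear mapping: physical trigger pull maps straight to 0..1."""
    return max(0.0, min(1.0, pull))

def glow_intensity(distance, max_dist=0.05):
    """Finger proximity -> button glow: same lerp, run through a
    smoothstep easing curve so the glow ramps in and out gently.
    `distance` and `max_dist` are in meters."""
    t = max(0.0, min(1.0, 1.0 - distance / max_dist))
    return t * t * (3.0 - 2.0 * t)  # smoothstep easing
```

Each mapping is a self-contained scalar function of one sensor value, which is exactly why they stay disconnected: nothing in either function knows the other exists.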


This does seem like the kind of task that AI *should* be uniquely suited to, but the problem is that for that you need an extraordinary amount of data - the problem more or less needs to be solved already, which defeats the point. Once built, though... parametric perturbation, plus good/bad ratings for each generated variation, would let you make AI models that iteratively feel better than the human-bootstrapped one.
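The perturb-and-rate loop could be sketched as simple hill climbing (entirely hypothetical: the `score` callback stands in for human good/bad ratings of how a variation feels, and the parameter names are made up):

```python
import random

def perturb(params, scale=0.1):
    """Jitter every tunable parameter by a small random amount."""
    return {k: v + random.uniform(-scale, scale) for k, v in params.items()}

def hill_climb(params, score, iters=200, seed=0):
    """Keep whichever variation rates best; the bootstrapped
    hand-tuned params are just the starting point."""
    random.seed(seed)
    best, best_score = dict(params), score(params)
    for _ in range(iters):
        candidate = perturb(best)
        s = score(candidate)
        if s > best_score:
            best, best_score = candidate, s
    return best
```

In practice `score` would be the expensive part (a person in a headset rating each variation), which is why you'd want the hand-built version first: it gives the search somewhere sane to start from.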
