Most VR input right now is, at its most complicated, a series of linear curve mappings. Pull the trigger inwards and the value goes from 0 to 1. Put a finger near a button and its glow does the same lerp, maybe with an easing curve. Connecting those mappings together is the next step.
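A minimal sketch of that kind of mapping, just to make it concrete: a raw trigger value in [0, 1] driving a glow intensity through an easing curve. The function and parameter names are hypothetical, not from any particular SDK.

```python
def ease_in_out(t: float) -> float:
    """Smoothstep easing: slow at the ends, faster in the middle."""
    t = max(0.0, min(1.0, t))
    return t * t * (3.0 - 2.0 * t)

def glow_from_trigger(trigger_value: float) -> float:
    """Map raw trigger pull (0..1) to button glow intensity (0..1)."""
    return ease_in_out(trigger_value)

# Sample the mapping across the trigger's travel.
for pull in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"trigger={pull:.2f} -> glow={glow_from_trigger(pull):.2f}")
```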
This does seem like the kind of task that AI *should* be uniquely suited to, but the problem is that you'd need an extraordinary amount of data - the problem more or less needs to be solved already, which defeats the point. Once a hand-tuned baseline exists, though, parametric perturbation - generating variations and scoring each one as good or bad - would let you train models that iteratively feel better than the human-bootstrapped one.
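A hedged sketch of what that perturbation loop might look like: jitter a hand-tuned parameter set, score each variation, and keep the best-rated candidate as the next baseline. The parameter names and the `rate` function are illustrative assumptions; in practice the score would come from playtest ratings or a learned "does this feel good?" model, not a formula.

```python
import random

# Hypothetical hand-tuned baseline for an input mapping.
baseline = {"ease_exponent": 2.0, "glow_radius": 0.05, "deadzone": 0.02}

def perturb(params: dict, scale: float = 0.1) -> dict:
    """Return a copy of params with each value nudged by up to +/- scale."""
    return {k: v * (1.0 + random.uniform(-scale, scale)) for k, v in params.items()}

def rate(params: dict) -> float:
    """Stand-in for a good/bad feel rating; higher is better."""
    return -abs(params["ease_exponent"] - 2.2) - abs(params["deadzone"] - 0.01)

# Simple hill-climb: generate variations, keep the best-rated one.
for generation in range(10):
    candidates = [perturb(baseline) for _ in range(8)] + [baseline]
    baseline = max(candidates, key=rate)
    print(f"gen {generation}: score={rate(baseline):.4f}")
```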