Hypothesis/assumption:
It will be difficult for the eyes to be descriptive enough for the arms to perform well; frustration may ensue.
Results:
Extreme environmental detail proved less necessary than I expected, especially when it was not available or visible to both parties. The VR eyes could describe less (or describe more contextually, instructing known interactions such as "open the drawer") because the CR arms could feel haptic feedback when they passed over an interactive item. The CR arms moved somewhat like a scanner searching for these locations, and the CR actor felt satisfied when the audio met their expectations. The VR actor, however, felt they were missing part of the audio narrative because they had to spend attention instructing the other person. This exercise also highlighted the importance of a fixed virtual spatial environment, which allowed the CR actor to build some spatial memory.