Gameplay:
A turn-based game in which the goal is to collide your controller with the other player's controller while blindfolded.
Two blindfolded fighters take turns striking at and dodging each other's controllers, moving according to directions from a third-party mediator.
Game ends when controllers collide.
Rules:
Every turn, the striking player tries to hit the opponent's controller and the opponent tries to dodge the strike. Both players must move on every strike.
The mediator, who is wearing the VR headset, can only see the virtual representation of the controllers in the VR environment. Using only this information, she announces instructions for each turn. For example, on player one’s turn the mediator might say, “Move to the left. One, two, three, player one strike!”
While the mediator's overall goal is to guide the controllers to collision, the directions she gives are not necessarily for the striking player, nor do they have to be accurate in any way.
The origin:
1-2-3 Strike was created by participants in the 2018 interreality workshop. The group was instructed to create a game that used the VR hardware the "wrong" way.
I liked:
That the game required no extra physical or virtual objects (apart from the VR hardware itself), exposing some of the richness of interreal space. How could extra objects (physical and virtual alike) enhance this type of interaction?
Forthcoming:
Motion trace: Some players wanted to see the motion trace of the controllers in a playback of the game. I would like to try playing with live, cumulative motion traces on the controllers, which would eventually cloud the mediator's vision. This might change the mediator role to require more movement and exploration of the space.
Audio: Everyone in the game receives contextual audio cues that, whether they like it or not, reveal the locations of the other players. I'd like to try a version where the mediator cannot hear.