The evolution of human-computer interaction reflects a deep desire to extend the body seamlessly into virtual space. From the mouse to the touchscreen, the journey has been marked by a push toward simplicity, accessibility, and natural engagement. Yet as our understanding of gesture and spatial presence grows, we are moving beyond tools like mice and joysticks, exploring more direct and intuitive ways to translate human movement into digital environments.
Through experimentation, we’ve tested and refined the ability to transform gestures into accurate digital representations, breaking from conventional interfaces to prioritize intuitive action. This progression has allowed us to map physical movements into 3D space with growing precision, enabling a new way of interacting with virtual environments. By capturing spatial experiences and analyzing hybrid bodies, whether through Kinect skeleton tracking or webcam captures, we’ve begun to understand how different representations of the body shape virtual environments. This exploration enhances spatial quality and redefines how we inhabit and interact with virtual worlds.
The first step involved capturing skilled gestures with Kinect V2 skeleton tracking. A simple hurdles game was developed in which users physically dodge approaching blocks. The experiment demonstrated the Kinect's strength in recognizing gestures but revealed limitations in accurately capturing spatial position.
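To make the dodge mechanic concrete, here is a minimal sketch of the hit test such a hurdles game might run each frame. It assumes the Kinect SDK has already delivered the tracked spine-mid joint in camera-space meters; the block dimensions and thresholds are illustrative values, not numbers from the project.

```python
# Minimal dodge check for the hurdles game, assuming the Kinect V2 SDK
# delivers the tracked spine-mid joint in camera-space meters.
# Block geometry and thresholds are hypothetical, not from the project.

BLOCK_HALF_WIDTH = 0.35   # meters: assumed half-width of a hurdle block
HIT_DEPTH = 0.25          # meters: how close in depth a block must be to count

def is_dodged(spine_x: float, spine_z: float,
              block_x: float, block_z: float) -> bool:
    """Return True if the player's torso is clear of an approaching block.

    spine_x/spine_z: horizontal position and depth of the spine-mid joint.
    block_x/block_z: horizontal position and depth of the hurdle block.
    """
    if abs(spine_z - block_z) > HIT_DEPTH:
        return True                      # block has not yet reached the player
    return abs(spine_x - block_x) > BLOCK_HALF_WIDTH

# Example: the player has stepped 0.6 m to the side as the block arrives.
print(is_dodged(spine_x=0.6, spine_z=2.0, block_x=0.0, block_z=2.1))  # True
```

The depth comparison is where the skeleton data proved least reliable, which is what motivated the dual-webcam setup described next.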
Recognizing the need for better spatial accuracy, the team explored an alternative approach: a dual-webcam system that maps the environment on both the XZ and YZ planes. This setup allowed a more precise 3D representation of user movements, enhancing the interaction experience.
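One plausible way to read that setup is as two roughly orthographic views sharing the Z axis: a top-down camera yields (x, z) and a side camera yields (y, z), with the shared depth estimate averaged. The sketch below assumes hypothetical per-pixel scale and image-center calibration values; the project's actual calibration procedure is not documented here.

```python
# Sketch of fusing two webcam views into one 3D point, assuming each camera
# is approximately orthographic over its plane: the first sees the XZ plane
# (top-down), the second the YZ plane (side-on). Calibration values are
# placeholders, not measured constants.

from typing import Tuple

def pixel_to_plane(px: float, py: float,
                   scale: float, origin: Tuple[float, float]) -> Tuple[float, float]:
    """Map a pixel coordinate to metric coordinates on a camera's plane."""
    return (px - origin[0]) * scale, (py - origin[1]) * scale

def fuse_views(top_px, side_px, scale=0.002, origin=(320.0, 240.0)):
    """Combine a top-down (XZ) and side-on (YZ) detection into (x, y, z).

    Both cameras observe Z, so the shared axis is averaged as a cheap
    consistency check between the two views.
    """
    x, z_top = pixel_to_plane(*top_px, scale, origin)
    y, z_side = pixel_to_plane(*side_px, scale, origin)
    return x, y, (z_top + z_side) / 2.0

# Example: a hand detected at pixel (400, 300) in the top view
# and at pixel (360, 180) in the side view.
print(fuse_views((400.0, 300.0), (360.0, 180.0)))
```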
Recognizing the gesture of arranging objects involves understanding the balance between motion and stability. When placing an object, the primary goal is to bring it to rest in a controlled manner. Rapid placement often results in instability, causing the object to fall or misalign. A successful arrangement therefore depends on steady, deliberate hand movements, which allow precise control and alignment. This observation underpins the design of a system that mimics or enhances human gestures in tasks requiring careful placement; a minimal steadiness check is sketched after the test notes below.
Test 1: with faster positioning
Test 2: with steady positioning
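One way to operationalize the difference between the two tests is to treat the hand as "steady" once its frame-to-frame speed stays under a threshold for a short window. The frame rate, speed threshold, and window length below are assumed values, not measurements from these experiments.

```python
# A minimal steadiness test distinguishing the two trials above, assuming
# hand positions arrive as (x, y, z) tuples at a fixed frame rate.
# All thresholds are illustrative.

import math

FRAME_RATE = 30           # Hz: assumed capture rate
STEADY_SPEED = 0.05       # m/s: below this, the hand counts as "at rest"
STEADY_FRAMES = 15        # ~0.5 s of stillness required to settle a brick

class SteadinessDetector:
    def __init__(self):
        self.prev = None
        self.still_frames = 0

    def update(self, hand_xyz) -> bool:
        """Feed one hand sample; return True once the hand has been steady."""
        if self.prev is not None:
            speed = math.dist(hand_xyz, self.prev) * FRAME_RATE
            self.still_frames = self.still_frames + 1 if speed < STEADY_SPEED else 0
        self.prev = hand_xyz
        return self.still_frames >= STEADY_FRAMES
```

Feeding the detector one hand sample per frame, the fast placement of Test 1 keeps resetting the counter, while the steady placement of Test 2 accumulates roughly half a second of stillness and lets the object settle.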
In the interpolated skeleton graph, joints act as nodes and bones as edges, mapping the body's movements in space. When you crouch with your hands positioned below the knee, a virtual "brick" is generated and adheres to your right hand, symbolizing interaction with an object. If you hold your hand steady for a moment, the brick transitions to a resting position, reflecting stability in the system. By crossing your arms, you activate the physics engine, triggering dynamic interactions that simulate real-world forces and motion, bridging the gap between gesture recognition and physical simulation.
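Taken together, the three gestures form a small per-frame event mapping. The sketch below assumes joints arrive as (x, y, z) tuples in a y-up coordinate frame, that the steadiness decision is computed upstream (for instance with the detector above), and that a left-hand x greater than the right-hand x is an adequate proxy for crossed arms; all names are illustrative rather than the project's actual identifiers.

```python
# Sketch of the gesture-to-event mapping described above. Joints are
# assumed to be (x, y, z) tuples keyed by name in a y-up frame.

def detect_events(joints, right_hand_steady: bool) -> list:
    """Map one skeleton frame to interaction events.

    joints: dict with 'hand_left', 'hand_right', 'knee_right' entries.
    right_hand_steady: result of an upstream steadiness check.
    """
    events = []
    hl, hr, knee = joints['hand_left'], joints['hand_right'], joints['knee_right']

    # Crouch gesture: both hands below the knee spawns a brick on the right hand.
    if hl[1] < knee[1] and hr[1] < knee[1]:
        events.append('spawn_brick')

    # Holding the right hand still settles the attached brick into place.
    if right_hand_steady:
        events.append('rest_brick')

    # Crossed arms: left hand to the right of the right hand toggles physics.
    if hl[0] > hr[0]:
        events.append('enable_physics')

    return events

# Example frame: crouched, steady, arms crossed -> all three events fire.
frame = {'hand_left': (0.3, 0.4, 2.0),
         'hand_right': (0.1, 0.4, 2.0),
         'knee_right': (0.15, 0.5, 2.0)}
print(detect_events(frame, right_hand_steady=True))
```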
The projects IN and OUT critically examine the challenges posed by contemporary digital tools in the architectural design process. They interrogate how these tools, while prioritizing efficiency and productivity, often fail to capture the essence of architectural craft. By streamlining processes, they shield users from the laborious, tactile engagement inherent in traditional practice, creating a disconnect between designer and craft. This echoes what Daniel Cardoso Llach describes in "Builders of the Vision" as a modern revival of the "Albertian split," the historical division between conceptual design and physical making. The projects seek to highlight these fractures, questioning whether our reliance on digital tools ultimately reshapes or diminishes the integrity of architectural practice.