Traditional robotic manipulation methods often explore the entire manipulation space, which is inefficient and time-consuming. An alternative, imitation learning (also known as Learning from Demonstration, LfD), enables robots to learn tasks by observing expert demonstrations and to generalize to new scenarios. This approach is more sample-efficient because it extracts key information directly from expert behavior and environment interactions. However, collecting high-quality demonstrations remains a challenge. To address this, the study proposes an immersive VR-based teleoperation setup for gathering demonstrations: an xArm7 robot with an Inspire robot hand, controlled via a Meta Quest 3 headset and a SenseGlove that provides real-time haptic feedback to the operator. The proposed Haptic-ACT framework integrates RGB images, joint positions, and fingertip contact forces to enhance learning, particularly for soft-object manipulation.
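To make the multimodal input concrete, the sketch below shows one plausible way such a framework could fuse the three observation streams (an RGB image embedding, joint positions, and fingertip contact forces) into a shared token sequence for a transformer-style policy. This is a minimal illustration, not the paper's actual architecture; all dimensions (`IMG_FEAT`, `N_FINGERS`, `D_MODEL`) and the function `fuse_observation` are hypothetical assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions for illustration only (not from the paper).
IMG_FEAT = 512   # assumed size of a CNN embedding of the RGB frame
N_JOINTS = 7     # the xArm7 has 7 joints
N_FINGERS = 5    # assumed number of fingertip force readings on the hand
D_MODEL = 256    # assumed shared token width for the policy network

# Per-modality linear projections into a common embedding space.
W_img = rng.normal(0.0, 0.02, (IMG_FEAT, D_MODEL))
W_joint = rng.normal(0.0, 0.02, (N_JOINTS, D_MODEL))
W_force = rng.normal(0.0, 0.02, (N_FINGERS, D_MODEL))

def fuse_observation(img_feat, joint_pos, fingertip_force):
    """Project each modality to D_MODEL and stack them as one token each.

    A real system would likely use learned encoders and more tokens per
    modality; this just shows the shape of the fusion step.
    """
    tokens = np.stack([
        img_feat @ W_img,
        joint_pos @ W_joint,
        fingertip_force @ W_force,
    ])
    return tokens  # shape: (3 modalities, D_MODEL)

obs_tokens = fuse_observation(
    rng.normal(size=IMG_FEAT),
    rng.normal(size=N_JOINTS),
    rng.normal(size=N_FINGERS),
)
print(obs_tokens.shape)  # (3, 256)
```

Keeping the force signal as its own token lets the downstream policy attend to contact information separately from vision, which is one intuition for why haptic input could help with soft objects.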