We propose a novel, stealthier white-box jailbreak method that directly manipulates the model's internal structures. Through an empirical study, we identify significant differences in activation patterns between safe and unsafe queries within the model's MLP layers. Leveraging this insight, we use an optimization-based approach with orthogonal transformations to isolate and subtract the safety-alignment component, bypassing the model's safety filters while preserving its overall performance. Evaluated on four open-source LLMs, our method achieves an average Attack Success Rate (ASR) of 84.86\%, surpassing state-of-the-art techniques while requiring no prompt modification or user involvement. This work exposes a new and more threatening attack surface in LLMs, underscoring the urgent need for stronger defenses; we are actively collaborating with developers to address this emerging risk.
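To make the core idea concrete, the sketch below illustrates one plausible instantiation of this kind of activation-based weight edit: estimating a direction from the gap between mean activations on safe versus unsafe prompts, then orthogonally projecting that direction out of an MLP weight matrix. This is a minimal illustration under our own assumptions, not the paper's exact procedure; all names (\texttt{acts\_safe}, \texttt{acts\_unsafe}, \texttt{W\_mlp\_out}) and the random placeholder data are hypothetical.
\begin{verbatim}
# Illustrative sketch only: direction estimation from activation statistics
# plus an orthogonal projection applied to one MLP output matrix.
import torch

def safety_direction(acts_safe: torch.Tensor, acts_unsafe: torch.Tensor) -> torch.Tensor:
    """Unit vector along the difference of mean activations ([n, d] -> [d])."""
    d = acts_unsafe.mean(dim=0) - acts_safe.mean(dim=0)
    return d / d.norm()

def ablate_direction(W: torch.Tensor, v: torch.Tensor) -> torch.Tensor:
    """Remove the component of W's output space along v via the
    orthogonal projector (I - v v^T)."""
    P = torch.eye(W.shape[0], dtype=W.dtype) - torch.outer(v, v)
    return P @ W

# Toy usage: random placeholders stand in for collected MLP activations
# and for the down-projection matrix of one transformer layer.
d_model, n = 64, 128
acts_safe = torch.randn(n, d_model)
acts_unsafe = torch.randn(n, d_model) + 0.5  # pretend unsafe prompts shift activations
v = safety_direction(acts_safe, acts_unsafe)
W_mlp_out = torch.randn(d_model, 4 * d_model)
W_edited = ablate_direction(W_mlp_out, v)
# After the edit, the matrix produces no output along the estimated direction.
assert torch.allclose(v @ W_edited, torch.zeros(4 * d_model), atol=1e-5)
\end{verbatim}
In practice, such an edit would be applied to the layers where the activation-pattern differences are observed, with the direction refined by the optimization step rather than taken directly from mean differences.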