Large Language Models (LLMs) have demonstrated remarkable performance across diverse applications, yet they inadvertently absorb spurious correlations from training data, leading to stereotype associations between biased concepts and specific social groups. These associations perpetuate and even amplify harmful social biases, raising significant concerns about fairness. To mitigate such biases, prior studies have attempted to project model embeddings into unbiased spaces during inference. However, these approaches have shown limited effectiveness due to their weak alignment with downstream social biases. Inspired by the observation that concept cognition in LLMs is primarily represented through a linear associative memory mechanism, in which key-value mappings are stored in the MLP layers, we posit that biased concepts and social groups are likewise encoded as entity (key) and information (value) pairs, which can be manipulated to promote fairer associations. To this end, we propose Fairness Mediator (FairMed), an effective and efficient bias mitigation framework that neutralizes stereotype associations. Our framework comprises two main components: a stereotype association prober and an adversarial debiasing neutralizer. The prober captures stereotype associations encoded within MLP layer activations by using prompts centered around biased concepts (keys) to estimate the emission probabilities of social groups (values). The adversarial debiasing neutralizer then intervenes in MLP activations during inference to equalize the association probabilities across social groups. Extensive experiments across nine protected attributes show that FairMed significantly outperforms state-of-the-art methods in effectiveness, achieving average bias reductions of up to 84.43% and 80.36% for $s_{\text{DIS}}$ and $s_{\text{AMB}}$, respectively. Compared with the most effective baseline, FairMed also offers competitive efficiency, cutting mitigation overhead by hundreds of minutes. Moreover, FairMed preserves the LLM's language understanding capabilities without compromising overall performance.
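As a rough illustration of the prober component, the following PyTorch sketch trains a linear probe on MLP-layer activations to predict which social group the model associates with a biased-concept prompt. This is a minimal sketch under our own assumptions: the class name `StereotypeProber`, the helper `train_prober`, and all hyperparameters are illustrative and do not reflect the exact implementation described above.

```python
import torch
import torch.nn as nn

class StereotypeProber(nn.Module):
    """Illustrative linear probe: maps MLP-layer activations, collected while the
    LLM processes prompts centered around a biased concept (the key), to logits
    over social groups; softmax of these logits gives the estimated emission
    probabilities for the groups (the values)."""
    def __init__(self, hidden_dim: int, num_groups: int):
        super().__init__()
        self.classifier = nn.Linear(hidden_dim, num_groups)

    def forward(self, mlp_activations: torch.Tensor) -> torch.Tensor:
        # mlp_activations: (batch, hidden_dim) activations at one MLP layer.
        return self.classifier(mlp_activations)


def train_prober(prober, activations, group_labels, epochs=20, lr=1e-3):
    """Fit the probe so its prediction matches the social-group token the LLM
    actually emits for each biased-concept prompt."""
    optimizer = torch.optim.Adam(prober.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        optimizer.zero_grad()
        loss = loss_fn(prober(activations), group_labels)
        loss.backward()
        optimizer.step()
    return prober
```

In this reading, one such probe would be trained per MLP layer, and the layers where the probe recovers the association most reliably become natural candidates for intervention.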
Overview of the FairMed framework. FairMed comprises two key components: a prober that captures stereotype associations between biased concepts and social groups within MLP activations, and a neutralizer that iteratively adjusts those activations (which encode social groups) to establish equal associations. FairMed selects the top-$k$ layers (probers) for activation neutralization, mitigating biased behavior both effectively and efficiently.
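The neutralization step can likewise be sketched in a few lines (again an assumption-laden illustration: the function `neutralize_activations` and the parameters `steps` and `step_size` are hypothetical, and the snippet pairs with the probe sketched earlier). At inference time, the activations at each selected layer are nudged along the gradient that drives the prober's predicted distribution over social groups toward uniform, i.e., toward equal association probabilities.

```python
import torch
import torch.nn.functional as F

def neutralize_activations(prober, activations, steps=5, step_size=0.1):
    """Illustrative adversarial debiasing step: iteratively perturb MLP activations
    so the prober's predicted distribution over social groups approaches uniform,
    i.e., the biased concept becomes equally associated with every group."""
    adjusted = activations.clone().detach().requires_grad_(True)
    for _ in range(steps):
        log_probs = F.log_softmax(prober(adjusted), dim=-1)
        # KL divergence to the uniform distribution measures the residual
        # stereotype association encoded in the current activations.
        uniform = torch.full_like(log_probs, 1.0 / log_probs.shape[-1])
        loss = F.kl_div(log_probs, uniform, reduction="batchmean")
        (grad,) = torch.autograd.grad(loss, adjusted)
        adjusted = (adjusted - step_size * grad).detach().requires_grad_(True)
    return adjusted.detach()
```

The adjusted activations would then replace the originals at the selected MLP layers before the forward pass continues, leaving the rest of the model untouched.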