
Adaptive Detoxification: Safeguarding General Capabilities of LLMs through Toxicity-Aware Knowledge Editing

Link: http://arxiv.org/abs/2505.22298v1

PDF Link: http://arxiv.org/pdf/2505.22298v1

Summary: Large language models (LLMs) exhibit impressive language capabilities but remain vulnerable to malicious prompts and jailbreaking attacks. Existing knowledge editing methods for LLM detoxification face two major challenges. First, they often rely on entity-specific localization, making them ineffective against adversarial inputs without explicit entities. Second, these methods suffer from over-editing, where detoxified models reject legitimate queries, compromising overall performance. In this paper, we propose ToxEdit, a toxicity-aware knowledge editing approach that dynamically detects toxic activation patterns during forward propagation. It then routes computations through adaptive inter-layer pathways to mitigate toxicity effectively. This design ensures precise toxicity mitigation while preserving LLMs' general capabilities. To more accurately assess over-editing, we also enhance the SafeEdit benchmark by incorporating instruction-following evaluation tasks. Experimental results on multiple LLMs demonstrate that our ToxEdit outperforms previous state-of-the-art methods in both detoxification performance and safeguarding the general capabilities of LLMs.
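To make the mechanism described in the abstract concrete, here is a minimal sketch of the general idea: a probe scores hidden activations for toxicity during the forward pass, and flagged examples are routed through an alternative pathway while benign inputs pass through unchanged. This is not the authors' implementation; the probe design, the adapter-style detox pathway, the hard gating rule, and all module names and dimensions are assumptions made purely for illustration.

```python
# Illustrative sketch (assumed design, not the ToxEdit implementation):
# detect toxic activation patterns with a probe, then route through an
# alternative "detox" pathway only when the probe fires.

import torch
import torch.nn as nn


class ToxicityProbe(nn.Module):
    """Hypothetical linear probe that scores hidden states for toxicity."""

    def __init__(self, hidden_dim: int):
        super().__init__()
        self.scorer = nn.Linear(hidden_dim, 1)

    def forward(self, hidden: torch.Tensor) -> torch.Tensor:
        # Mean-pool over the sequence, then map to a toxicity probability.
        pooled = hidden.mean(dim=1)                # (batch, hidden_dim)
        return torch.sigmoid(self.scorer(pooled))  # (batch, 1)


class AdaptiveDetoxLayer(nn.Module):
    """Wraps a base block; routes through a detox pathway when flagged toxic."""

    def __init__(self, base_layer: nn.Module, hidden_dim: int, threshold: float = 0.5):
        super().__init__()
        self.base_layer = base_layer
        self.probe = ToxicityProbe(hidden_dim)
        # Hypothetical lightweight detox pathway (adapter-style bottleneck).
        self.detox_path = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim // 4),
            nn.GELU(),
            nn.Linear(hidden_dim // 4, hidden_dim),
        )
        self.threshold = threshold

    def forward(self, hidden: torch.Tensor) -> torch.Tensor:
        out = self.base_layer(hidden)
        tox_score = self.probe(out)                  # (batch, 1)
        gate = (tox_score > self.threshold).float()  # hard routing decision
        corrected = out + self.detox_path(out)       # alternative pathway
        # Only flagged examples take the detox pathway; benign inputs keep the
        # original computation, which is what preserves general capabilities.
        gate = gate.unsqueeze(-1)                    # (batch, 1, 1) for broadcasting
        return gate * corrected + (1.0 - gate) * out


if __name__ == "__main__":
    hidden_dim = 64
    block = nn.Linear(hidden_dim, hidden_dim)  # stand-in for a transformer block
    layer = AdaptiveDetoxLayer(block, hidden_dim)
    x = torch.randn(2, 10, hidden_dim)         # (batch, seq_len, hidden_dim)
    print(layer(x).shape)                      # torch.Size([2, 10, 64])
```

The per-example gating here is only one plausible reading of "adaptive inter-layer pathways"; the paper's actual routing may operate at a different granularity (e.g., per token or per layer).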

Published on arXiv: 2025-05-28T12:37:06Z