Link: http://arxiv.org/abs/2412.18171v2
PDF Link: http://arxiv.org/pdf/2412.18171v2
Summary: Large Language Models (LLMs) are increasingly being integrated into services such as ChatGPT to provide responses to user queries.
To mitigate potential harm and prevent misuse, there have been concerted efforts to align the LLMs with human values and legal compliance by incorporating various techniques, such as Reinforcement Learning from Human Feedback (RLHF), into the training of the LLMs.
However, recent research has exposed that even aligned LLMs are susceptible to adversarial manipulations known as Jailbreak Attacks.
To address this challenge, this paper proposes a method called Token Highlighter to inspect and mitigate the potential jailbreak threats in the user query.
Token Highlighter introduces a concept called Affirmation Loss to measure the LLM's willingness to answer the user query.
It then uses the gradient of the Affirmation Loss with respect to each token in the user query to locate the jailbreak-critical tokens.
Further, Token Highlighter applies our proposed Soft Removal technique to mitigate the jailbreak effects of the critical tokens by shrinking their token embeddings.
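The pipeline described above can be summarized in a short sketch. The snippet below is an illustrative reconstruction, not the authors' released code: the affirmation phrase, the highlighted-token fraction (top_frac), the shrink factor, and the helper name token_highlighter are all assumptions chosen for demonstration, using Hugging Face transformers and PyTorch.

```python
# Illustrative sketch of the Token Highlighter pipeline (not the authors' code).
# Hypothetical choices: the affirmation phrase, top_frac, and shrink factor.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-2-7b-chat-hf"  # one of the two evaluated LLMs
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()
for p in model.parameters():  # only gradients w.r.t. input embeddings are needed
    p.requires_grad_(False)

def token_highlighter(query,
                      affirmation="Sure, I'd like to help you with this.",
                      top_frac=0.25, shrink=0.5, max_new_tokens=256):
    device = next(model.parameters()).device
    q_ids = tokenizer(query, return_tensors="pt").input_ids.to(device)
    a_ids = tokenizer(affirmation, return_tensors="pt",
                      add_special_tokens=False).input_ids.to(device)

    embed = model.get_input_embeddings()
    q_emb = embed(q_ids).detach().clone().requires_grad_(True)  # query embeddings
    a_emb = embed(a_ids).detach()                                # affirmation embeddings

    # Affirmation Loss: negative log-likelihood of the affirmative response
    # conditioned on the user query (one forward/backward pass on the protected LLM).
    inputs_embeds = torch.cat([q_emb, a_emb], dim=1)
    labels = torch.cat([torch.full_like(q_ids, -100), a_ids], dim=1)  # score only the affirmation
    loss = model(inputs_embeds=inputs_embeds, labels=labels).loss
    loss.backward()

    # Locate jailbreak-critical tokens: rank query tokens by the gradient norm
    # of the Affirmation Loss w.r.t. their embeddings.
    grad_norms = q_emb.grad.norm(dim=-1).squeeze(0)
    k = max(1, int(top_frac * grad_norms.numel()))
    critical = grad_norms.topk(k).indices

    # Soft Removal: shrink (rather than delete) the embeddings of the critical
    # tokens, then generate the answer from the softened query.
    softened = q_emb.detach().clone()
    softened[0, critical] *= shrink
    with torch.no_grad():
        out = model.generate(inputs_embeds=softened, max_new_tokens=max_new_tokens)
    highlighted = tokenizer.convert_ids_to_tokens(q_ids[0][critical].tolist())
    return tokenizer.decode(out[0], skip_special_tokens=True), highlighted
```

Because a single forward/backward pass yields both the loss and the per-token gradients, this sketch queries the protected model only once before generation, matching the cost property noted below.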
Experimental results on two aligned LLMs (LLaMA-2 and Vicuna-V1.5) demonstrate that the proposed method can effectively defend against a variety of Jailbreak Attacks while maintaining competent performance on benign questions of the AlpacaEval benchmark.
In addition, Token Highlighter is a cost-effective and interpretable defense because it only needs to query the protected LLM once to compute the Affirmation Loss and can highlight the critical tokens upon refusal.
Published on arXiv on: 2024-12-24T05:10:02Z