Link: http://arxiv.org/abs/2501.16727v1
PDF Link: http://arxiv.org/pdf/2501.16727v1
Summary: Safety alignment mechanism are essential for preventing large language models(LLMs) from generating harmful information or unethical content.
However,cleverly crafted prompts can bypass these safety measures without accessing themodel's internal parameters, a phenomenon known as black-box jailbreak.
Existing heuristic black-box attack methods, such as genetic algorithms, sufferfrom limited effectiveness due to their inherent randomness, while recentreinforcement learning (RL) based methods often lack robust and informativereward signals.
To address these challenges, we propose a novel black-boxjailbreak method leveraging RL, which optimizes prompt generation by analyzingthe embedding proximity between benign and malicious prompts.
This approachensures that the rewritten prompts closely align with the intent of theoriginal prompts while enhancing the attack's effectiveness.
Furthermore, weintroduce a comprehensive jailbreak evaluation framework incorporatingkeywords, intent matching, and answer validation to provide a more rigorous andholistic assessment of jailbreak success.
Experimental results show thesuperiority of our approach, achieving state-of-the-art (SOTA) performance onseveral prominent open and closed-source LLMs, including Qwen2.
5-7B-Instruct,Llama3.
1-8B-Instruct, and GPT-4o-0806.
Our method sets a new benchmark injailbreak attack effectiveness, highlighting potential vulnerabilities in LLMs.
The codebase for this work is available athttps://github.
com/Aegis1863/xJailbreak.
Published on arXiv on: 2025-01-28T06:07:58Z