arxiv papers 1 min read

Exploiting the Index Gradients for Optimization-Based Jailbreaking on Large Language Models

Link: http://arxiv.org/abs/2412.08615v1

PDF Link: http://arxiv.org/pdf/2412.08615v1

Summary: Despite the advancements in training Large Language Models (LLMs) with alignment techniques to enhance the safety of generated content, these models remain susceptible to jailbreaking, an adversarial attack method that exposes security vulnerabilities in LLMs.

Notably, the Greedy Coordinate Gradient (GCG) method has demonstrated the ability to automatically generate adversarial suffixes that jailbreak state-of-the-art LLMs.

However, the optimization process involved in GCG is highly time-consuming, rendering the jailbreaking pipeline inefficient.
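
For readers unfamiliar with GCG: each iteration backpropagates the target loss into one-hot token variables for the adversarial suffix, uses those gradients to shortlist promising token substitutions, and then evaluates a batch of candidate swaps to keep the best one. Below is a minimal sketch of a single GCG step in PyTorch, assuming a Hugging Face-style causal LM interface (`model(...).logits`, `inputs_embeds`); the function name and slice arguments are illustrative, not the paper's code.

```python
import torch
import torch.nn.functional as F

def gcg_step(model, embed_weight, input_ids, suffix_slice, target_slice,
             k=256, batch=128):
    """One simplified GCG iteration (a sketch, not the authors' implementation)."""
    # One-hot encode the prompt so token choices become differentiable.
    one_hot = F.one_hot(input_ids, embed_weight.size(0)).float()
    one_hot.requires_grad_(True)
    embeds = one_hot @ embed_weight                      # (seq, d_model)
    logits = model(inputs_embeds=embeds.unsqueeze(0)).logits[0]
    loss = F.cross_entropy(
        logits[target_slice.start - 1 : target_slice.stop - 1],
        input_ids[target_slice])
    loss.backward()

    # Top-k most promising replacement tokens per suffix position,
    # ranked by how strongly each swap is predicted to lower the loss.
    grad = one_hot.grad[suffix_slice]                    # (suffix_len, vocab)
    top_tokens = (-grad).topk(k, dim=1).indices

    # Evaluate a batch of random single-token swaps; greedily keep the best.
    best_ids, best_loss = input_ids, float("inf")
    for _ in range(batch):
        pos = torch.randint(suffix_slice.start, suffix_slice.stop, (1,)).item()
        tok = top_tokens[pos - suffix_slice.start,
                         torch.randint(0, k, (1,)).item()]
        cand = input_ids.clone()
        cand[pos] = tok
        with torch.no_grad():
            out = model(cand.unsqueeze(0)).logits[0]
            cand_loss = F.cross_entropy(
                out[target_slice.start - 1 : target_slice.stop - 1],
                cand[target_slice]).item()
        if cand_loss < best_loss:
            best_ids, best_loss = cand, cand_loss
    return best_ids, best_loss
```

In a loop like this, the batched candidate-evaluation forward passes dominate the runtime, which is why reducing both per-step computation and total iterations (as MAGIC claims to do) matters.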

In this paper, we investigate the process of GCG and identify an issue of Indirect Effect, the key bottleneck of GCG optimization.

To this end, we propose the Model Attack Gradient Index GCG (MAGIC), which addresses the Indirect Effect by exploiting the gradient information of the suffix tokens, thereby accelerating the procedure with less computation and fewer iterations.
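
The summary does not spell out how MAGIC uses the index gradients, so the following is only a hedged sketch of one plausible reading: rank suffix positions by the gradient signal at their own index and restrict candidate evaluation to the most promising positions, which would cut per-step computation. All names here are hypothetical, and the paper's actual selection rule may differ.

```python
import torch

def magic_position_filter(one_hot_grad, suffix_slice, keep=5):
    # Hypothetical sketch of the index-gradient idea (based on the
    # abstract only; not the paper's actual algorithm).
    grad = one_hot_grad[suffix_slice]            # (suffix_len, vocab)
    # The most negative gradient entry at a position estimates how much
    # the best single-token swap there would lower the target loss.
    position_score = (-grad).amax(dim=1)         # predicted gain per index
    keep = min(keep, position_score.numel())
    # Absolute indices of the suffix positions most worth mutating this step.
    return position_score.topk(keep).indices + suffix_slice.start
```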

Our experiments on AdvBench show that MAGIC achieves up to a 1.5x speedup while maintaining Attack Success Rates (ASR) on par with or even higher than other baselines. MAGIC achieves an ASR of 74% on Llama-2 and an ASR of 54% when conducting transfer attacks on GPT-3.5.

Code is available at https://github.com/jiah-li/magic.

Published on arXiv on: 2024-12-11T18:37:56Z