
Evil twins are not that evil: Qualitative insights into machine-generated prompts

Link: http://arxiv.org/abs/2412.08127v1

PDF Link: http://arxiv.org/pdf/2412.08127v1

Summary: It has been widely observed that language models (LMs) respond in predictable ways to algorithmically generated prompts that are seemingly unintelligible.

This is both a sign that we lack a full understanding of how LMs work, and a practical challenge, because opaqueness can be exploited for harmful uses of LMs, such as jailbreaking.

We present the first thorough analysis of opaque machine-generated prompts, or autoprompts, pertaining to 3 LMs of different sizes and families.

We find that machine-generated prompts are characterized by a last token that is often intelligible and strongly affects the generation.

A small but consistent proportion of the previous tokens are fillers that probably appear in the prompt as a by-product of the fact that the optimization process fixes the number of tokens.

The remaining tokens tend to have at least a loose semantic relation with the generation, although they do not engage in well-formed syntactic relations with it.

We find moreover that some of the ablations we applied to machine-generated prompts can also be applied to natural language sequences, leading to similar behavior, suggesting that autoprompts are a direct consequence of the way in which LMs process linguistic inputs in general.
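One of the ablations described above can be sketched in a few lines. This is a hedged illustration, not the paper's code: it probes for candidate "filler" tokens by deleting each token in turn and checking whether the continuation changes. The `generate` callable and the toy rule-based generator are hypothetical stand-ins for a real LM.

```python
# Hedged sketch (not the paper's implementation): a token-ablation probe
# that flags candidate "filler" tokens -- tokens whose removal leaves the
# model's continuation unchanged. `generate` is any callable mapping a
# token list to a continuation string; a toy generator stands in for an LM.

from typing import Callable, List

def filler_candidates(tokens: List[str],
                      generate: Callable[[List[str]], str]) -> List[int]:
    """Return indices of tokens whose ablation does not change the output."""
    baseline = generate(tokens)
    fillers = []
    for i in range(len(tokens)):
        ablated = tokens[:i] + tokens[i + 1:]
        if generate(ablated) == baseline:
            fillers.append(i)
    return fillers

# Toy generator mimicking the paper's observation: only the last token and
# one loosely related token drive the generation; everything else is inert.
def toy_generate(tokens: List[str]) -> str:
    if not tokens:
        return ""
    return "Paris" if tokens[-1] == "capital" and "france" in tokens else "?"

prompt = ["xx", "france", "zz", "capital"]
print(filler_candidates(prompt, toy_generate))  # → [0, 2]
```

With a real LM, `generate` would wrap the model's greedy decoding, and an exact-match check could be relaxed to a similarity threshold over continuations.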

Published on arXiv on: 2024-12-11T06:22:44Z