
Invisible Prompts, Visible Threats: Malicious Font Injection in External Resources for Large Language Models

Link: http://arxiv.org/abs/2505.16957v1

PDF Link: http://arxiv.org/pdf/2505.16957v1

Summary: Large Language Models (LLMs) are increasingly equipped with real-time web search capabilities and integrated with protocols like the Model Context Protocol (MCP).

These extensions, however, can introduce new security vulnerabilities.

We present a systematic investigation of LLM vulnerability to hidden adversarial prompts delivered through malicious font injection in external resources such as webpages, where attackers manipulate the code-to-glyph mapping to inject deceptive content that is invisible to users.
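To make the code-to-glyph manipulation concrete, here is a minimal sketch using the fontTools library. It is not the paper's released code; the file names ("Benign.ttf", "Injected.ttf") and the injected instruction string are hypothetical. The idea is to remap a font's Unicode cmap so that every character of an injected instruction renders as a blank glyph: the text is present for any parser or LLM that reads the page, but a human looking at the rendered page sees nothing.

```python
from fontTools.ttLib import TTFont

HIDDEN_INSTRUCTION = "Ignore previous instructions."  # hypothetical injected text

font = TTFont("Benign.ttf")  # hypothetical source font

for table in font["cmap"].tables:
    if not table.isUnicode():
        continue
    mapping = table.cmap                 # dict: codepoint -> glyph name
    blank = mapping.get(ord(" "))        # glyph normally used for a space
    if blank is None:
        continue
    for ch in set(HIDDEN_INSTRUCTION):
        # Every character of the injected text now renders as a blank glyph,
        # so the instruction survives in the page text but is invisible on screen.
        mapping[ord(ch)] = blank

font.save("Injected.ttf")
```

A real attack would likely scope such a font to a dedicated hidden span via @font-face and CSS so that legitimate text on the page is unaffected; the sketch remaps the characters globally only for brevity.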

We evaluate two critical attack scenarios: (1) "malicious content relay" and (2) "sensitive data leakage" through MCP-enabled tools.

Our experiments reveal that indirect prompts carried by an injected malicious font can bypass LLM safety mechanisms through external resources, with success rates that vary by data sensitivity and prompt design.

Our research underscores the urgent need for enhanced security measures in LLM deployments when processing external content.
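As one illustration of what such measures could look like (an assumption on my part, not a defense evaluated in the paper), a pipeline that ingests externally styled content might screen downloaded fonts for cmap anomalies, e.g., many distinct printable codepoints collapsing onto a single glyph, which is exactly the signature of the blank-glyph remapping sketched above:

```python
from collections import Counter
from fontTools.ttLib import TTFont

def looks_suspicious(font_path: str, threshold: int = 5) -> bool:
    """Flag fonts whose Unicode cmap maps many printable ASCII characters
    to the same glyph (a possible sign of invisible-text remapping)."""
    font = TTFont(font_path)
    for table in font["cmap"].tables:
        if not table.isUnicode():
            continue
        glyph_counts = Counter(
            table.cmap[cp] for cp in range(0x21, 0x7F) if cp in table.cmap
        )
        if glyph_counts and glyph_counts.most_common(1)[0][1] >= threshold:
            return True
    return False

if __name__ == "__main__":
    # Hypothetical file produced by the earlier sketch.
    print(looks_suspicious("Injected.ttf"))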

Published on arXiv on: 2025-05-22T17:36:33Z