arxiv papers

SoK: Understanding Vulnerabilities in the Large Language Model Supply Chain

Link: http://arxiv.org/abs/2502.12497v1

PDF Link: http://arxiv.org/pdf/2502.12497v1

Summary: Large Language Models (LLMs) are transforming artificial intelligence, driving advancements in natural language understanding, text generation, and autonomous systems.

However, the increasing complexity of their development and deployment introduces significant security challenges, particularly within the LLM supply chain.

Existing research primarily focuses on content safety, such as adversarial attacks, jailbreaking, and backdoor attacks, while overlooking security vulnerabilities in the underlying software systems.

To address this gap, this study systematically analyzes 529 vulnerabilities reported across 75 prominent projects spanning 13 lifecycle stages.

The findings show that vulnerabilities are concentrated in the application (50.3%) and model (42.7%) layers, with improper resource control (45.7%) and improper neutralization (25.1%) identified as the leading root causes.

Additionally, while 56.7% of the vulnerabilities have available fixes, 8% of these patches are ineffective, resulting in recurring vulnerabilities.

This study underscores the challenges of securing the LLM ecosystem and provides actionable insights to guide future research and mitigation strategies.

Published on arXiv on: 2025-02-18T03:22:38Z