
Securing Educational LLMs: A Generalised Taxonomy of Attacks on LLMs and DREAD Risk Assessment

Link: http://arxiv.org/abs/2508.08629v1

PDF Link: http://arxiv.org/pdf/2508.08629v1

Summary: Due to perceptions of efficiency and significant productivity gains, various organisations, including in education, are adopting Large Language Models (LLMs) into their workflows.

Educator-facing, learner-facing, and institution-facing LLMs, collectively Educational Large Language Models (eLLMs), complement and enhance the effectiveness of teaching, learning, and academic operations.

However, their integration into an educational setting raises significant cybersecurity concerns.

A comprehensive landscape of contemporary attacks on LLMs and their impact on the educational environment is missing.

This study presents a generalised taxonomy of fifty attacks on LLMs, which are categorised as attacks targeting either models or their infrastructure.

The severity of these attacks is evaluated in the educational sector using the DREAD risk assessment framework.
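DREAD rates each threat on five criteria: Damage, Reproducibility, Exploitability, Affected users, and Discoverability, and combines them into an overall severity. The sketch below illustrates this conventional scoring scheme; the 0-10 scale, the averaging, the severity thresholds, and the example attack scores are illustrative assumptions, not the ratings reported in the paper.

```python
from dataclasses import dataclass, fields

@dataclass
class DreadScore:
    """One DREAD assessment: each criterion rated 0-10 (a common convention)."""
    damage: int
    reproducibility: int
    exploitability: int
    affected_users: int
    discoverability: int

    def overall(self) -> float:
        # Overall risk is the mean of the five criterion scores.
        values = [getattr(self, f.name) for f in fields(self)]
        return sum(values) / len(values)


def severity_band(score: float) -> str:
    """Map an overall score to a coarse severity band (illustrative thresholds)."""
    if score >= 8:
        return "critical"
    if score >= 5:
        return "high"
    return "moderate"


# Hypothetical example: the scores below are placeholders for illustration only.
direct_injection = DreadScore(damage=9, reproducibility=8, exploitability=9,
                              affected_users=8, discoverability=7)
overall = direct_injection.overall()
print(f"direct injection: {overall:.1f} -> {severity_band(overall)}")
```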

Our risk assessment indicates that token smuggling, adversarial prompts, direct injection, and multi-step jailbreak are critical attacks on eLLMs.

The proposed taxonomy, its application in the educational environment, and our risk assessment will help academic and industrial practitioners build resilient solutions that protect learners and institutions.

Published on arXiv on: 2025-08-12T04:34:12Z