Large Language Models as Span Annotators

¹Charles University, Prague, Czechia
²ETH Zurich, Switzerland
³Edinburgh Napier University, Scotland, United Kingdom
⁴trivago N.V., Düsseldorf, Germany
⁵TU Darmstadt, Germany
Direct assessment vs. span annotation

LLM-as-a-judge methods: direct assessment (the traditional approach) vs. span annotation (the focus of this paper).

Abstract

For high-quality texts, single-score metrics seldom provide actionable feedback. In contrast, span annotation—pointing out issues in the text by annotating their spans—can guide improvements and provide insights. Until recently, span annotation was limited to human annotators or fine-tuned encoder models. In this study, we automate span annotation with large language models (LLMs). We compare expert or skilled crowdworker annotators with open and proprietary LLMs on three tasks: data-to-text generation evaluation, machine translation evaluation, and propaganda detection in human-written texts. In our experiments, we show that LLMs as span annotators are straightforward to implement and notably more cost-efficient than human annotators. The LLMs achieve moderate agreement with skilled human annotators, in some scenarios comparable to the average agreement among the annotators themselves. Qualitative analysis shows that reasoning models outperform their instruction-tuned counterparts and provide more valid explanations for annotations. We release the dataset of more than 40k model and human annotations for further research.
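To make the span-annotation setup concrete, the sketch below shows one possible way to elicit span annotations from an LLM: the model is prompted to mark problematic spans as character offsets with a category and a short explanation, returned as JSON. This is a minimal illustration only, not the authors' actual prompt, error taxonomy, or pipeline; it assumes the OpenAI Python client and a hypothetical four-category taxonomy.

```python
# Minimal sketch of LLM-based span annotation (hypothetical prompt and taxonomy,
# not the exact pipeline from the paper). Assumes the OpenAI Python client and
# an OPENAI_API_KEY in the environment.
import json
from openai import OpenAI

CATEGORIES = ["incorrect fact", "omission", "fluency", "other"]  # illustrative only

PROMPT_TEMPLATE = """Annotate problematic spans in the text below.
Reply with a JSON list only. Each item must have:
  "start": character offset where the span begins,
  "end": character offset where the span ends,
  "category": one of {categories},
  "explanation": a short justification.

Text:
{text}
"""

def annotate_spans(text: str, model: str = "gpt-4o") -> list[dict]:
    """Ask the model to mark problematic spans and parse its JSON reply."""
    client = OpenAI()
    response = client.chat.completions.create(
        model=model,
        messages=[{
            "role": "user",
            "content": PROMPT_TEMPLATE.format(categories=CATEGORIES, text=text),
        }],
        temperature=0.0,
    )
    # Fall back to an empty annotation list if the output is not valid JSON.
    try:
        return json.loads(response.choices[0].message.content)
    except json.JSONDecodeError:
        return []

if __name__ == "__main__":
    for span in annotate_spans("The Eiffel Tower, built in 1989, is located in Berlin."):
        print(span)
```

The parsed spans can then be compared against human annotations, e.g., by computing overlap-based agreement between the predicted and reference character ranges.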

How to cite us

@misc{kasner2025large,
    title={Large Language Models as Span Annotators}, 
    author={Zdeněk Kasner and Vilém Zouhar and Patrícia Schmidtová and Ivan Kartáč and Kristýna Onderková and Ondřej Plátek and Dimitra Gkatzia and Saad Mahamood and Ondřej Dušek and Simone Balloccu},
    year={2025},
    eprint={2504.08697},
    archivePrefix={arXiv},
    primaryClass={cs.CL},
    url={https://arxiv.org/abs/2504.08697}, 
}