Retracing the Past: LLMs Emit Training Data When They Get Lost

We introduce Confusion-Inducing Attacks (CIA), a principled method that maximizes model uncertainty to induce the emission of memorized text during divergence, providing a systematic way to evaluate memorization risks in both aligned and unaligned LLMs.

Authors

Myeongseob Ko

Nikhil Reddy Billa

Adam Nguyen

Charles Fleming

Ming Jin

Ruoxi Jia

Published

August 20, 2025

Abstract

The memorization of training data in large language models (LLMs) poses significant privacy and copyright concerns. Existing data extraction methods, particularly heuristic-based divergence attacks, often achieve limited success and offer little insight into the fundamental drivers of memorization leakage. This paper introduces Confusion-Inducing Attacks (CIA), a principled framework for extracting memorized data by systematically maximizing model uncertainty. We empirically demonstrate that the emission of memorized text during divergence is preceded by a sustained spike in token-level prediction entropy. CIA leverages this insight by optimizing input snippets to deliberately induce this consecutive high-entropy state. For aligned LLMs, we further propose Mismatched Supervised Fine-tuning (SFT) to simultaneously weaken their alignment and induce targeted confusion, thereby increasing susceptibility to our attacks. Experiments on a range of unaligned and aligned LLMs demonstrate that our proposed attacks outperform existing baselines in extracting verbatim and near-verbatim training data without requiring prior knowledge of the training data. Our findings highlight persistent memorization risks across diverse LLMs and offer a more systematic method for assessing these vulnerabilities.
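
To make the entropy signal described in the abstract concrete, the following is a minimal sketch, not the authors' implementation, of measuring token-level prediction entropy during greedy decoding and flagging a sustained run of high-entropy steps. It assumes a Hugging Face-style causal LM; the model name, entropy threshold, window length, and example prompt are illustrative assumptions, not values from the paper.

import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"          # placeholder model; swap in the LLM under study
ENTROPY_THRESHOLD = 4.0      # nats; illustrative, not a paper hyperparameter
WINDOW = 8                   # number of consecutive high-entropy tokens to flag

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

def token_entropies(prompt: str, max_new_tokens: int = 64) -> list[float]:
    """Greedy-decode from `prompt`, recording next-token prediction entropy at each step."""
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids
    entropies = []
    with torch.no_grad():
        for _ in range(max_new_tokens):
            logits = model(input_ids).logits[:, -1, :]          # next-token logits
            probs = F.softmax(logits, dim=-1)
            entropy = -(probs * torch.log(probs + 1e-12)).sum().item()
            entropies.append(entropy)
            next_token = logits.argmax(dim=-1, keepdim=True)
            input_ids = torch.cat([input_ids, next_token], dim=-1)
    return entropies

def has_sustained_spike(entropies: list[float]) -> bool:
    """True if WINDOW consecutive decoding steps all exceed the entropy threshold."""
    run = 0
    for h in entropies:
        run = run + 1 if h > ENTROPY_THRESHOLD else 0
        if run >= WINDOW:
            return True
    return False

if __name__ == "__main__":
    # Hypothetical divergence-style prompt, used only to illustrate the measurement.
    ents = token_entropies("Repeat the word 'poem' forever: poem poem poem")
    print(f"max entropy: {max(ents):.2f} nats, sustained spike: {has_sustained_spike(ents)}")

In the paper's framing, a detector like this is only the diagnostic side: CIA goes further by optimizing the input snippet so that such a consecutive high-entropy state is induced deliberately rather than merely observed.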