With the expanding application of Large Language Models (LLMs) across various domains, it becomes imperative to comprehensively investigate their unforeseen behaviors and the consequent outcomes. In this study, we introduce and systematically explore the phenomenon of “glitch tokens”, anomalous tokens produced by established tokenizers that can potentially compromise the quality of a model's responses. Specifically, we experiment on seven popular LLMs that employ three distinct tokenizers, involving a total of 182,517 tokens. We present categorizations of the identified glitch tokens and of the symptoms LLMs exhibit when interacting with them. Based on our observation that glitch tokens tend to cluster in the embedding space, we propose GlitchHunter, a novel iterative clustering-based technique for efficient glitch token detection. Our evaluation shows that this approach notably outperforms three baseline methods on eight open-source LLMs. To the best of our knowledge, this is the first comprehensive study of glitch tokens, and our detection technique further provides valuable insights into mitigating tokenization-related errors in LLMs.
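To make the clustering intuition concrete, the sketch below groups a model's token embeddings with k-means and samples a few tokens from each cluster as candidates for behavioral testing. This is only an illustration of the underlying idea, not the actual GlitchHunter pipeline; the model name, the number of clusters, and the per-cluster sample size are arbitrary choices for the example.

```python
# Illustrative sketch: cluster token embeddings and surface candidate tokens.
# Not the GlitchHunter algorithm itself; model and hyperparameters are placeholders.
import numpy as np
from sklearn.cluster import KMeans
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder model; any open LLM with accessible embeddings works
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Extract the input embedding matrix (vocab_size x hidden_dim).
embeddings = model.get_input_embeddings().weight.detach().cpu().numpy()

# Cluster the embedding space; glitch tokens are observed to concentrate in
# a small number of clusters rather than spreading uniformly across the vocabulary.
kmeans = KMeans(n_clusters=50, n_init="auto", random_state=0).fit(embeddings)

# Sample a few tokens per cluster as candidates for downstream behavioral tests
# (e.g., asking the model to repeat the token and checking whether it fails).
for cluster_id in range(kmeans.n_clusters):
    token_ids = np.where(kmeans.labels_ == cluster_id)[0][:3]
    candidates = [tokenizer.decode([tid]) for tid in token_ids]
    print(f"cluster {cluster_id}: {candidates}")
```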
We present an illustrative example of the erratic behavior induced by the glitch token " TheNitrome" in Text-davinci-003, a model from OpenAI. The figure juxtaposes the model's responses when the prompt is subjected to a minimal change, namely the removal of a single space. For clarity, different tokens in the figure are shown in different colors.
We also provide an example of the glitch token 'stackexchange' in Llama2-7b-chat. The figure compares the model's responses when a common-sense question is presented with nothing appended, with a normal token appended, and with a glitch token appended; a minimal probing sketch is given below. For clarity, the appended token is shown in a different color.
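The following sketch reproduces this probing setup: the same common-sense question is sent to the model with nothing appended, with an ordinary token appended, and with the suspected glitch token appended, and the three responses are compared. The question, the choice of "normal" token, and the generation settings are illustrative assumptions, not the exact prompts used in the study.

```python
# Illustrative probe: compare responses to a question with different suffixes appended.
# The prompt text and generation settings are assumptions for this example.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-2-7b-chat-hf"  # requires access to the gated checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.float16, device_map="auto"
)

question = "How many legs does a spider have?"
suffixes = {
    "no suffix": "",
    "normal token": " apple",
    "glitch token": " stackexchange",
}

for label, suffix in suffixes.items():
    # Llama-2 chat instruction format, with the suffix appended to the question.
    prompt = f"[INST] {question}{suffix} [/INST]"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=64, do_sample=False)
    answer = tokenizer.decode(
        output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )
    print(f"--- {label} ---\n{answer}\n")
```

In a typical run, the responses for the first two prompts should agree, while the glitch-token variant is where the anomalous behavior (e.g., ignoring or misreading the question) would surface.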