Large language models (LLMs) such as ChatGPT and Gemini have significantly advanced natural language processing, enabling applications such as chatbots and automated content generation. However, these models can be exploited by malicious individuals who craft toxic prompts to elicit harmful or unethical responses. Such individuals often employ jailbreaking techniques to bypass safety mechanisms, highlighting the need for robust toxic prompt detection. Existing detection techniques, both black-box and white-box, face challenges related to the diversity of toxic prompts, scalability, and computational efficiency. In response, we propose ToxicDetector, a lightweight grey-box method designed to efficiently detect toxic prompts in LLMs. ToxicDetector leverages LLMs to create toxic concept prompts, uses embedding vectors to form feature vectors, and employs a Multi-Layer Perceptron (MLP) classifier for prompt classification. Our evaluation on multiple versions of the LLaMA models, on Gemma-2, and across multiple datasets demonstrates that ToxicDetector achieves a high accuracy of 96.39% and a low false positive rate of 2.00%, outperforming state-of-the-art methods. Moreover, its processing time of 0.0780 seconds per prompt makes it well suited for real-time applications. This combination of accuracy, efficiency, and scalability makes ToxicDetector a practical method for toxic prompt detection in LLMs.
Given a group of toxic samples, ToxicDetector extracts their concept prompts. A toxic concept prompt is a high-level abstraction of a specific toxic prompt: by stripping away case-specific details, it generalizes the original prompt so that it covers a broader range of toxic scenarios.
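To make the extraction step concrete, the following is a minimal sketch of how an instruction-tuned LLM can be asked to abstract a specific toxic prompt into a concept prompt. The instruction wording and the model checkpoint name are illustrative placeholders, not the exact ones used by ToxicDetector.

```python
# Hedged sketch: abstracting a concrete toxic prompt into a concept prompt with
# an instruction-tuned LLM. The instruction text and checkpoint are placeholders.
from transformers import pipeline

MODEL_NAME = "meta-llama/Meta-Llama-3-8B-Instruct"  # placeholder checkpoint
extractor = pipeline("text-generation", model=MODEL_NAME)

def extract_concept_prompt(toxic_prompt: str) -> str:
    """Rewrite a specific toxic prompt as a high-level concept prompt."""
    instruction = (
        "Summarize the following prompt as a short, high-level description of the "
        "harmful behavior it requests, without any case-specific details:\n"
        f"{toxic_prompt}\nConcept:"
    )
    output = extractor(instruction, max_new_tokens=64, do_sample=False)
    # The pipeline returns the prompt plus the generated continuation.
    return output[0]["generated_text"][len(instruction):].strip()
```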
Given the toxic concept prompts obtained in the previous step, we augment them into a more diverse set. Although extracting concept prompts already generalizes the specific toxic prompts, the set still needs to be broadened to cover a wider range of toxic scenarios. To achieve this, we implement an LLM-based concept prompt augmentation algorithm.
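One way to picture the augmentation step is repeated LLM paraphrasing of each seed concept prompt. The sketch below assumes the same text-generation pipeline as above and an illustrative paraphrase instruction; the actual algorithm may filter or steer the generated variants differently.

```python
# Hedged sketch of LLM-based concept prompt augmentation: each seed concept
# prompt is paraphrased several times to broaden coverage of toxic scenarios.
from transformers import pipeline

MODEL_NAME = "meta-llama/Meta-Llama-3-8B-Instruct"  # placeholder checkpoint
generator = pipeline("text-generation", model=MODEL_NAME)

def augment_concept_prompts(concepts: list[str], n_variants: int = 5) -> list[str]:
    """Expand each seed concept prompt into several related variants."""
    augmented = set(concepts)
    for concept in concepts:
        for _ in range(n_variants):
            instruction = (
                "Rephrase the following description of harmful intent so that it "
                "covers a related but distinct scenario:\n"
                f"{concept}\nRephrased:"
            )
            output = generator(instruction, max_new_tokens=64,
                               do_sample=True, temperature=0.9)
            variant = output[0]["generated_text"][len(instruction):].strip()
            if variant:
                augmented.add(variant)
    return sorted(augmented)
```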
With the toxic concept prompts collected, we extract features for classifying user input prompts. The features capture both the semantics of the user input prompt and its similarity to the toxic concept prompts. For semantics, the embedding of the last token at each layer serves as a straightforward representation of the user input prompt; given these embeddings, we compute the semantic similarity between the user input prompt and each toxic concept prompt.
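The following is a minimal sketch of this feature construction, assuming a Hugging Face causal LLM with hidden states exposed. Per-layer cosine similarity between last-token embeddings is shown as one concrete way to combine the semantic representation with the similarity signal; the exact feature composition in ToxicDetector may differ.

```python
# Hedged sketch: last-token embeddings per layer, compared with the
# corresponding embeddings of each toxic concept prompt.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "meta-llama/Meta-Llama-3-8B"  # placeholder checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME, output_hidden_states=True)
model.eval()

@torch.no_grad()
def last_token_embeddings(prompt: str) -> torch.Tensor:
    """Return a (num_layers, hidden_size) tensor of last-token hidden states."""
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model(**inputs)
    # outputs.hidden_states: one (1, seq_len, hidden_size) tensor per layer.
    return torch.stack([h[0, -1, :] for h in outputs.hidden_states])

def build_feature_vector(user_prompt: str, concept_prompts: list[str]) -> torch.Tensor:
    """Concatenate per-layer similarities between the prompt and each concept."""
    user_emb = last_token_embeddings(user_prompt)          # (L, H)
    features = []
    for concept in concept_prompts:
        concept_emb = last_token_embeddings(concept)       # (L, H)
        sims = torch.nn.functional.cosine_similarity(user_emb, concept_emb, dim=-1)
        features.append(sims)                              # (L,)
    return torch.cat(features)                             # (L * num_concepts,)
```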
The extracted features are then used to train a classifier that detects toxic user input prompts. At inference time, we compute the features for an incoming prompt as described in the previous steps and feed them to the classifier for decision-making. The detection process of ToxicDetector is swift and resource-efficient.
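As a sketch of this final stage, a small MLP can be trained on the feature vectors and queried once per incoming prompt. The hyperparameters below are illustrative rather than the ones used by ToxicDetector, and `build_feature_vector` refers to the sketch above.

```python
# Hedged sketch: MLP training and per-prompt detection on the feature vectors.
import numpy as np
from sklearn.neural_network import MLPClassifier

def train_classifier(features: np.ndarray, labels: np.ndarray) -> MLPClassifier:
    """Fit an MLP on (n_samples, n_features) feature vectors with 0/1 labels."""
    clf = MLPClassifier(hidden_layer_sizes=(256, 64), max_iter=500)
    clf.fit(features, labels)
    return clf

def is_toxic(clf: MLPClassifier, user_prompt: str, concept_prompts: list[str]) -> bool:
    """Flag a user input prompt using features from the sketch above."""
    feats = build_feature_vector(user_prompt, concept_prompts).numpy()
    return bool(clf.predict(feats.reshape(1, -1))[0])
```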