Toxicity Analyzer
A Language Model for Tweets
The Toxicity Analyzer app classifies the toxicity of input text across six categories: Toxic, Insult, Obscene, Identity Hate, Threat, and Severe Toxic. The app uses a fine-tuned RoBERTa language model for this multi-label classification task. The model was fine-tuned on data scraped from tweets and is hosted on a HuggingFace Space. The source files and documentation for the app can be found on GitHub (see links below).
HuggingFace Space (App Demo): https://huggingface.co/spaces/rbbotadra/toxicity-analyzer-app
GitHub Repository (Source code + Documentation): https://github.com/RajeevBotadra/FinetuningLanguageModels
Video Demo: https://youtu.be/Mpfrlbr0-LU
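For illustration, below is a minimal sketch of what the app's classification step might look like using the transformers library, assuming the fine-tuned model were published as a standard multi-label sequence-classification checkpoint. The checkpoint name is hypothetical; the actual implementation lives in the GitHub repository linked above.

```python
# Minimal sketch of per-category toxicity scoring with a fine-tuned RoBERTa model.
# The checkpoint name below is hypothetical -- see the GitHub repo for the real code.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_ID = "rbbotadra/toxicity-analyzer"  # hypothetical checkpoint name
LABELS = ["Toxic", "Insult", "Obscene", "Identity Hate", "Threat", "Severe Toxic"]

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_ID)
model.eval()

def classify(text: str) -> dict[str, float]:
    """Return a probability for each of the six toxicity categories."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    # Multi-label setup: an independent sigmoid per category, since a text
    # can belong to several categories at once (e.g. Toxic and Insult),
    # rather than a softmax over mutually exclusive classes.
    probs = torch.sigmoid(logits).squeeze(0)
    return {label: float(p) for label, p in zip(LABELS, probs)}

print(classify("Have a great day!"))
```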
Thanks for your interest!