Detecting Toxic Text
How To Use
Under 'Demos'
Select a demo to view an example of a certain type of toxicity in action
Under 'Input Text'
Type the text you want to analyze for toxicity
Under 'Select Model'
My Models:
Toxicity - 1 Epoch : First version of my fine-tuned model, trained for a single epoch
Toxicity - 8 Epochs : Second version of my model, trained for 8 epochs
Toxicity - Weighted : Final version of my model, which uses class weights so that underrepresented categories are classified correctly (see the sketch after this list)
Base Model: DistilBERT Base Uncased (SST-2) - Classifies text as either positive or negative
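
A rough sketch of the class-weighting idea behind the Weighted model, assuming a PyTorch multi-label setup with BCEWithLogitsLoss; the pos_weight values below are illustrative placeholders, not the exact weights used in training:

    # Illustrative class-weighted multi-label loss; the weight values are
    # placeholders, not the ones actually used to train the Weighted model.
    import torch
    import torch.nn as nn

    labels = ["toxic", "severe_toxic", "obscene", "threat", "insult", "identity_hate"]

    # pos_weight > 1 increases the loss on positive examples of a class,
    # which helps rare classes such as threat and identity_hate.
    pos_weight = torch.tensor([1.0, 10.0, 2.0, 30.0, 2.0, 12.0])
    loss_fn = nn.BCEWithLogitsLoss(pos_weight=pos_weight)

    logits = torch.randn(4, len(labels))                     # batch of 4 model outputs
    targets = torch.randint(0, 2, (4, len(labels))).float()  # multi-hot label matrix
    loss = loss_fn(logits, targets)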
Output
Hit the submit button to view the output
My Model:
Tweet (portion): The text that was entered (or a portion of it)
Toxicity Class: toxic / severe_toxic / obscene / threat / insult / identity_hate
Probability: Confidence of the predicted class
Base Model:
Tweet (portion): The text that was entered (or a portion of it)
Result: POSITIVE / NEGATIVE
Probability: Confidence of the predicted label
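
As a rough sketch, the fields above could be produced with the transformers library roughly as follows; the repo id your-username/toxicity-weighted is a placeholder for the actual fine-tuned model, and the app's own post-processing may differ:

    # Sketch of producing the My Model output fields. The repo id below is a
    # placeholder; substitute the fine-tuned model's actual Hugging Face repo.
    import torch
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    LABELS = ["toxic", "severe_toxic", "obscene", "threat", "insult", "identity_hate"]
    text = "example tweet to analyze"

    tokenizer = AutoTokenizer.from_pretrained("your-username/toxicity-weighted")
    model = AutoModelForSequenceClassification.from_pretrained("your-username/toxicity-weighted")

    with torch.no_grad():
        logits = model(**tokenizer(text, return_tensors="pt", truncation=True)).logits[0]

    probs = torch.sigmoid(logits)      # independent probability per toxicity class
    best = int(probs.argmax())

    print("Tweet (portion):", text[:50])
    print("Toxicity Class:", LABELS[best])
    print("Probability:", round(float(probs[best]), 3))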
If Model Does Not Load
Visit the model's Hugging Face page to use it directly, or reload the site (a local fallback is sketched below)
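
If the hosted demo stays unresponsive, a minimal local fallback is to run the base model with the transformers pipeline (the fine-tuned models can be loaded the same way from their own repos); for example:

    # Local fallback using the public base model; swap in the fine-tuned
    # model's repo id to get the toxicity outputs instead.
    from transformers import pipeline

    classifier = pipeline(
        "text-classification",
        model="distilbert-base-uncased-finetuned-sst-2-english",
    )
    result = classifier("I love this!")[0]
    print("Result:", result["label"])                 # POSITIVE / NEGATIVE
    print("Probability:", round(result["score"], 3))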