We are committed to maintaining a safe, respectful, and inclusive environment for all users. This policy outlines how inappropriate content is identified, reviewed, and addressed on our platform.
To protect the community, content may be flagged in the following ways:
Automated Detection – Our systems use filters, AI models, and keyword monitoring to detect harmful, offensive, or prohibited material.
User Reports – Community members can report posts, comments, or messages they find inappropriate.
Moderator Review – Designated moderators may proactively review content for compliance with our Terms of Service and Community Guidelines.
We use a combination of automated tools and human review to ensure fairness and accuracy (a simplified sketch of this workflow follows the list):
Automated Moderation
Detects spam, explicit language, and other high-risk content in real time.
Automatically restricts or hides flagged content pending review.
Manual Moderation
Human moderators review flagged content for context and intent.
Decisions may include content removal, warnings, or account suspension (see Account Suspension Policy).
Appeals are reviewed by a different moderator to ensure impartiality.
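For illustration only, the sketch below shows how an automated pre-filter and a manual review step might fit together. All names, keyword lists, and thresholds are hypothetical placeholders and do not describe our production systems.

```python
# Illustrative sketch only: a hypothetical keyword-based pre-filter that hides
# flagged content pending human review. Real systems also use AI models and
# additional signals; every identifier here is a placeholder.
from dataclasses import dataclass, field

HIGH_RISK_KEYWORDS = {"spam-link", "prohibited-term"}  # placeholder terms

@dataclass
class Post:
    post_id: str
    text: str
    hidden: bool = False                    # restricted from public view
    flags: list[str] = field(default_factory=list)

def automated_check(post: Post) -> None:
    """Flag and hide posts that match simple keyword rules, pending review."""
    if any(term in post.text.lower() for term in HIGH_RISK_KEYWORDS):
        post.flags.append("automated:keyword")
        post.hidden = True                  # hidden until a moderator reviews it

def manual_review(post: Post, violates_policy: bool) -> str:
    """A human moderator confirms or overturns the automated decision."""
    if violates_policy:
        return "remove_content"             # may also trigger warning or suspension
    post.hidden = False                     # restore content that was flagged in error
    return "restore_content"
```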
Users play an active role in maintaining community standards. We provide the following reporting tools (illustrated in the sketch after this list):
Report Buttons – Available on posts, comments, and profiles for quick flagging.
Multiple Report Categories – Users can specify reasons (e.g., spam, harassment, hate speech, nudity, or illegal content).
Anonymous Reporting – Protects the privacy of the reporting user.
Follow-Up Notifications – Users may be notified when action is taken on their reports.
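As an illustration, a report submitted through these tools might be represented roughly as follows. The field names and category values are hypothetical and are shown only to clarify the options described above, not to document an actual API.

```python
# Illustrative sketch only: a hypothetical report payload reflecting the
# categories, anonymity, and follow-up notifications described above.
from dataclasses import dataclass
from enum import Enum

class ReportCategory(Enum):
    SPAM = "spam"
    HARASSMENT = "harassment"
    HATE_SPEECH = "hate_speech"
    NUDITY = "nudity"
    ILLEGAL_CONTENT = "illegal_content"

@dataclass
class UserReport:
    content_id: str
    category: ReportCategory
    details: str = ""
    anonymous: bool = True         # reporter identity is not shared with the reported user
    notify_on_action: bool = True  # reporter may be notified when action is taken

# Example submission against a hypothetical post ID.
report = UserReport(
    content_id="post_123",
    category=ReportCategory.HARASSMENT,
    details="Repeated targeted insults in comments",
)
```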
Depending on the severity of the violation, actions may include the following (see the illustrative mapping after this list):
Content removal.
Temporary restrictions on posting or messaging.
Account suspension or permanent ban (see Suspension & Ban Policy).
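Purely as an illustration, the relationship between severity and enforcement action could be sketched as a simple lookup. The severity labels and action names below are hypothetical; real decisions also depend on moderator judgment and the context of the violation.

```python
# Illustrative sketch only: a hypothetical baseline mapping from violation
# severity to enforcement action. Labels and actions are placeholders.
ENFORCEMENT_ACTIONS = {
    "low": "remove_content",            # content removal only
    "medium": "temporary_restriction",  # limits on posting or messaging
    "high": "account_suspension",       # see Suspension & Ban Policy
    "severe": "permanent_ban",
}

def enforcement_for(severity: str) -> str:
    """Return the baseline action for a given severity; moderators may escalate."""
    return ENFORCEMENT_ACTIONS.get(severity, "manual_review")
```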