Call For Papers:
We invite submissions on topics that include, but are not limited to, the following:
Multimodal sentiment analysis in regional languages
Hateful video content detection in regional languages
Trolling and offensive content detection in memes
Multimodal data fusion and representation for hate speech detection in regional languages
Multimodal hate speech benchmark datasets and evaluations in regional languages
Multimodal fake news and stance detection in low-resource languages
Data collection and annotation methodologies for safer social media in low-resource languages
Content moderation strategies in regional languages
Cybersecurity and social media in regional languages
Multimodal approaches to understanding protests and emergency responses in social media
Multimodal analysis for detecting political deception
Multimodal cinematic content analysis for detecting gender bias
Emotion detection using multimodal datasets
Multimodal recommender systems
Analysis of online discourse on domestic violence
Multimodal fusion for cyberbullying detection
Multimodal depression detection
Depression detection in code-mixed online discourse
Suicidal ideation detection in code-mixed memes
Multimodal embeddings
Multimodal architectures
Text-to-image retrieval
Text-to-image generation
Text-to-video generation
Applications of multimodal LLMs
Multimodal data annotation
Multimodal emotion identification
Multimodal content generation
Disease identification/diagnosis from multimodal data
Multimodal AI for healthcare, agriculture, manufacturing, and entertainment