Call For Papers:
We invite submissions on topics that include, but are not limited to, the following:
A. Multimodal Safety, Hate Speech & Social Media
Multimodal sentiment analysis in regional languages
Hate content video detection in regional languages
Detection of trolling, offensive content, and cyberbullying in memes and social media discourse
Multimodal data fusion and representation for hate speech detection in regional languages
Benchmark datasets and evaluation for multimodal hate speech in regional languages
Fake news, stance, and political deception detection in low-resource multimodal settings
Data collection and annotation methodologies for safer social media in low-resourced languages
Content moderation and cybersecurity strategies in regional/low-resource languages
Domestic violence and harmful discourse detection on online platforms
Understanding protests and emergency response through multimodal social media analysis
B. Emotion & Mental Health
Emotion recognition using multimodal datasets
Multimodal depression detection in online discourse
Suicidal intent detection in code-mixed memes and discourse
Multimodal approaches for stress and mental health analysis
C. Bias, Fairness & Inclusivity
Multimodal cinematic content analysis for gender bias detection
Fairness, bias, and inclusivity in multimodal low-resource research
D. Multimodal Models & Methods
Multimodal embeddings and architectures
Multimodal dataset creation and annotation
Feature extraction and feature fusion techniques
Transfer learning and cross-modality adaptation
Generative models for multimodal data augmentation
Advanced prompting techniques for multimodal inference in low-resource languages
Handling hallucination and improving context-aware multimodal generation
E. Retrieval, Generation & Applications
Text-to-image retrieval and generation
Text-to-video generation
Text-to-sign language and video-based sign language translation
Multimodal recommender systems
Multimodal question answering and summarization in low-resource languages
Multimodal content generation
Pretraining and fine-tuning multimodal models for low-resource contexts
Resource-efficient and scalable multimodal systems
Multimodal machine translation for low-resource languages
Speech-to-text and speech-to-speech multimodal systems
Evaluation metrics for multimodal systems in low-resource settings
Multimodal approaches for endangered language preservation
Visual and audio-text alignment for indigenous or heritage language resources
Disease identification and diagnosis from multimodal data
Multimodal AI applications in healthcare, agriculture, manufacturing, and entertainment