IEEE BigMM 2020, Grand Challenge (BMGC)

CALL FOR PARTICIPATION

MOTIVATION

IEEE BigMM is hosting a grand data challenge to bring together novel interdisciplinary research focused on improving the social space for members of the community. The focus area for this year is improving social AI for members of marginalized communities. To this end, we invite members of the scientific community to work on a relevant dataset focused on multimodal aspects of the MeToo social movement. The MeToo movement has been described by many feminists, politicians, and activists as an essential development against sexual misconduct. It is one of the prime examples of successful digital activism facilitated by social media platforms. The movement generated conversations on stigmatized issues like sexual abuse and violence, which were rarely discussed before because of shame or fear of retaliation. This creates an opportunity for researchers to understand how people express their opinions in an informal social setting. We invite novel research that provides a deeper understanding of the different facets surrounding the movement. Proposed ideas include, but are not limited to: building computational models for understanding the various stances associated with the movement, and theories/insights into information and language abuse involving members of marginalized sections.

DATASET DETAILS

The dataset contains 9,973 tweets manually annotated for the following linguistic aspects:

  • Relevance: This category uses image and text labels to identify whether a tweet is related to the MeToo movement. Relevant tweets include personal opinions, instances of abuse, support for victims of the campaign, or links to news articles. Relevance has two labels, Text Only Informative and Image Only Informative, indicating whether the text or the image of a given tweet is informative.

  • Stance: Stance detection helps to understand public opinion about a topic and also supports downstream applications. Stance labels fall into three categories: Support, Opposition, and Neither. Support includes tweets that expressed appreciation of the MeToo movement, shared resources for victims of sexual abuse, or offered empathy towards the victims.

  • Hate Speech: Detection of hate speech has been gaining interest in linguistic research lately. For a given tweet, hate speech is labeled as either Directed Hate or Generalized Hate.

  • Sarcasm: Sarcasm detection is of interest in areas like sentiment analysis and affective computing. A tweet is marked as sarcastic if it refers to an individual involved, an entity, or the movement itself in a humorous overtone.

  • Dialogue Acts: A dialogue act is defined as a function of the speaker's utterance during the conversation. The dataset includes dialogue acts that are specific to the MeToo movement: Allegation, Refutation, and Justification.
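For orientation, the five aspects and their labels described above can be collected into a simple structure. This is a hypothetical sketch for illustration only; the exact label/column names in the released files may differ.

```python
# Hypothetical summary of the annotation schema described above.
# The exact label/column names in the released data may differ.
LABELS = {
    "Relevance": ["Text Only Informative", "Image Only Informative"],
    "Stance": ["Support", "Opposition", "Neither"],
    "Hate Speech": ["Directed Hate", "Generalized Hate"],
    "Sarcasm": ["Sarcastic"],
    "Dialogue Acts": ["Allegation", "Refutation", "Justification"],
}

# Total number of binary prediction columns under this sketch.
n_columns = sum(len(labels) for labels in LABELS.values())
```

Viewed this way, each tweet maps to a vector of binary indicators, one per label, which is the shape a multi-label framework would predict.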

The curated dataset is the result of annotations by domain experts over three months, from October 2018 to December 2018. The dataset was selected based on the following criteria:

  • The dataset addresses relevant problems affecting the current social media space.

  • The dataset has the ability to provide interesting analysis pertaining to multiple facets of a social movement.

  • The dataset provides multiple labels spanning geographical regions and includes more data points than its contemporaries.

TASK DESCRIPTION

In this task, participants are expected to develop multi-task multimodal frameworks that predict the labels corresponding to a given tweet. The following are important details regarding the task:

  • The competition will be hosted on Kaggle, with public leaderboard submissions. Interested teams can submit the predictions of their models.

  • A maximum of 15 submissions can be made per day. There are no restrictions on the number of participants in a team.

  • The metric for evaluation on the public leaderboard will be the mean column-wise ROC AUC. In other words, the score is the average of the individual AUCs of the predicted columns, one per label.

  • Data will be distributed according to the timeline in the Important Dates section. Performance on the test data will be used to score the submissions and monitor progress.

  • Teams are free to pre-process the data as they see fit; however, all such details regarding data processing and cleaning must be explicitly explained in the system description papers to be submitted to the IEEE BigMM proceedings.

  • At the end of the challenge hosted on Kaggle, top submissions will be invited to submit system description papers describing their approach, model details, additional evaluation metrics used, and limitations of the proposed models.

  • The file submission format will be updated in the Data section of the competition hosted on Kaggle.

Contest link: https://www.kaggle.com/t/feb8829494f14aa7b0b1dff4f8488854
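The leaderboard metric described above, mean column-wise ROC AUC, can be sketched as follows. The label values here are made-up toy data, not drawn from the challenge dataset, and scikit-learn is assumed only as one convenient way to compute per-column AUC.

```python
# Sketch of the leaderboard metric: mean column-wise ROC AUC.
# y_true / y_score are toy values for illustration only.
import numpy as np
from sklearn.metrics import roc_auc_score

y_true = np.array([[1, 0, 0],
                   [0, 1, 1],
                   [1, 1, 0],
                   [0, 0, 1]])
y_score = np.array([[0.9, 0.2, 0.5],
                    [0.3, 0.8, 0.7],
                    [0.6, 0.9, 0.6],
                    [0.2, 0.1, 0.4]])

# AUC is computed per label column, then averaged across columns.
column_aucs = [roc_auc_score(y_true[:, i], y_score[:, i])
               for i in range(y_true.shape[1])]
mean_auc = float(np.mean(column_aucs))
```

Note that since AUC is computed from the ranking of scores within each column, predictions do not need to be thresholded into hard 0/1 labels for the leaderboard.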

TERMS AND CONDITIONS

By agreeing to participate in this grand challenge, you agree to the following terms and conditions. If any of the conditions is a matter of concern, an email can be sent to the members of the organizing committee.

  • The dataset should be used for scientific or research purposes only. Any other use of the dataset is strictly prohibited.

  • The dataset should not be redistributed or shared, in whole or in part, with any third-party organization. Interested parties should be redirected to this link: https://www.kaggle.com/t/feb8829494f14aa7b0b1dff4f8488854

  • The organizers make no warranties regarding the dataset provided, including but not limited to its correctness or completeness. The members of the organizing committee cannot be held accountable for the usage of the dataset.

  • By submitting results to this competition, you consent to the public release of your scores at this website and at IEEE BigMM workshop and in the associated proceedings, at the task organizers' discretion. Scores may include but are not limited to, automatic and manual quantitative judgements, qualitative judgements, and such other metrics as the task organizers see fit. You accept that the ultimate decision of metric choice and score value is that of the task organizers.

  • You further agree that the task organizers are under no obligation to release scores and that scores may be withheld if it is the task organizers' judgement that the submission was incomplete, erroneous, deceptive, or violated the letter or spirit of the competition's rules. Inclusion of a submission's scores is not an endorsement of a team or individual's submission, system, or science.

  • Team constitution (members of a team) cannot be changed after the evaluation period has begun. Once the competition is over, we will release the gold labels and you will be able to determine results on various system variants you may have developed. We encourage you to report results on all of your systems (or system variants) in the system-description paper. However, we will ask you to clearly indicate the result of your official submission.


ETHICAL CONSIDERATIONS

Since the dataset and any analysis of it may involve opinions on socially stigmatized issues or self-reports of distressing incidents, it is important to examine the social impact of this exercise, the ethics of the individuals concerned, and its limitations.

  • The dataset open-sources posts from users who may have undergone instances of sexual abuse in the past. As survivors recount their horrific episodes of sexual harassment, it becomes imperative to provide them with therapeutic care as a safeguard against mental harm.

  • Any analysis or learnings from the dataset cannot be used as-is for any direct social intervention but instead could be used to assist already existing human knowledge.

  • Since the MeToo social movement acted as a catalyst for social policy changes benefiting members of marginalized communities, it is essential to keep in mind that any work undertaken on the dataset should try to minimize bias against members of minority groups, which might get amplified in cases of sudden outbursts of public reaction over sensitive discussions.

Top submissions on the leaderboard are requested to cite the following papers in their manuscripts with regard to data collection and dataset information.


@inproceedings{ghosh-chowdhury-etal-2019-youtoo,
  title = "{\#}{Y}ou{T}oo? Detection of Personal Recollections of Sexual Harassment on Social Media",
  author = "Ghosh Chowdhury, Arijit and Sawhney, Ramit and Shah, Rajiv Ratn and Mahata, Debanjan",
  booktitle = "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
  month = jul,
  year = "2019",
  address = "Florence, Italy",
  publisher = "Association for Computational Linguistics",
  url = "https://www.aclweb.org/anthology/P19-1241",
  doi = "10.18653/v1/P19-1241",
  pages = "2527--2537",
  abstract = "The availability of large-scale online social data, coupled with computational methods can help us answer fundamental questions relating to our social lives, particularly our health and well-being. The {\#}MeToo trend has led to people talking about personal experiences of harassment more openly. This work attempts to aggregate such experiences of sexual abuse to facilitate a better understanding of social media constructs and to bring about social change. It has been found that disclosure of abuse has positive psychological impacts. Hence, we contend that such information can be leveraged to create better campaigns for social change by analyzing how users react to these stories and to obtain a better insight into the consequences of sexual abuse. We use a three part Twitter-Specific Social Media Language Model to segregate personal recollections of sexual harassment from Twitter posts. An extensive comparison with state-of-the-art generic and specific models along with a detailed error analysis explores the merit of our proposed model.",
}



@inproceedings{ghosh-chowdhury-etal-2019-speak,
  title = "Speak up, Fight Back! Detection of Social Media Disclosures of Sexual Harassment",
  author = "Ghosh Chowdhury, Arijit and Sawhney, Ramit and Mathur, Puneet and Mahata, Debanjan and Ratn Shah, Rajiv",
  booktitle = "Proceedings of the 2019 Conference of the North {A}merican Chapter of the Association for Computational Linguistics: Student Research Workshop",
  month = jun,
  year = "2019",
  address = "Minneapolis, Minnesota",
  publisher = "Association for Computational Linguistics",
  url = "https://www.aclweb.org/anthology/N19-3018",
  doi = "10.18653/v1/N19-3018",
  pages = "136--146",
  abstract = "The {\#}MeToo movement is an ongoing prevalent phenomenon on social media aiming to demonstrate the frequency and widespread of sexual harassment by providing a platform to speak narrate personal experiences of such harassment. The aggregation and analysis of such disclosures pave the way to development of technology-based prevention of sexual harassment. We contend that the lack of specificity in generic sentence classification models may not be the best way to tackle text subtleties that intrinsically prevail in a classification task as complex as identifying disclosures of sexual harassment. We propose the Disclosure Language Model, a three part ULMFiT architecture, consisting of a Language model, a Medium-Specific (Twitter) model and a Task-Specific classifier to tackle this problem and create a manually annotated real-world dataset to test our technique on this, to show that using a Discourse Language Model often yields better classification performance over (i) Generic deep learning based sentence classification models (ii) existing models that rely on handcrafted stylistic features. An extensive comparison with state-of-the-art generic and specific models along with a detailed error analysis presents the case for our proposed methodology.",
}


@article{Gautam_Mathur_Gosangi_Mahata_Sawhney_Shah_2020,
  title = {#MeTooMA: Multi-Aspect Annotations of Tweets Related to the MeToo Movement},
  author = {Gautam, Akash and Mathur, Puneet and Gosangi, Rakesh and Mahata, Debanjan and Sawhney, Ramit and Shah, Rajiv Ratn},
  journal = {Proceedings of the International AAAI Conference on Web and Social Media},
  volume = {14},
  number = {1},
  year = {2020},
  month = {May},
  pages = {209--216},
  url = {https://aaai.org/ojs/index.php/ICWSM/article/view/7292},
  abstract = {In this paper, we present a dataset containing 9,973 tweets related to the MeToo movement that were manually annotated for five different linguistic aspects: relevance, stance, hate speech, sarcasm, and dialogue acts. We present a detailed account of the data collection and annotation processes. The annotations have a very high inter-annotator agreement (0.79 to 0.93 k-alpha) due to the domain expertise of the annotators and clear annotation instructions. We analyze the data in terms of geographical distribution, label correlations, and keywords. Lastly, we present some potential use cases of this dataset. We expect this dataset would be of great interest to psycholinguists, socio-linguists, and computational linguists to study the discursive space of digitally mobilized social movements on sensitive issues like sexual harassment.},
}




SUBMISSION INSTRUCTIONS

At the end of the competition hosted on Kaggle for the grand challenge, top submissions, based on leaderboard rankings, will be invited to submit system description/model papers describing the proposed methodology, improvements, and limitations. Submissions should be made via EasyChair and must follow the guidelines of IEEE BigMM 2020. All submissions must conform to the two-column IEEE format. The maximum length of papers to be considered for evaluation is 4 pages, excluding references. Detailed guidelines about supplemental material submission and LaTeX templates can be found at http://bigmm2020.org/index.php/authors/submission-instructions


In addition to the standard metric reported on the Kaggle competition leaderboard, authors are strongly encouraged to report additional metrics to strengthen the validity of the proposed models. The reported evaluation metrics should cover all aspects of the multi-label classification setup.
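As one possibility for such additional metrics, macro/micro-averaged F1 and Hamming loss are common complements to AUC in multi-label setups. The toy labels and the 0.5 decision threshold below are assumptions for illustration, not challenge requirements.

```python
# Illustrative additional metrics for a multi-label setup.
# Toy data and the 0.5 threshold are assumptions, not official choices.
import numpy as np
from sklearn.metrics import f1_score, hamming_loss

y_true = np.array([[1, 0, 1],
                   [0, 1, 0],
                   [1, 1, 0]])
y_score = np.array([[0.8, 0.4, 0.7],
                    [0.2, 0.9, 0.3],
                    [0.4, 0.7, 0.2]])
y_pred = (y_score >= 0.5).astype(int)  # hard labels via a simple threshold

macro_f1 = f1_score(y_true, y_pred, average="macro")  # per-label F1, averaged
micro_f1 = f1_score(y_true, y_pred, average="micro")  # pooled over all labels
h_loss = hamming_loss(y_true, y_pred)                 # fraction of wrong labels
```

Reporting both macro and micro averages is useful here because macro averaging weights rare labels (e.g. sparse dialogue acts) equally with frequent ones, while micro averaging reflects overall label-level accuracy.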

IMPORTANT DATES

Competition Timeframe:

  • April 2, 2020: Competition starts on Kaggle; public release of training data along with guidelines.

  • June 30, 2020: Public release of test data along with demo submission file.

  • July 30, 2020: Deadline for making submissions to the leaderboard.

Conference Preparation:

  • August 1, 2020: Top submissions will be invited to open-source their code and models for evaluation.

  • August 5, 2020: Invitations sent to top submissions for submitting system description papers to the proceedings of IEEE BigMM 2020.

  • TBA: Deadline for submitting the system description papers.


MEMBERS OF THE ORGANIZING COMMITTEE

  • Ramit Sawhney - ramits.co@nsit.net.in

  • Rajiv Ratn Shah - rajivratn@iiitd.ac.in

  • Cornelia Caragea - cornelia@uic.edu

  • Rahul Katarya - rahulkatarya@dtu.ac.in

  • Anil Singh Parihar - anil@dtu.ac.in


MEMBERS OF THE PROGRAM COMMITTEE